June 19, 1997
In 1994 James Billington, Librarian of Congress, identified as one of the Library's specific goals the digitization of five million images by the year 2000, using approximately two hundred of the Americana collections at the Library as an initial pool for conversion to digital format. This goal was one of several new initiatives falling under the Library's mission "to make its resources available and useful to the Congress and the American people and to sustain and preserve a universal collection of knowledge and creativity for future generations." Dr. Billington placed overall responsibility for accomplishing this goal with the National Digital Library Program (NDLP) under the leadership of Laura Campbell.
In 1995 IBM approached the Library with a proposal to donate significant levels of hardware, software and staff resources to conduct a joint digital library project with the Library. Late in the third quarter of that year an agreement was signed between IBM and the Library.
The agreement identified the roles and responsibilities of the parties (the Library, IBM, and Case Western Reserve University). Responsibility for IBM project direction was given to Rebecca Spohn. The IBM Account Executive for the Library, Mary Henry, was responsible for local, ongoing support of this project. Herbert Becker, Director of ITS, held primary responsibility for the project within the Library.
Prior to and following formal approval of the agreement the parties held several meetings to identify project objectives and discuss logistical plans for conducting the effort. During this process some sharpening of focus occurred, and in late 1995 the parties began the actual work.
2. Project Objectives
The specific objective of the project, as outlined in the initial statement of work dated July 20, 1995 under an Agreement for Joint Study, was "to digitize selections from the Federal Theatre Project (FTP) special collection and to make public domain images from this collection available on the Internet". This objective seems straightforward and uncomplicated; however, the parties acknowledged there were numerous underlying issues that would present interesting technological, design, and intellectual challenges to both IBM and the Library. Although the concept of a digital library is not new, a standard, scalable, commercially available product to support this type of system does not exist. Moreover, the internal procedures for implementing and operating such a system in the Library did not exist.
2.1 The Library
First, the Library wished to investigate the feasibility of using IBM Digital Library software products to construct a standard, scalable system for the capture, storage and delivery of digital collections. Second, the Library wanted to work with IBM to identify and resolve technical, intellectual and procedural problems associated with the development and operation of a digital library system. Third, the Library wished to utilize this project, and especially the scanning hardware included by IBM in its donation, to digitize a portion of the collection known as The Federal Theatre Project. The Library was simultaneously initiating a project to digitize manuscript pages from the Federal Theatre Project (FTP) collection through another process, and was also interested in options for combining images and/or data from the two projects.
IBM is a recognized industry leader in the research, development, and implementation of digital library technology. To maintain its leadership in this industry, IBM relies on developing and maintaining partnerships with clients who are themselves leaders in their industries. Through its participation in a digital library project with the Library, IBM intended to gather digital library requirements, to enhance its strategy for the development of technology, and to gain additional expertise in this complex arena.
IBM wanted to evaluate the performance of various components of the IBM solution within a "real life" environment. Although IBM introduced Digital Library in March 1995, various components that the Library would be implementing, e.g., the AIX Object Server, were not yet commercially available at the time of this project's inception.
3. The Collection
In 1935, Harry Hopkins, Director of the U. S. Work Projects Administration, founded the Federal Theatre Project (FTP). The FTP was designed to reemploy theater workers on public relief during the Great Depression and to bring theater to people in America who had never before seen live theatrical performances.
The FTP collection (1932-1943) consists of various materials: correspondence, scripts, costume and set designs, posters, photographs, scrapbooks, and newspaper clippings. The Library of Congress received its collection over the years 1939-1946. The collection was organized and described at George Mason University.
The FTP was considered a good candidate for this project because of the variety of materials, the interest to scholars, and the likelihood of minimal copyright or use permissions issues.
The collection's curator, Walter Zvonchenko, worked with assigned staff from the Music Division and the Library's Information Technology Services to set up and perform the input processing. Staff from the Library's National Digital Library Program office assisted in areas related to retrieval and access, as well as standards and quality assurance for input processing.
The Library, working with an advisory group of interested scholars, selected three plays for the project: William Shakespeare's Macbeth, Christopher Marlowe's The Tragical History of Doctor Faustus, and the Living Newspaper entitled Power. The plays were selected on the basis of variety, known interest to scholars and the public, and expected lack of copyright issues. All three plays have a wide range of materials in the collection, with an estimated total of 2,000 digital images. The selected materials were stored in numbered containers in the Library. A few of the items, such as over-size posters, were considered fragile and required the care and assistance of staff from the Library's Preservation Office.
4. The System as Envisioned
The system envisioned by the planners called for materials from the Federal Theatre Project collection to be scanned on the PRO/3000 scanner in the Adams Building. The scanned image file would be transmitted via the Library's TCP/IP network to an RS/6000 server in the Library's Madison Building computer room. Software on that server would create derivative images (i.e., those of lesser resolution for display and thumbnail use). All images would be loaded into the DB2 and VisualInfo databases. Software links would be created allowing a user to query an electronic version of the Finding Aid (or some other index) and gain access to individual images or groups of images. The collection would be made available on the Internet to users with standard web browser software. A subset of the total collection would be made available to Case Western Reserve University for incorporation into a special "Scholar's Workstation" for the use of professors in the development of coursework.
5.1 Spring/Summer 1995
The Library hosted a series of meetings with various staff from IBM and Case Western Reserve University (CWRU) to discuss project objectives, issues, schedules, and expectations. During these meetings the Library devoted much time to describing the technical environment at LC and its digital library activities underway. The Library vigorously stressed its policy of adherence to standards and its interest in migrating UNIX-based file collections to a commercially available, scalable platform.
5.2 Fall 1995
The Library constructed a dedicated scanning facility to specifications provided by staff from the Watson Laboratory of IBM. The scanning facility was designed to house the IBM PRO/3000 high-resolution scanner, a prominent item in the inventory of hardware donated by IBM to the Library of Congress. IBM installed the scanner and trained Library staff in the operation and maintenance of the scanner and the scanner control software (PISA).
5.3 Winter/Spring 1996
The Library began scanning operations with Music Division staff. The Library found that scanner features could enhance environmentally damaged photographic negatives, and that the scanner provided versatility over a wide range of targets. IBM scanner support staff responded to questions and suggestions from Library staff, and made some engineering changes to improve specific aspects of the scanning process. To manage the images being scanned, Library staff developed a PC-based database to track the images and collect information about them. Because the servers were not available, Library staff stored scanned images temporarily on RS/6000 computers in the Madison Building computer room.
Library staff oriented IBM staff to issues of collection control and image naming conventions and standards being used, or under development, at the Library. Library staff were in the process of developing new naming standards for multiple aspects of digital collections, and used the Federal Theatre Project as one of the test areas for elimination of digital naming schemes based on file names.
5.4 Summer 1996
The Library began to develop the requirements for a WWW-based interface consistent with other National Digital Library projects. Library staff began working on a WWW "home page" and on a digital version of the collection's Finding Aid in the standard SGML language. Library staff and IBM began exploring requirements for image storage, archiving, and retrieval. IBM began working on custom programs to process and load the images from the temporary storage areas, and to load the data from the PC-based database management system developed by the Library. IBM began to configure two RS/6000 machines as an AIX Library Server and an AIX Object Server. IBM began to explore options for meeting the Library's requirements for a WWW-based retrieval interface.
5.5 Fall 1996
IBM Watson Lab provided software for processing the scanned images, and specifications for a process to create CD-ROMs for in-process image storage. The CD-ROMs could be used both to load the servers and to store the archival versions of the images. This input workflow was refined over several months.
6. System As Built
In keeping with the Library's network strategy, the communications protocol used in the configuration was TCP/IP, with one exception: a private LAN isolated for high-resolution scanning. IBM's RISC System/6000, running the AIX operating system, was selected as the server platform for this solution. Internet users are viewed as the primary end users of the designed solution, and the system is geared toward their access of images via generic web browsers.
The infrastructure of the implemented system was IBM's Digital Library (DL), which is neither a product nor a single technology, but a group of technologies that together allow for the storage, management, and retrieval of digitized images over networks. Because DL is modular, comprising a variety of existing IBM software and hardware products, customized development, consulting, and implementation services from various IBM research, development, and service organizations were applied to provide a solution based on the Library's requirements.
The system, as implemented, consists of three subsystems and several access workstations provided for various functions within the project. The three subsystems are:
Precision Scanning Subsystem,
Image Database Subsystem, and
Internet Gateway.
6.2 High Resolution Scanning
This subsystem consists of an IBM-Research-developed Pro/3000 Scanner and an IBM PS/2 workstation, equipped with the IBM-Research-developed PISA95 scanning application. All Precision Scanning Subsystem software runs on the OS/2 operating system. The PISA95 application supports a two-display configuration that allows the user to control the program on the system display, while presenting high-quality images on an image display.
Within the Precision Scanning Subsystem, the IBM PS/2 workstation and the Pro/3000 Scanner communicate through a private Local Area Network that uses a non-TCP/IP Token Ring communications protocol (IBM LAN Requester); this private LAN is physically isolated from the Library's network.
6.3 Image Database Subsystem
The Image Database Subsystem (IDBS) provides the image management and storage infrastructure of the system. This subsystem is implemented on two IBM RISC System/6000 workstations running the AIX operating system. This subsystem consists of two IBM Digital Library solution components:
VisualInfo Library Server, and
VisualInfo Object Server.
6.4 VisualInfo Library Server
The Library Server utilizes IBM relational database technology to manage the system's contents and provides data integrity by performing the following functions:
Manage library data
Maintain index information
Control access to images stored in the Object Server
Specifically, the Library Server receives indexing data from the Precision Scanning Subsystem (through a Loader Program) and directs requests to the appropriate server to perform the following functions:
Store and update images stored in the Object Server, and
Update the indices and descriptive information stored in the Library catalog.
In addition, the Library Server can perform queries that locate and retrieve images from the Object Server.
6.5 VisualInfo Object Server
The VI Object Server works in cooperation with the Library Server to maintain the images stored in the Image Database Subsystem. The Object Server receives image requests from the Library Server and executes those requests. As the system was implemented, only direct access storage devices (DASD) were used.
6.6 Internet Gateway
A RISC System/6000 was used as a gateway between users and the Digital Library's VisualInfo components in the Image Database Subsystem. The Internet Gateway performs the following functions:
Receives an HTTP request for a document (or object),
Translates the HTTP request into VisualInfo-compatible request(s)
Extracts the desired objects from the VisualInfo Library Server and Object Server
Adds any HTML information required
Returns the document (or object), with appropriate HTTP headers, to the requester.
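The gateway's request-translation role can be sketched as follows. The actual cliette interface to VisualInfo is not publicly documented, so the URL parameters, item identifiers, and HTML wrapper below are purely hypothetical; the sketch (in modern Python) only illustrates the translate-extract-wrap sequence listed above.

```python
from urllib.parse import urlparse, parse_qs

def translate_request(url):
    """Turn an incoming HTTP request URL into an (item, derivative) pair
    that a VisualInfo-style lookup could service. Parameter names are
    illustrative, not the system's actual interface."""
    query = parse_qs(urlparse(url).query)
    item = query["item"][0]                    # e.g. an image identifier
    deriv = query.get("deriv", ["screen"])[0]  # default to screen resolution
    return item, deriv

def wrap_html(item, deriv):
    """Add the HTML framing the gateway supplies around a retrieved object."""
    return (f"<html><body><img src=\"/images/{item}.{deriv}\" "
            f"alt=\"FTP item {item}\"></body></html>")

# A hypothetical request for a thumbnail of one item:
item, deriv = translate_request("http://lcweb/gateway?item=ftp0012&deriv=thumb")
page = wrap_html(item, deriv)
```

In a real gateway these two steps would bracket the call into the Library Server and Object Server; here that middle step is elided.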
The Internet Gateway used the AIX cliette code, supplied by IBM Research (Almaden) to communicate with the VisualInfo components in the IDBS; additional code was written to adapt this code for usage in the Library's system.
A detailed view of the individual system components is outlined in the following section.
6.7 CREATE AND CAPTURE
When the project began, the Library had two basic sources of input: the physical containers with FTP materials and the Finding Aid in book format. There were no bibliographic catalog records, and no records or descriptive information for each item in the containers. The curator selected items related to the three chosen plays. Generally, these were grouped by item type and stored in box containers by type.
Items in the collection were brought to a special room with controlled lighting and digitized using the IBM Pro/3000 scanner. ITS and Music Division staff set up a scanning process, with a scan log maintained in a Paradox database. Wherever possible, data elements and naming standards being established by the NDL program were incorporated.
6.8 Precision Scanning Subsystem (PSS)
The Pro/3000 scanner produced archival-quality, high-resolution color images from negatives and from reflective media in the collection. It is based on an IBM-proprietary charge-coupled device (CCD) imaging sensor chip that provides an exceptional 12-bit dynamic range and superior noise performance. The CCD sensor moves across the image plane internally, eliminating the need to move the camera or the media being scanned. The Pro/3000's digital camera is supported by a motorized column, and a bellows is used to focus the camera. Because of the limited height of the column, the largest original that can be scanned is 45 by 60 cm.
The PISA application works with the Pro/3000 to capture monochrome or color images. Its user interface displays both the proper digital camera height and the bellows position for proper focus for a number of sizes of originals, helping the scanner operator position and focus the digital camera more quickly when the size of the original changes. PISA also includes a utility that records the lighting pattern produced by the illumination. In the process of creating a scanned image, PISA uses this information to correct the image so that it appears as it would if the illumination were spatially uniform. PISA also includes a utility to analyze the scan of a color-calibrated test chart and determine the color characteristics of the scanner on the basis of that analysis. In the process of producing a scanned image, PISA uses this information to correct the colors of the scanned image so that they will appear correct.
For the purposes of this project, the Library set up a simple input database (using Paradox), to capture information on each item as it was scanned. This database served to both track the items and to collect any information about the image that the scanner operator noted from the item itself, e.g., a notation on the back of a photograph or a brief physical description. This database evolved into a mechanism for providing the metadata associated with the images.
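The shape of such an input database can be sketched with SQLite as a modern stand-in for Paradox. The column names below are illustrative only; the actual data elements followed the NDL naming standards then under development.

```python
import sqlite3

# Minimal stand-in for the project's Paradox scan log; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE scan_log (
        image_id   TEXT PRIMARY KEY,  -- digital image identifier
        container  TEXT,              -- physical box number
        item_note  TEXT,              -- e.g. notation on back of a photograph
        scanned_on TEXT               -- date of capture
    )""")

# Record one scanned item, then look it up for tracking purposes.
conn.execute(
    "INSERT INTO scan_log VALUES (?, ?, ?, ?)",
    ("ftp0001", "Box 23", "notation on verso of photograph", "1996-02-14"),
)
row = conn.execute(
    "SELECT container, item_note FROM scan_log WHERE image_id = ?",
    ("ftp0001",),
).fetchone()
```

A single table like this serves both roles described above: tracking where each item is, and carrying the annotation text that later becomes metadata.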
6.10 Loader Program
The role of the loader is to automatically enter text, and associated images, into the system repository. The loader consists of two major components:
Loader Image Processor
Loader Execution Manager
The Loader Image Processor contains the COMIT95 component that generates the derivative images. COMIT95 is a batch processing program that, for each TIFF image produced, performs the following functions:
Reads the scanned (TIFF) image file,
Generates its derivative images, and
Stores the derivative images on an interim basis, in preparation for loading into the repository.
The derivative images produced by COMIT95 are specified by parameters; the following four derivative images are computed for each scan:
Derivative image one - A scan-resolution image with a maximum resolution of 3072 by 4000 pixels, compressed by (extended, lossless) JPEG, but stored in the TIFF file format
Derivative image two - A scan-resolution image with a maximum resolution of 3072 by 4000 pixels, in the (baseline, lossy) JPEG format
Derivative image three - A screen-resolution image with a maximum resolution of 1000 by 750 pixels in the (baseline, lossy) JPEG format
Derivative image four - A thumbnail image with a maximum resolution of 150 by 150 pixels, in the GIF format.
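The "maximum resolution" figures above act as bounding boxes: each derivative is scaled down, preserving aspect ratio, until it fits. COMIT95 itself is not publicly documented, so the following sketch only illustrates that arithmetic, with the parameter table taken from the list above.

```python
# Bounding boxes for the four derivatives, (max width, max height) in pixels.
DERIVATIVES = {
    "scan-lossless": (3072, 4000),  # extended, lossless JPEG in a TIFF wrapper
    "scan-lossy":    (3072, 4000),  # baseline, lossy JPEG
    "screen":        (1000, 750),   # baseline, lossy JPEG
    "thumbnail":     (150, 150),    # GIF
}

def fit_within(width, height, max_w, max_h):
    """Scale (width, height) down, preserving aspect ratio, so it fits
    within (max_w, max_h); never scale up."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# A hypothetical 6144 x 8000 pixel scan reduced for screen display:
screen_size = fit_within(6144, 8000, *DERIVATIVES["screen"])
```

The same function applied with the thumbnail bounds would yield a 115 x 150 pixel image for that scan.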
The derivative images proved useful for various purposes; some were intended to provide faster access while others were intended to provide greater detail. The derivative images were compressed to enhance system performance, reduce storage, and to facilitate their retrieval over the Internet.
The Loader Execution Manager's primary function is to insert metadata into the database and to load images into the Digital Library repository. The architecture of IBM's Digital Library allows the metadata to be separate from the data; thus, the metadata is stored in a relational database on the Library Server, while the images are stored in the Object Server in a filesystem managed by a second relational database application. At the Library, images are scanned and annotated, and the annotation information is stored in a single-table (Paradox) database.
6.11 STORAGE MANAGEMENT
The core of the IBM Digital Library storage and management infrastructure is the Library Server, which manages catalog information and provides pointers to the objects held in the collection. The Object Server contains the actual digitized content files of a digital library. This technology is implemented by the VisualInfo application, which sends the images to the BLOB (Binary Large Object) store and updates its internal DB2 tables with the metadata.
Although IBM Digital Library provides for hierarchical storage management (HSM), this technology was not implemented for this project for several reasons.
Early on in the project there were discussions and issues raised about which images should be kept on-line on DASD and if optical technology was a viable option for storing images to be accessed over the Internet, where performance continues to be an issue.
Subsequent to the above discussions and pursuant to the Library's objective of digitizing five million objects by the year 2000, along with the decreasing price of on-line storage, the Library made plans to procure large amounts of on-line DASD. With the advent of less expensive DASD, the hierarchical storage system initially envisioned for the Library's digital library system was dramatically flattened.
The minimal number of objects selected for this pilot project (approximately 2,000) did not provide a scenario that inherently dictated experimentation with HSM, while other issues were more pressing. Independent of this project, the Library is implementing ADSM on a wide-scale basis, including on-line DASD, tape, and optical technologies. It is envisioned that ADSM will be applied to this digitization effort in the future.
6.12 Finding Aid
Since about 1950, "registers" have been the basic finding aids prepared to facilitate use of manuscript collections at the Library. Finding aids are being reproduced in the Standard Generalized Markup Language (SGML) format and made available via World Wide Web (WWW) pages for access to the Library's digitized collections. Because SGML requires that the user have an SGML browser, e.g., Panorama, a finding aid may also be made available in an indexed text format, using HTML.
The FTP "register" finding aid, prepared at George Mason University and published in a book format, has been the primary access tool to the collection. In outline format, it summarizes the items in the collection, organized by type of material, and then by physical container. This is being reproduced in SGML format by the Library Music Division staff, and is expected to be the key access tool to the digital FTP collection. The Library may also choose to make this Finding Aid available in HTML for simpler access.
6.13 Full-Text Search
The Library currently provides full-text search on documents in HTML format, via the Inquery search engine. The FTP collection does not have extensive text documents or descriptive database records appropriate for full-text search. Both the Finding Aid and the database records could be indexed for searching by a user, but this approach is of minimal value because the text lacks unique terms.
6.14 Database Access
The Library has begun to pursue options to implement database management systems for storage and retrieval of text documents, data elements, and digitized images. The FTP data and images were loaded into a VisualInfo DB2 database on an AIX platform. The data records provide the access to the images.
6.15 Bibliographic Catalog Links
The Library expects eventually to link the images to a collection-level bibliographic record, via the online finding aid.
6.16 DISTRIBUTION (World Wide Web Access)
The Library provides access to its digitized information via the Library's World Wide Web (WWW) home page. In its simplest form, the WWW pages, using the standard Hypertext Markup Language (HTML) text format, provide text and inline graphics display. Links can be set up to other WWW pages, or to applications (such as databases) via programs.
Each digitized collection, such as FTP, has a WWW home page linked from the Library's home page. The Music Division staff has produced an FTP home page, with introductory material, as well as a new text document that provides in-depth material about each of the plays. The FTP home page can link either to other HTML pages or to any of several other options, such as the Finding Aid, full-text search, or database access.
The Library is using and expanding its existing World Wide Web access tools for browsing, searching, and displaying data and images from its collections. The Library expects to integrate the FTP project with the storage, retrieval, and display options that it uses for other digitized data, and is using the project to review and evaluate options.
7. LESSONS LEARNED
7.1 Input Issues
7.1.1 Input sources must be integrated into the processing
IBM's VisualInfo component of Digital Library had limited image-loading functionality, and it was only loosely integrated into the system. Loading an item into the collection proved to be a complex transaction, involving various components on two separate workstations with two different operating systems. For this solution to serve as a viable infrastructure for future digital library projects within the Library, the loader program needs to be more tightly integrated into the system and the workflow more clearly defined.
7.1.2 Multiple Input Sources
Most collections at the Library are digitized by contractors, specializing in various types of originals. Recently, in-house scanning experiments have been conducted using various scanners which produce multiple file format types. In order for the implemented system to be a comprehensive infrastructure for future digital library projects, the requirement to allow for input from multiple sources must be addressed.
7.1.3 In-Process Image Management
A more comprehensive guide to the management of in-process images is required. The process to scan images and process them before storage in the final database requires multiple steps. Some of the steps are listed below:
Entering of annotation data
Generation of metadata
Generation of derivative images
Quality review of images
Pressing of CD ROMs for image loading
The Library staff found it difficult to track images throughout this flow and to know precisely which images were in any step in the process at any given time. Streamlining of the process, consolidation of processes on individual pieces of hardware, and automation of the workflow could assist with making in-process image management more manageable.
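One form such automation could take is a simple status tracker that answers "which images are at which step right now". This is a hypothetical sketch, not a tool the project built; the step names are taken from the workflow list above, and the identifiers are illustrative.

```python
# Processing steps, in order, from the in-process workflow described above.
STEPS = [
    "annotated",    # annotation data entered
    "metadata",     # metadata generated
    "derivatives",  # derivative images generated
    "reviewed",     # quality review of images passed
    "pressed",      # written to CD-ROM for image loading
]

class InProcessTracker:
    """Track each image's position in the input workflow."""

    def __init__(self):
        self.status = {}  # image_id -> index into STEPS

    def advance(self, image_id):
        """Move an image to its next step (new images enter at step 0)."""
        self.status[image_id] = self.status.get(image_id, -1) + 1

    def at_step(self, step):
        """Return all images currently sitting at the named step."""
        idx = STEPS.index(step)
        return sorted(i for i, s in self.status.items() if s == idx)

tracker = InProcessTracker()
tracker.advance("ftp0001")  # ftp0001 -> annotated
tracker.advance("ftp0001")  # ftp0001 -> metadata
tracker.advance("ftp0002")  # ftp0002 -> annotated
```

Even a tracker this small would have answered the "which images are where" question the staff found difficult; the harder part, as noted above, is consolidating the steps themselves.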
7.1.4 Scanning of Oversized Documents
The IBM T. J. Watson Lab gained additional expertise in the technology of scanning and the Pro/3000 Scanner by watching the scanner operators at work and by listening to the requirements of the Library. Part of the FTP collection consisted of large posters or stage drawings, approximately 30" by 45". The posters had deteriorated and were extremely fragile. The Conservation Department had re-assembled them in Mylar folders and hoped that the Pro/3000 Scanner would be able to scan them through the Mylar. Due to the mechanical limitation on the table top of 36", the poster could not be scanned in landscape mode as preferred. Instead, IBM recommended scanning the oversized documents in quadrants. Once this was done and digitized images produced, IBM took the images back to Research for an experiment in stitching the images back together. Also, realizing that the best way to scan was in two passes (24" by 30" each), IBM Research designed a set of lamp arms that kept the lights in their present position, while extending the table top by 36" (18" on each side).
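The geometry of reassembling quadrant scans can be sketched with the Pillow imaging library. IBM Research's actual stitching experiment also had to align and blend the seams between scans; this sketch, with invented tile sizes, shows only the simple paste of four registered quadrants.

```python
from PIL import Image  # Pillow imaging library

def stitch_quadrants(tl, tr, bl, br):
    """Paste four equally sized quadrant scans into one full canvas,
    in reading order: top-left, top-right, bottom-left, bottom-right.
    Assumes the quadrants are already perfectly registered."""
    w, h = tl.size
    full = Image.new("RGB", (2 * w, 2 * h))
    full.paste(tl, (0, 0))
    full.paste(tr, (w, 0))
    full.paste(bl, (0, h))
    full.paste(br, (w, h))
    return full

# Four stand-in quadrant "scans" (solid colors, purely for illustration):
quads = [Image.new("RGB", (300, 450), c)
         for c in ("red", "green", "blue", "white")]
poster = stitch_quadrants(*quads)
```

Real scans overlap slightly at the seams, so production stitching must crop or blend the overlap rather than paste edge to edge.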
7.1.5 Book Easel Upgrade
The second work item was to upgrade the easel with a set of mechanical slides to provide approximately four inches front to back motion on the table top. This design change was incorporated into the scanner after observing how the scanner operators were struggling to scan smaller books with the camera up at its maximum position. By allowing the easel to move front to back, smaller books, like the ones in the Federal Theatre project, could be scanned with the camera at a lower, more comfortable height. Scanning at this lower optical position produced higher resolution and scanning across the valley of the bindings was possible, if needed, to capture notes close to the spine.
7.1.6 Photographic Negatives
Many of the negatives to be captured had deteriorated, with folds (channels). Often, the film and the cellophane cover were corroded, and a viewer would see warped lines. These imperfections did not inhibit the scanning, and usable images emerged from the process. The scanner has an invert function for negatives, and the resulting image is presented to researchers as a positive.
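The invert function itself is simple arithmetic: each sample is reflected about the top of its range, so dense areas of the negative become light in the positive. The Pro/3000 captures 12-bit samples; this sketch uses 8-bit values for simplicity.

```python
def invert_8bit(pixels):
    """Invert a row of 8-bit grayscale samples (negative -> positive).
    Each sample v becomes 255 - v."""
    return [255 - p for p in pixels]

row = [0, 64, 128, 255]      # dense negative shadow ... clear film base
positive = invert_8bit(row)  # [255, 191, 127, 0]
```

For 12-bit samples the constant would be 4095 instead of 255; for color negatives, the same reflection is applied per channel, usually with an additional correction for the orange film mask.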
7.2 Storage Issues
7.2.1 Workflow Process
A complete workflow process should be set up from the beginning of a project, including a method to track and load images and data, and a quality review procedure as an integrated part of the workflow. Appropriate software and hardware must be established and tested. The process must be documented well enough that staff can easily be trained.
7.2.2 Flexible Image Processing Parameters
A system must provide flexibility in options and parameters for inputs from different sources, and with different physical characteristics, e.g., text vs. graphics images. Standards should be developed for keeping data about the source and processing, e.g., scanner type.
7.2.3 Expansion of Object Type
The Library has begun exploring the expansion of the types of objects kept in "image" databases or repositories. For example, it may be that images of text pages are treated very differently than pictorial images. In addition, it may be that text documents (complete with format codes and characteristics) may be stored as objects like images. A system must allow for defining these object types.
7.2.4 Storage Design and Management Guidance
A system needs to provide guidance on different database fields and storage rules that may be required for different types and sizes of images. Some of the issues include:
Which images need to be available for on-line viewing?
How does the on-line presentation of the image affect the database information and storage, e.g., a large group of text images, images that constitute "pages" of a large document, sequential images in a group, images that constitute a larger image (such as pieces of a map or poster).
What is the appropriate size and type image for archival and preservation purposes?
What database management tools are needed to manage images with different characteristics?
7.3.1 Range of Descriptive Data
Collections vary in the amount of data available per item, or per group of items, and in the amount of data that will be entered initially. The system must allow for flexibility in both initial entry and later enhancement of data for each item and for groups of items. The Library is building a standard base of minimal data elements for items and aggregates, and any system implemented must be able to incorporate these as well as elements specific to a collection.
7.3.2 Links to Existing Data Sources
Some collections have MARC records or other descriptive data sources, and the system must be able to link to these without duplicating data entry. The Library has been working with the Corporation for National Research Initiatives (CNRI) and other organizations on global and local standards for digital library identifiers, or handles, and any digital library project must be able to fit into these standards.
7.3.3 Intellectual Property Rights
Copyright and related terms and conditions need to be addressed as part of the basic data collected, possibly as low as the individual document level. The FTP collection has copyright nuances that the Library's Copyright Office is addressing through related projects to protect rights; in the meantime, digital library projects like FTP need to start with base-level assumptions that guide the projects at the most conservative level.
7.3.4 Links to Cooperative Standards
The system must accommodate data from ongoing standards efforts, such as inter-institution committees on access methods like Z39.50.
Interoperability with existing or future database or repository products is required. The project's data and database must be flexible so that it can be loaded and/or converted to one of several database or repository products. In addition, the database or repository must be accessible to all retrieval methods in use. For example, the Library expects to use WWW access with standard retrieval tools, e.g., full-text search, browsing, and structured retrieval, across multiple database products such as Oracle, other SQL databases and planned repositories from CNRI and others.
7.4 Retrieval Issues
7.4.1 Links to Existing Retrieval Methods
The Library has existing retrieval and access methods - consisting of both engines and programming tools, as well as the "look and feel" of what is presented to the user. Any project must be able to fit into these methods as they exist today, and as they change and evolve. It is important to establish a team responsible for integrating such projects into the NDLP project process.
7.4.2 Data for Either Search or Browse
It is important that NDLP projects can proceed with a range of data for each digitized image: from very minimal metadata to full MARC item-level record cataloging. It must be possible to browse a digital collection in much the same way as browsing a physical box of the same collection, simply using the equivalent of "next" and "previous".
7.4.3 Flexibility and Expansion
The data should be planned for obvious links to multiple and changing retrieval options.
8. System Strengths
The IBM Digital Library product provides the Library with a commercially-available, "shrink-wrapped" data and image management product based on standard DBMS (DB2 or Oracle), and on the Library's standard hardware platform (AIX).
The IBM expertise and support from the T. J. Watson Research Lab related to the IBM Pro/3000 Scanner has been excellent; they have been very helpful in assisting the Library in making the best use of the unique characteristics of the scanner. The Library expects to expand use of the scanner for other appropriate projects with fragile and special Library materials.
9. Model for NDL Projects
As the FTP digital library project evolved, it was envisioned that the experience and documentation gained and produced could be used as a model for other digital library projects within the Library. The project has indeed joined other Library of Congress developmental efforts in influencing planning for work in the Geography and Map and Music Divisions, and elsewhere.
10. FTP Components with Other NDL Projects
As much as possible, the Library plans to use components of the system for other projects. For example, the Library expects to use the IBM Pro/3000 scanner for the Treasures of the Library exhibit and in other situations where this type of high resolution scanner is required. If the input-loading process can be enhanced/streamlined, and more closely integrated into the overall system, the Library would also like to incorporate this into other NDLP projects.