Part 2 out of 4



printed volume. Given the current budget crunch in educational systems
and the corresponding constraints on librarians in smaller institutions
who wish to add these volumes to their collections, producing the
documents on CD-ROM would likely open a greatly expanded audience for the
papers. TWOHIG stressed, however, that development of the Founding
Fathers CD-ROM is still in its infancy. Serious software problems remain
to be resolved before the material can be put into readable form.

Funding from the Packard Foundation resulted in a major push to
transcribe the 75,000 or so documents of the Washington papers remaining
to be transcribed onto computer disks. Slides illustrated several of the
problems encountered, for example, the present inability of CD-ROM to
indicate the cross-outs (deleted material) in eighteenth century
documents. TWOHIG next described documents from various periods in the
eighteenth century that have been transcribed in chronological order and
delivered to the Packard offices in California, where they are converted
to the CD-ROM, a process expected to take about five years to complete
(that is, reckoning from David Packard's suggestion of several years ago
until about July 1994).  TWOHIG found an encouraging
indication of the project's benefits in the ongoing use made by scholars
of the search functions of the CD-ROM, particularly in reducing the time
spent in manually turning the pages of the Washington papers.

TWOHIG next furnished details concerning the accuracy of transcriptions.
For instance, because thousands of documents are being put onto the
CD-ROM, each one cannot at present be verified against the original
manuscript several times, as is done for documents that appear in the
published edition.  However, the WPP CD-ROM editor gives the
transcriptions a cursory check for obvious typos, misspellings of proper
names, and other errors.  Eventually, all documents that appear
in the electronic version will be checked by project editors. Although
this process has met with opposition from some of the editors on the
grounds that imperfect work may leave their offices, the advantages in
making this material available as a research tool outweigh fears about the
misspelling of proper names and other relatively minor editorial matters.

Completion of all five Founding Fathers projects (i.e., retrievability
and searchability of all of the documents by proper names, alternate
spellings, or varieties of subjects) will provide one of the richest
sources of this size for the history of the United States in the latter
part of the eighteenth century. Further, publication on CD-ROM will
allow editors to include even minutiae, such as laundry lists, not
included in the printed volumes.

It seems possible that the extensive annotation provided in the printed
volumes eventually will be added to the CD-ROM edition, pending
negotiations with the publishers of the papers. At the moment, the
Founding Fathers CD-ROM is accessible only on the IBYCUS, a computer
developed out of the Thesaurus Linguae Graecae project and designed for
the use of classical scholars. There are perhaps 400 IBYCUS computers in
the country, most of which are in university classics departments.
Ultimately, it is anticipated that the CD-ROM edition of the Founding
Fathers documents will run on any IBM-compatible or Macintosh computer
with a CD-ROM drive. Numerous changes in the software will also occur
before the project is completed. (Editor's note: an IBYCUS was
unavailable to demonstrate the CD-ROM.)

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Several additional features of WPP clarified *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Discussion following TWOHIG's presentation served to clarify several
additional features, including (1) that the project's primary
intellectual product consists in the electronic transcription of the
material; (2) that the text transmitted to the CD-ROM people is not
marked up; (3) that cataloging and subject-indexing of the material
remain to be worked out (though at this point material can be retrieved
by name); and (4) that because all the searching is done in the hardware,
the IBYCUS is designed to read a CD-ROM which contains only sequential
text files. Technically, it then becomes very easy to read the material
off and put it on another device.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LEBRON * Overview of the history of the joint project between AAAS and
OCLC * Several practices the on-line environment shares with traditional
publishing on hard copy * Several technical and behavioral barriers to
electronic publishing * How AAAS and OCLC arrived at the subject of
clinical trials * Advantages of the electronic format and other features
of OJCCT * An illustrated tour of the journal *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Maria LEBRON, managing editor, The Online Journal of Current Clinical
Trials (OJCCT), presented an illustrated overview of the history of the
joint project between the American Association for the Advancement of
Science (AAAS) and the Online Computer Library Center, Inc. (OCLC). The
joint venture between AAAS and OCLC owes its beginning to a
reorganization launched by the new chief executive officer at OCLC about
three years ago and combines the strengths of these two disparate
organizations. In short, OJCCT represents the process of scholarly
publishing on line.

LEBRON next discussed several practices the on-line environment shares
with traditional publishing on hard copy--for example, peer review of
manuscripts--that are highly important in the academic world. LEBRON
noted in particular the implications of citation counts for tenure
committees and grants committees. In the traditional hard-copy
environment, citation counts are readily demonstrable, whereas the
on-line environment represents an ethereal medium to most academics.

LEBRON remarked several technical and behavioral barriers to electronic
publishing, for instance, the problems in transmission created by special
characters or by complex graphics and halftones. In addition, she noted
economic limitations such as the storage costs of maintaining back issues
and market or audience education.

Manuscripts cannot be uploaded to OJCCT, LEBRON explained, because it is
not a bulletin board or E-mail, forms of electronic transmission of
information that have created an ambience clouding people's understanding
of what the journal is attempting to do. OJCCT, which publishes
peer-reviewed medical articles dealing with the subject of clinical
trials, includes text, tabular material, and graphics, although at this
time it can transmit only line illustrations.

Next, LEBRON described how AAAS and OCLC arrived at the subject of
clinical trials:  1) it is a highly statistical discipline; 2) it does
not require halftones but can satisfy the needs of its audience with line
illustrations and graphic material; and 3) it needs speedy dissemination
of high-quality research results.  Clinical trials are
research activities that involve the administration of a test treatment
to some experimental unit in order to test its usefulness before it is
made available to the general population. LEBRON proceeded to give
additional information on OJCCT concerning its editor-in-chief, editorial
board, editorial content, and the types of articles it publishes
(including peer-reviewed research reports and reviews), as well as
features shared by other traditional hard-copy journals.

Among the advantages of the electronic format are faster dissemination of
information, including raw data, and the absence of space constraints
because pages do not exist. (This latter fact creates an interesting
situation when it comes to citations.)  Nor are there issues in the
traditional sense.  AAAS's
capacity to download materials directly from the journal to a
subscriber's printer, hard drive, or floppy disk helps ensure highly
accurate transcription. Other features of OJCCT include on-screen alerts
that allow linkage of subsequently published documents to the original
documents; on-line searching by subject, author, title, etc.; indexing of
every single word that appears in an article; viewing access to an
article by component (abstract, full text, or graphs); numbered
paragraphs to replace page counts; publication every thirty days in
Science of an index of all articles published in the journal;
typeset-quality screens; and Hypertext links that enable subscribers to
bring up Medline abstracts directly without leaving the journal.

After detailing the two primary ways to gain access to the journal,
through the OCLC network and Compuserv if one desires graphics or through
the Internet if just an ASCII file is desired, LEBRON illustrated the
speedy editorial process and the coding of the document using SGML tags
after it has been accepted for publication. She also gave an illustrated
tour of the journal, its search-and-retrieval capabilities in particular,
but also including problems associated with scanning in illustrations,
and the importance of on-screen alerts to the medical profession re
retractions or corrections, or more frequently, editorials, letters to
the editors, or follow-up reports. She closed by inviting the audience
to join AAAS on 1 July, when OJCCT was scheduled to go on-line.
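
(Editor's note:  a rough illustration of the structural coding LEBRON
describes.  The Python sketch below wraps an accepted article's
components in SGML-style tags and numbers its paragraphs for citation;
the element names are invented for the example and do not reproduce
OJCCT's actual DTD.)

     # Hypothetical element names; OJCCT's real DTD is not shown in these
     # proceedings.
     def tag_article(title, authors, abstract, paragraphs):
         parts = ["<article>", f"  <title>{title}</title>"]
         parts += [f"  <author>{a}</author>" for a in authors]
         parts.append(f"  <abstract>{abstract}</abstract>")
         # Numbered paragraphs stand in for page numbers when citing.
         for n, p in enumerate(paragraphs, start=1):
             parts.append(f'  <para n="{n}">{p}</para>')
         parts.append("</article>")
         return "\n".join(parts)

     print(tag_article("A Sample Trial", ["A. Author"], "Summary.",
                       ["First paragraph.", "Second paragraph."]))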

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Additional features of OJCCT *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In the lengthy discussion that followed LEBRON's presentation, these
points emerged:

* The SGML text can be tailored as users wish.

* All these articles have a fairly simple document definition.

* Document-type definitions (DTDs) were developed and given to OJCCT
for coding.

* No articles will be removed from the journal. (Because there are
no back issues, there are no lost issues either. Once a subscriber
logs onto the journal he or she has access not only to the currently
published materials, but retrospectively to everything that has been
published in it. Thus the table of contents grows bigger. The date
of publication serves to distinguish between currently published
materials and older materials.)

* The pricing system for the journal resembles that for most medical
journals: for 1992, $95 for a year, plus telecommunications charges
(there are no connect time charges); for 1993, $110 for the
entire year for single users, though the journal can be put on a
local area network (LAN). However, only one person can access the
journal at a time. Site licenses may come in the future.

* AAAS is working closely with colleagues at OCLC to display
mathematical equations on screen.

* Without compromising any steps in the editorial process, the
technology has reduced the time lag between when a manuscript is
originally submitted and the time it is accepted; the review process
does not differ greatly from the standard six-to-eight weeks
employed by many of the hard-copy journals. The process still
depends on people.

* As far as a preservation copy is concerned, articles will be
maintained on the computer permanently and subscribers, as part of
their subscription, will receive a microfiche-quality archival copy
of everything published during that year; in addition, reprints can
be purchased in much the same way as in a hard-copy environment.
Hard copies are prepared but are not the primary medium for the
dissemination of the information.

* Because OJCCT is not yet on line, it is difficult to know how many
people would simply browse through the journal on the screen as
opposed to downloading the whole thing and printing it out; a mix of
both types of users likely will result.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PERSONIUS * Developments in technology over the past decade * The CLASS
Project * Advantages for technology and for the CLASS Project *
Developing a network application an underlying assumption of the project
* Details of the scanning process * Print-on-demand copies of books *
Future plans include development of a browsing tool *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Lynne PERSONIUS, assistant director, Cornell Information Technologies for
Scholarly Information Services, Cornell University, first commented on
the tremendous impact that developments in technology over the past ten
years--networking, in particular--have had on the way information is
handled, and how, in her own case, these developments have counterbalanced
Cornell's relative geographical isolation. Other significant technologies
include scanners, which are much more sophisticated than they were ten years
ago; mass storage and the dramatic savings that result from it in terms of
both space and money relative to twenty or thirty years ago; new and
improved printing technologies, which have greatly affected the distribution
of information; and, of course, digital technologies, whose applicability to
library preservation remains at issue.

Given that context, PERSONIUS described the College Library Access and
Storage System (CLASS) Project, a library preservation project,
primarily, and what has been accomplished. Directly funded by the
Commission on Preservation and Access and by the Xerox Corporation, which
has provided a significant amount of hardware, the CLASS Project has been
working with a development team at Xerox to develop a software
application tailored to library preservation requirements. Within
Cornell, participants in the project have been working jointly with both
library and information technologies. The focus of the project has been
on reformatting and saving books that are in brittle condition.
PERSONIUS showed Workshop participants a brittle book, and described how
such books were the result of developments in papermaking around the
beginning of the Industrial Revolution. The papermaking process was
changed so that a significant amount of acid was introduced into the
actual paper itself, which deteriorates as it sits on library shelves.

One of the advantages for technology and for the CLASS Project is that
the information in brittle books is mostly out of copyright and thus
offers an opportunity to work with material that requires library
preservation, and to create and work on an infrastructure to save the
material. Acknowledging the familiarity of those working in preservation
with this information, PERSONIUS noted that several things are being
done: the primary preservation technology used today is photocopying of
brittle material. Saving the intellectual content of the material is the
main goal. With microfilm copy, the intellectual content is preserved on
the assumption that in the future the image can be reformatted in any
other way that then exists.

An underlying assumption of the CLASS Project from the beginning was
that it would develop a network application. Project staff scan books
at a workstation located in the library, near the brittle material.
An image-server filing system is located at a distance from that
workstation, and a printer is located in another building. All of the
materials digitized and stored on the image-filing system are cataloged
in the on-line catalogue. In fact, a record for each of these electronic
books is stored in the RLIN database so that a record exists of what is
in the digital library through standard cataloguing procedures.  In the
future, researchers working from their own workstations in their offices,
or their networks, will have access--wherever they might be--through a
request server being built into the new digital library. A second
assumption is that the preferred means of finding the material will be by
looking through a catalogue. PERSONIUS described the scanning process,
which uses a prototype scanner being developed by Xerox and which scans a
very high resolution image at great speed.  Another significant feature,
because this is a preservation application, is that the pages of the
brittle books, which fall apart, are placed one at a time on the platen;
ordinarily a scanner would be used with some sort of document feeder, but
for this application that is not feasible.  Further, because CLASS is a
preservation application, a very careful quality-control check is
performed after the paper replacement is made.  An original book is
compared to the printed copy and verification is made, before proceeding,
that all of the image, all of the information, has been captured. Then,
a new library book is produced: The printed images are rebound by a
commercial binder and a new book is returned to the shelf.
Significantly, the books returned to the library shelves are beautiful
and useful replacements on acid-free paper that should last a long time,
in effect, the equivalent of preservation photocopies. Thus, the project
has a library of digital books. In essence, CLASS is scanning and
storing books as 600 dot-per-inch bit-mapped images, compressed using
Group 4 CCITT compression (CCITT being the French acronym of the
International Consultative Committee for Telegraph and Telephone).  They
are stored as
TIFF files on an optical filing system that is composed of a database
used for searching and locating the books and an optical jukebox that
stores 64 twelve-inch platters. A very-high-resolution printed copy of
these books at 600 dots per inch is created, using a Xerox DocuTech
printer to make the paper replacements on acid-free paper.
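
(Editor's note:  a minimal sketch of the storage format just described,
using the Pillow imaging library in Python and assuming it is built with
libtiff support.  The file names are placeholders; the CLASS Project's
actual Xerox software is not shown.)

     from PIL import Image  # Pillow imaging library

     # "page_scan.tif" stands in for a raw 600-dpi scan of one brittle page.
     scan = Image.open("page_scan.tif")

     # Convert to a 1-bit (bitonal) image and save it as a TIFF compressed
     # with CCITT Group 4, recording the 600 dot-per-inch resolution.
     bitonal = scan.convert("1")
     bitonal.save("page_0001.tif", format="TIFF",
                  compression="group4", dpi=(600, 600))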

PERSONIUS maintained that the CLASS Project presents an opportunity to
introduce people to books as digital images by using a paper medium.
Books are returned to the shelves while people are also given the ability
to print on demand--to make their own copies of books. (PERSONIUS
distributed copies of an engineering journal published by engineering
students at Cornell around 1900 as an example of what a print-on-demand
copy of material might be like. This very cheap copy would be available
to people to use for their own research purposes and would bridge the gap
between an electronic work and the paper that readers like to have.)
PERSONIUS then attempted to illustrate a very early prototype of
networked access to this digital library. Xerox Corporation has
developed a prototype of a view station that can send images across the
network to be viewed.

The particular library brought down for demonstration contained two
mathematics books. CLASS is developing and will spend the next year
developing an application that allows people at workstations to browse
the books. Thus, CLASS is developing a browsing tool, on the assumption
that users do not want to read an entire book from a workstation, but
would prefer to be able to look through and decide if they would like to
have a printed copy of it.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Re retrieval software * "Digital file copyright" * Scanning
rate during production * Autosegmentation * Criteria employed in
selecting books for scanning * Compression and decompression of images *
OCR not precluded *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

During the question-and-answer period that followed her presentation,
PERSONIUS made these additional points:

* Re retrieval software, Cornell is developing a Unix-based server
as well as clients for the server that support multiple platforms
(Macintosh, IBM and Sun workstations), in the hope that people from
any of those platforms will retrieve books; a further operating
assumption is that standard interfaces will be used as much as
possible, where standards can be put in place, because CLASS
considers this retrieval software a library application and would
like to be able to look at material not only at Cornell but at other
institutions.

* The phrase "digital file copyright by Cornell University" was
added at the advice of Cornell's legal staff with the caveat that it
probably would not hold up in court. Cornell does not want people
to copy its books and sell them but would like to keep them
available for use in a library environment for library purposes.

* In production the scanner can scan about 300 pages per hour,
capturing 600 dots per inch.

* The Xerox software has filters to scan halftone material and avoid
the moire patterns that occur when halftone material is scanned.
Xerox has been working on hardware and software that would enable
the scanner itself to recognize this situation and deal with it
appropriately--a kind of autosegmentation that would enable the
scanner to handle halftone material as well as text on a single page.

* The books subjected to the elaborate process described above were
selected because CLASS is a preservation project, with the first 500
books selected coming from Cornell's mathematics collection, because
they were still being heavily used and because, although they were
in need of preservation, the mathematics library and the mathematics
faculty were uncomfortable having them microfilmed. (They wanted a
printed copy.) Thus, these books became a logical choice for this
project. Other books were chosen by the project's selection committees
for experiments with the technology, as well as to meet a demand or need.

* Images will be decompressed before they are sent over the line; at
this time they are compressed and sent to the image filing system
and then sent to the printer as compressed images; they are returned
to the workstation as compressed 600-dpi images and the workstation
decompresses and scales them for display--an inefficient way to
access the material though it works quite well for printing and
other purposes.

* CLASS is also decompressing on Macintosh and IBM, a slow process
right now. Eventually, compression and decompression will take
place on an image conversion server. Trade-offs will be made, based
on future performance testing, concerning where the file is
compressed and what resolution image is sent.

* OCR has not been precluded; images are being stored that have been
scanned at a high resolution, which presumably would suit them well
to an OCR process. Because the material being scanned is about 100
years old and was printed with less-than-ideal technologies, very
early and preliminary tests have not produced good results. But the
project is capturing an image that is of sufficient resolution to be
subjected to OCR in the future. Moreover, the system architecture
and the system plan have a logical place to store an OCR image if it
has been captured. But that is not being done now.

******

SESSION III. DISTRIBUTION, NETWORKS, AND NETWORKING: OPTIONS FOR
DISSEMINATION

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ZICH * Issues pertaining to CD-ROMs * Options for publishing in CD-ROM *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Robert ZICH, special assistant to the associate librarian for special
projects, Library of Congress, and moderator of this session, first noted
the blessed but somewhat awkward circumstance of having four very
distinguished people representing networks and networking or at least
leaning in that direction, while lacking anyone to speak from the
strongest possible background in CD-ROMs. ZICH expressed the hope that
members of the audience would join the discussion. He stressed the
subtitle of this particular session, "Options for Dissemination," and,
concerning CD-ROMs, the importance of determining when it would be wise
to consider dissemination in CD-ROM versus networks. A shopping list of
issues pertaining to CD-ROMs included the grounds for choosing among
commercial publishers, in-house publication where possible, and nonprofit
or government publication.  A similar list for networks
included: determining when one should consider dissemination through a
network, identifying the mechanisms or entities that exist to place items
on networks, identifying the pool of existing networks, determining how a
producer would choose between networks, and identifying the elements of
a business arrangement in a network.

Options for publishing in CD-ROM: an outside publisher versus
self-publication. If an outside publisher is used, it can be nonprofit,
such as the Government Printing Office (GPO) or the National Technical
Information Service (NTIS), in the case of government. The pros and cons
associated with employing an outside publisher are obvious. Among the
pros, there is no trouble getting accepted. One pays the bill and, in
effect, goes one's way. Among the cons, when one pays an outside
publisher to perform the work, that publisher will perform the work it is
obliged to do, but perhaps without the production expertise and skill in
marketing and dissemination that some would seek. There is the body of
commercial publishers that do possess that kind of expertise in
distribution and marketing but that obviously are selective. In
self-publication, one exercises full control, but then one must handle
matters such as distribution and marketing. Such are some of the options
for publishing in the case of CD-ROM.

In the case of technical and design issues, which are also important,
there are many matters about which Workshop participants already knew a
good deal:  retrieval system requirements and costs, what to do about
images, the various capabilities and platforms, the trade-offs between
cost and performance, concerns about local-area networkability,
interoperability, etc.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LYNCH * Creating networked information is different from using networks
as an access or dissemination vehicle * Networked multimedia on a large
scale does not yet work * Typical CD-ROM publication model a two-edged
sword * Publishing information on a CD-ROM in the present world of
immature standards * Contrast between CD-ROM and network pricing *
Examples demonstrated earlier in the day as a set of insular information
gems * Paramount need to link databases * Layering to become increasingly
necessary * Project NEEDS and the issues of information reuse and active
versus passive use * X-Windows as a way of differentiating between
network access and networked information * Barriers to the distribution
of networked multimedia information * Need for good, real-time delivery
protocols * The question of presentation integrity in client-server
computing in the academic world * Recommendations for producing multimedia
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Clifford LYNCH, director, Library Automation, University of California,
opened his talk with the general observation that networked information
constituted a difficult and elusive topic because it is something just
starting to develop and not yet fully understood. LYNCH contended that
creating genuinely networked information was different from using
networks as an access or dissemination vehicle and was more sophisticated
and more subtle. He invited the members of the audience to extrapolate,
from what they heard about the preceding demonstration projects, to what
sort of a world of electronic information--scholarly, archival,
cultural, etc.--they wished to end up with ten or fifteen years from now.
LYNCH suggested that to extrapolate directly from these projects would
produce unpleasant results.

Putting the issue of CD-ROM in perspective before getting into
generalities on networked information, LYNCH observed that those engaged
in multimedia today who wish to ship a product, so to say, probably do
not have much choice except to use CD-ROM: networked multimedia on a
large scale basically does not yet work because the technology does not
exist. For example, anybody who has tried moving images around over the
Internet knows that this is an exciting touch-and-go process, a
fascinating and fertile area for experimentation, research, and
development, but not something that one can become deeply enthusiastic
about committing to production systems at this time.

This situation will change, LYNCH said. He differentiated CD-ROM from
the practices that have been followed up to now in distributing data on
CD-ROM. For LYNCH the problem with CD-ROM is not its portability or its
slowness but the two-edged sword of having the retrieval application and
the user interface inextricably bound up with the data, which is the
typical CD-ROM publication model. It is not a case of publishing data
but of distributing a typically stand-alone, typically closed system,
all--software, user interface, and data--on a little disk. Hence, all
the between-disk navigational issues as well as the impossibility in most
cases of integrating data on one disk with that on another. Most CD-ROM
retrieval software does not network very gracefully at present. However,
in the present world of immature standards and lack of understanding of
what network information is or what the ground rules are for creating or
using it, publishing information on a CD-ROM does add value in a very
real sense.

LYNCH drew a contrast between CD-ROM and network pricing and in doing so
highlighted something bizarre in information pricing. A large
institution such as the University of California has vendors who will
offer to sell information on CD-ROM for a price per year in four digits,
but who will quote a price in six digits for the same data (e.g., an
abstracting and indexing database) on magnetic tape, regardless of how
many people may use it concurrently.

What is packaged with the CD-ROM in one sense adds value--a complete
access system, not just raw, unrefined information--although it is not
generally perceived that way. This is because the access software,
although it adds value, is viewed by some people, particularly in the
university environment where there is a very heavy commitment to
networking, as being developed in the wrong direction.

Given that context, LYNCH described the examples demonstrated as a set of
insular information gems--Perseus, for example, offers nicely linked
information, but would be very difficult to integrate with other
databases, that is, to link together seamlessly with other source files
from other sources. It resembles an island, and in this respect is
similar to numerous stand-alone projects that are based on videodiscs,
that is, on the single-workstation concept.

As scholarship evolves in a network environment, the paramount need will
be to link databases. We must link personal databases to public
databases, to group databases, in fairly seamless ways--which is
extremely difficult in the environments under discussion with copies of
databases proliferating all over the place.

The notion of layering also struck LYNCH as lurking in several of the
projects demonstrated. Several databases in a sense constitute
information archives without a significant amount of navigation built in.
Educators, critics, and others will want a layered structure--one that
defines or links paths through the layers to allow users to reach
specific points. In LYNCH's view, layering will become increasingly
necessary, and not just within a single resource but across resources
(e.g., tracing mythology and cultural themes across several classics
databases as well as a database of Renaissance culture). This ability to
organize resources, to build things out of multiple other things on the
network or to select pieces of them, represented for LYNCH one of the key
aspects of network information.

Contending that information reuse constituted another significant issue,
LYNCH commended to the audience's attention Project NEEDS (i.e., National
Engineering Education Delivery System). This project's objective is to
produce a database of engineering courseware as well as the components
that can be used to develop new courseware. In a number of the existing
applications, LYNCH said, the issue of reuse (how much one can take apart
and reuse in other applications) was not being well considered. He also
raised the issue of active versus passive use, one aspect of which is
how much information will be manipulated locally by users. Most people,
he argued, may do a little browsing and then will wish to print. LYNCH
was uncertain how these resources would be used by the vast majority of
users in the network environment.

LYNCH next said a few words about X-Windows as a way of differentiating
between network access and networked information. A number of the
applications demonstrated at the Workshop could be rewritten to use X
across the network, so that one could run them from any X-capable
device--a workstation, an X terminal--and transact with a database across the
network. Although this opens up access a little, assuming one has enough
network to handle it, it does not provide an interface to develop a
program that conveniently integrates information from multiple databases.
X is a viewing technology that has limits. In a real sense, it is just a
graphical version of remote log-in across the network. X-type applications
represent only one step in the progression towards real access.

LYNCH next discussed barriers to the distribution of networked multimedia
information. The heart of the problem is a lack of standards to provide
the ability for computers to talk to each other, retrieve information,
and shuffle it around fairly casually. At the moment, little progress is
being made on standards for networked information; for example, present
standards do not cover images, digital voice, and digital video. A
useful tool kit of exchange formats for basic texts is only now being
assembled. The synchronization of content streams (i.e., synchronizing a
voice track to a video track, establishing temporal relations between
different components in a multimedia object) constitutes another issue
for networked multimedia that is just beginning to receive attention.

Underlying network protocols also need some work; good, real-time
delivery protocols on the Internet do not yet exist. In LYNCH's view,
highly important in this context is the notion of networked digital
object IDs, the ability of one object on the network to point to another
object (or component thereof) on the network. Serious bandwidth issues
also exist. LYNCH was uncertain if billion-bit-per-second networks would
prove sufficient if numerous people ran video in parallel.

LYNCH concluded by offering an issue for database creators to consider,
as well as several comments about what might constitute good trial
multimedia experiments. In a networked information world the database
builder or service builder (publisher) does not exercise the same
extensive control over the integrity of the presentation; strange
programs "munge" with one's data before the user sees it. Serious
thought must be given to what guarantees integrity of presentation. Part
of that is related to where one draws the boundaries around a networked
information service. This question of presentation integrity in
client-server computing has not been stressed enough in the academic
world, LYNCH argued, though commercial service providers deal with it
regularly.

Concerning multimedia, LYNCH observed that good multimedia at the moment
is hideously expensive to produce. He recommended producing multimedia
with either very high sale value, or multimedia with a very long life
span, or multimedia that will have a very broad usage base and whose
costs therefore can be amortized among large numbers of users. In this
connection, historical and humanistically oriented material may be a good
place to start, because it tends to have a longer life span than much of
the scientific material, as well as a wider user base. LYNCH noted, for
example, that American Memory fits many of the criteria outlined. He
remarked the extensive discussion about bringing the Internet or the
National Research and Education Network (NREN) into the K-12 environment
as a way of helping the American educational system.

LYNCH closed by noting that the kinds of applications demonstrated struck
him as excellent justifications of broad-scale networking for K-12, but
that at this time no "killer" application exists to mobilize the K-12
community to obtain connectivity.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Dearth of genuinely interesting applications on the network
a slow-changing situation * The issue of the integrity of presentation in
a networked environment * Several reasons why CD-ROM software does not
network *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

During the discussion period that followed LYNCH's presentation, several
additional points were made.

LYNCH reiterated even more strongly his contention that, historically,
once one goes outside high-end science and the group of those who need
access to supercomputers, there is a great dearth of genuinely
interesting applications on the network. He saw this situation changing
slowly, with some of the scientific databases and scholarly discussion
groups and electronic journals coming on as well as with the availability
of Wide Area Information Servers (WAIS) and some of the databases that
are being mounted there. However, many of those things do not seem to
have piqued great popular interest. For instance, most high school
students of LYNCH's acquaintance would not qualify as devotees of serious
molecular biology.

Concerning the issue of the integrity of presentation, LYNCH believed
that a couple of information providers have laid down the law at least on
certain things. For example, his recollection was that the National
Library of Medicine feels strongly that one needs to employ the
identifier field if he or she is to mount a database commercially. The
problem with a real networked environment is that one does not know who
is reformatting and reprocessing one's data when one enters a client
server mode. It becomes anybody's guess, for example, if the network
uses a Z39.50 server, or what clients are doing with one's data. A data
provider can say that his contract will only permit clients to have
access to his data after he vets them and their presentation and makes
certain it suits him. But LYNCH held out little expectation that the
network marketplace would evolve in that way, because it required too
much prior negotiation.

CD-ROM software does not network for a variety of reasons, LYNCH said.
He speculated that CD-ROM publishers are not eager to have their products
really hook into wide area networks, because they fear it will make their
data suppliers nervous. Moreover, until relatively recently, one had to
be rather adroit to run a full TCP/IP stack plus applications on a
PC-size machine, whereas nowadays it is becoming easier as PCs grow
bigger and faster. LYNCH also speculated that software providers had not
heard from their customers until the last year or so, or had not heard
from enough of their customers.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BESSER * Implications of disseminating images on the network; planning
the distribution of multimedia documents poses two critical
implementation problems * Layered approach represents the way to deal
with users' capabilities * Problems in platform design; file size and its
implications for networking * Transmission of megabyte size images
impractical * Compression and decompression at the user's end * Promising
trends for compression * A disadvantage of using X-Windows * A project at
the Smithsonian that mounts images on several networks *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Howard BESSER, School of Library and Information Science, University of
Pittsburgh, spoke primarily about multimedia, focusing on images and the
broad implications of disseminating them on the network. He argued that
planning the distribution of multimedia documents posed two critical
implementation problems, which he framed in the form of two questions:
1) What platform will one use and what hardware and software will users
have for viewing the material? and 2) How can one deliver a
sufficiently robust set of information in an accessible format in a
reasonable amount of time? Depending on whether network or CD-ROM is the
medium used, this question raises different issues of storage,
compression, and transmission.

Concerning the design of platforms (e.g., sound, gray scale, simple
color, etc.) and the various capabilities users may have, BESSER
maintained that a layered approach was the way to deal with users'
capabilities. A result would be that users with less powerful
workstations would simply have less functionality. He urged members of
the audience to advocate standards and accompanying software that handle
layered functionality across a wide variety of platforms.

BESSER also addressed problems in platform design, namely, deciding how
powerful a machine to design for when the largest number of users have
the least capable machines yet one desires higher functionality.
BESSER then proceeded to the question of file size and
its implications for networking. He discussed still images in the main.
For example, a digital color image that fills the screen of a standard
mega-pel workstation (Sun or Next) will require one megabyte of storage
for an eight-bit image or three megabytes of storage for a true color or
twenty-four-bit image. Lossless compression algorithms (that is,
computational procedures in which no data is lost in the process of
compressing [and decompressing] an image--the exact bit-representation is
maintained) might bring storage down to a third of a megabyte per image,
but not much further than that. The question of size makes it difficult
to fit an appropriately sized set of these images on a single disk or to
transmit them quickly enough on a network.
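
(Editor's note:  the arithmetic behind these figures can be checked
directly; the short Python calculation below assumes a one-million-pixel
"mega-pel" display and the roughly three-to-one lossless compression
BESSER cites.)

     pixels = 1_000_000              # one full-screen "mega-pel" image
     eight_bit = pixels * 1          # one byte per pixel  -> about 1 MB
     true_color = pixels * 3         # three bytes per pixel -> about 3 MB
     lossless = eight_bit / 3        # ~3:1 lossless compression -> ~1/3 MB
     print(eight_bit, true_color, round(lossless))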

With these full-screen mega-pel images at roughly a third of a megabyte
apiece, one gets 1,000-3,000 full-screen images on a one-gigabyte disk;
a standard CD-ROM holds approximately 60 percent of that.  Storing
images the size of a PC screen (just 8-bit color) increases storage
capacity to 4,000-12,000 images per gigabyte; 60 percent of that gives
the capacity of a CD-ROM, which still poses a problem.  One
cannot have full-screen, full-color images with lossless compression; one
must compress them or use a lower resolution. For megabyte-size images,
anything slower than a T-1 speed is impractical. For example, on a
fifty-six-kilobaud line, it takes three minutes to transfer a
one-megabyte file, if it is not compressed; and this speed assumes ideal
circumstances (no other user contending for network bandwidth). Thus,
questions of disk access, remote display, and current telephone
connection speed make transmission of megabyte-size images impractical.
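
(Editor's note:  a quick check of the capacity and transmission figures,
assuming a 650-megabyte CD-ROM, the one-third-megabyte compressed images
just described, and ten bits per byte of line overhead on an asynchronous
56-kilobaud connection.)

     GIGABYTE = 1_000_000_000
     CD_ROM = 650_000_000             # roughly 60-65 percent of a gigabyte
     IMAGE = 333_333                  # about a third of a megabyte

     print(GIGABYTE // IMAGE)         # ~3,000 compressed images per gigabyte
     print(CD_ROM // IMAGE)           # ~1,950 compressed images per CD-ROM

     # One uncompressed megabyte over a 56-kilobaud line, ideal conditions:
     seconds = (2**20 * 10) / 56_000
     print(round(seconds / 60, 1))    # ~3.1 minutes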

BESSER then discussed ways to deal with these large images, for example,
compression and decompression at the user's end. In this connection, the
questions of how much one is willing to lose in the compression process
and what image quality one needs in the first place remain open.  But
what is known is that such compression entails some loss of data.  BESSER
urged that
more studies be conducted on image quality in different situations, for
example, what kind of images are needed for what kind of disciplines, and
what kind of image quality is needed for a browsing tool, an intermediate
viewing tool, and archiving.

BESSER remarked two promising trends for compression: from a technical
perspective, algorithms that use what is called subjective redundancy
employ principles from visual psycho-physics to identify and remove
information from the image that the human eye cannot perceive; from an
interchange and interoperability perspective, the JPEG (i.e., Joint
Photographic Experts Group, an ISO standard) compression algorithms also
offer promise. These issues of compression and decompression, BESSER
argued, resembled those raised earlier concerning the design of different
platforms. Gauging the capabilities of potential users constitutes a
primary goal. BESSER advocated layering or separating the images from
the applications that retrieve and display them, to avoid tying them to
particular software.
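
(Editor's note:  a small Pillow sketch in Python of the lossy trade-off
BESSER describes.  The same image saved at several JPEG quality settings
yields very different file sizes; the input file name is a placeholder.)

     import io
     from PIL import Image  # Pillow imaging library

     image = Image.open("sample_image.png").convert("RGB")

     for quality in (95, 75, 50, 25):
         buffer = io.BytesIO()
         image.save(buffer, format="JPEG", quality=quality)
         print(f"quality {quality}: {buffer.tell()} bytes")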

BESSER detailed several lessons learned from his work at Berkeley with
Imagequery, especially the advantages and disadvantages of using
X-Windows. In the latter category, for example, retrieval is tied
directly to one's data, an intolerable situation in the long run on a
networked system. Finally, BESSER described a project of Jim Wallace at
the Smithsonian Institution, who is mounting images in an extremely
rudimentary way on the Compuserv and Genie networks and is preparing to
mount them on America On Line. Although the average user takes over
thirty minutes to download these images (assuming a fairly fast modem),
nevertheless, images have been downloaded 25,000 times.

BESSER concluded his talk with several comments on the business
arrangement between the Smithsonian and Compuserv. He contended that not
enough is known concerning the value of images.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Creating digitized photographic collections nearly
impossible except with large organizations like museums * Need for study
to determine quality of images users will tolerate *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

During the brief exchange between LESK and BESSER that followed, several
clarifications emerged.

LESK argued that the photographers were far ahead of BESSER: It is
almost impossible to create such digitized photographic collections
except with large organizations like museums, because all the
photographic agencies have been going crazy about this and will not sign
licensing agreements on any sort of reasonable terms. LESK had heard
that National Geographic, for example, had tried to buy the right to use
some image in some kind of educational production for $100 per image, but
the photographers will not touch it. They want accounting and payment
for each use, which cannot be accomplished within the system. BESSER
responded that a consortium of photographers, headed by a former National
Geographic photographer, had started assembling its own collection of
electronic reproductions of images, with the money going back to the
cooperative.

LESK contended that BESSER was unnecessarily pessimistic about multimedia
images, because people are accustomed to low-quality images, particularly
from video. BESSER urged the launching of a study to determine what
users would tolerate, what they would feel comfortable with, and what
absolutely is the highest quality they would ever need. Conceding that
he had adopted a dire tone in order to arouse people about the issue,
BESSER closed on a sanguine note by saying that he would not be in this
business if he did not think that things could be accomplished.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LARSEN * Issues of scalability and modularity * Geometric growth of the
Internet and the role played by layering * Basic functions sustaining
this growth * A library's roles and functions in a network environment *
Effects of implementation of the Z39.50 protocol for information
retrieval on the library system * The trade-off between volumes of data
and its potential usage * A snapshot of current trends *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Ronald LARSEN, associate director for information technology, University
of Maryland at College Park, first addressed the issues of scalability
and modularity. He noted the difficulty of anticipating the effects of
orders-of-magnitude growth, reflecting on the twenty years of experience
with the Arpanet and Internet. Recalling the day's demonstrations of
CD-ROM and optical disk material, he went on to ask if the field has yet
learned how to scale new systems to enable delivery and dissemination
across large-scale networks.

LARSEN focused on the geometric growth of the Internet from its inception
circa 1969 to the present, and the adjustments required to respond to
that rapid growth. To illustrate the issue of scalability, LARSEN
considered computer networks as including three generic components:
computers, network communication nodes, and communication media. Each
component scales (e.g., computers range from PCs to supercomputers;
network nodes scale from interface cards in a PC through sophisticated
routers and gateways; and communication media range from 2,400-baud
dial-up facilities through 4.5-Mbps backbone links, and eventually to
multigigabit-per-second communication lines), and architecturally, the
components are organized to scale hierarchically from local area networks
to international-scale networks. Such growth is made possible by
building layers of communication protocols, as BESSER pointed out.
By layering both physically and logically, a sense of scalability is
maintained from local area networks in offices, across campuses, through
bridges, routers, campus backbones, fiber-optic links, etc., up into
regional networks and ultimately into national and international
networks.

LARSEN then illustrated the geometric growth over a two-year period--
through September 1991--of the number of networks that comprise the
Internet. This growth has been sustained largely by the availability of
three basic functions: electronic mail, file transfer (ftp), and remote
log-on (telnet). LARSEN also reviewed the growth in the kind of traffic
that occurs on the network. Network traffic reflects the joint contributions
of a larger population of users and increasing use per user. Today one sees
serious applications involving moving images across the network--a rarity
ten years ago. LARSEN recalled and concurred with BESSER's main point
that the interesting problems occur at the application level.

LARSEN then illustrated a model of a library's roles and functions in a
network environment. He noted, in particular, the placement of on-line
catalogues onto the network and patrons obtaining access to the library
increasingly through local networks, campus networks, and the Internet.
LARSEN supported LYNCH's earlier suggestion that we need to address
fundamental questions of networked information in order to build
environments that scale in the information sense as well as in the
physical sense.

LARSEN supported the role of the library system as the access point into
the nation's electronic collections. Implementation of the Z39.50
protocol for information retrieval would make such access practical and
feasible. For example, this would enable patrons in Maryland to search
California libraries, or other libraries around the world that are
conformant with Z39.50, in a manner familiar to University of Maryland
patrons.  This client-server model also supports moving beyond
secondary content into primary content. (The notion of how one links
from secondary content to primary content, LARSEN said, represents a
fundamental problem that requires rigorous thought.) After noting
numerous network experiments in accessing full-text materials, including
projects supporting the ordering of materials across the network, LARSEN
revisited the issue of transmitting high-density, high-resolution color
images across the network and the large amounts of bandwidth they
require. He went on to address the bandwidth and synchronization
problems inherent in sending full-motion video across the network.
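
(Editor's note:  the Z39.50 client-server search LARSEN describes can be
pictured with the conceptual Python sketch below.  The classes and the
search function are hypothetical stand-ins, not the Z39.50 wire protocol
or any real client library; the host names are placeholders.)

     from dataclasses import dataclass, field

     @dataclass
     class SearchRequest:
         database: str
         query: str                     # e.g., a title or author search

     @dataclass
     class SearchResponse:
         hit_count: int = 0
         records: list = field(default_factory=list)   # e.g., MARC records

     def z3950_search(target_host, request):
         # A real client would open a session (Init), send a Search, then
         # issue Present requests for records; here the exchange is faked.
         print(f"searching {request.database} at {target_host} "
               f"for {request.query!r}")
         return SearchResponse()

     # The same familiar client-side request can be aimed at any conformant
     # target--a Maryland patron searching a California catalogue, say.
     for host in ("catalog.example-md.edu", "catalog.example-ca.edu"):
         z3950_search(host, SearchRequest("books", 'title="electronic texts"'))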

LARSEN illustrated the trade-off between volumes of data in bytes or
orders of magnitude and the potential usage of that data. He discussed
transmission rates (particularly, the time it takes to move various forms
of information), and what one could do with a network supporting
multigigabit-per-second transmission. At the moment, the network
environment includes a composite of data-transmission requirements,
volumes and forms, going from steady to bursty (high-volume) and from
very slow to very fast. This aggregate must be considered in the design,
construction, and operation of multigigabit networks.
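
(Editor's note:  the trade-off LARSEN charts between data volume and line
rate can be illustrated with the simple Python calculation below.  The
payload sizes are rough assumptions, and no protocol overhead or network
contention is modeled.)

     payloads = {                       # rough sizes, in bytes
         "e-mail message": 5_000,
         "bit-mapped page image": 100_000,
         "full-screen color image": 1_000_000,
         "digitized book": 50_000_000,
     }
     rates = {                          # bits per second
         "2,400-baud dial-up": 2_400,
         "T-1 (1.5 Mbps)": 1_500_000,
         "45-Mbps backbone": 45_000_000,
         "1-Gbps research link": 1_000_000_000,
     }
     for item, size in payloads.items():
         for line, bps in rates.items():
             print(f"{item:>24} over {line:<20}: {size * 8 / bps:12.3f} s")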

LARSEN's objective is to use the networks and library systems now being
constructed to increase access to resources wherever they exist, and
thus, to evolve toward an on-line electronic virtual library.

LARSEN concluded by offering a snapshot of current trends: continuing
geometric growth in network capacity and number of users; slower
development of applications; and glacial development and adoption of
standards. The challenge is to design and develop each new application
system with network access and scalability in mind.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BROWNRIGG * Access to the Internet cannot be taken for granted * Packet
radio and the development of MELVYL in 1980-81 in the Division of Library
Automation at the University of California * Design criteria for packet
radio * A demonstration project in San Diego and future plans * Spread
spectrum * Frequencies at which the radios will run and plans to
reimplement the WAIS server software in the public domain * Need for an
infrastructure of radios that do not move around *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Edwin BROWNRIGG, executive director, Memex Research Institute, first
polled the audience in order to seek out regular users of the Internet as
well as those planning to use it some time in the future. With nearly
everybody in the room falling into one category or the other, BROWNRIGG
made a point re access, namely that numerous individuals, especially those
who use the Internet every day, take for granted their access to it, the
speeds with which they are connected, and how well it all works.
However, as BROWNRIGG discovered between 1987 and 1989 in Australia,
if one wants access to the Internet but cannot afford it or faces some
physical barrier that prevents him or her from gaining access, it can
be extremely frustrating.  He suggested that because of economics and
physical barriers we were beginning to create a world of haves and have-nots
in the process of scholarly communication, even in the United States.

BROWNRIGG detailed the development of MELVYL in academic year 1980-81 in
the Division of Library Automation at the University of California, in
order to underscore the issue of access to the system, which at the
outset was extremely limited. In short, the project needed to build a
network, which at that time entailed use of satellite technology, that is,
putting earth stations on campus and also acquiring some terrestrial links
from the State of California's microwave system. The installation of
satellite links, however, did not solve the problem (which actually
formed part of a larger problem involving politics and financial resources).
For while the project team could get a signal onto a campus, it had no means
of distributing the signal throughout the campus. The solution involved
adopting a recent development in wireless communication called packet radio,
which combined the basic notion of packet-switching with radio. The project
used this technology to get the signal from a point on campus where it
came down, an earth station for example, into the libraries, because it
found that wiring the libraries, especially the older marble buildings,
would cost $2,000-$5,000 per terminal.

BROWNRIGG noted that, ten years ago, the project had neither the public
policy nor the technology that would have allowed it to use packet radio
in any meaningful way. Since then much had changed. He proceeded to
detail research and development of the technology, how it is being
deployed in California, and what direction he thought it would take.
The design criteria are to produce a high-speed, one-time, low-cost,
high-quality, secure, license-free device (packet radio) that one can
plug in and play today, forget about it, and have access to the Internet.
By high speed, BROWNRIGG meant 1 and 1.5 megabits per second.  Those units
have been built, he continued, and are in the process of being
type-certified by an independent underwriting laboratory so that they can
be type-licensed by the Federal Communications Commission. As is the
case with citizens band, one will be able to purchase a unit and not have
to worry about applying for a license.

The basic idea, BROWNRIGG elaborated, is to take high-speed radio data
transmission and create a backbone network that at certain strategic
points will "gateway" into a medium-speed packet radio (i.e., one that
runs at 38.4 kilobits per second), so that perhaps by 1994-1995 people
like those in the audience could, for the price of a VCR, purchase a
medium-speed radio for the office or home, have full network connectivity
to the Internet, and partake of all its services, with no need for an FCC
license and no regular bill from the local common carrier.  BROWNRIGG
presented several details of a demonstration project currently taking
place in San Diego and described plans, pending funding, to install a
full-bore network in the San Francisco area.  This network will have 600
nodes running at backbone speeds, and 100 of these nodes will be libraries,
which in turn will be the gateway ports to the 38.4-kilobit-per-second
radios that will give coverage for the neighborhoods surrounding the
libraries.

BROWNRIGG next explained Part 15.247, a new rule within Title 47 of the
Code of Federal Regulations enacted by the FCC in 1985. This rule
challenged the industry, which has only now risen to the occasion, to
build a radio that would run at no more than one watt of output power and
use a fairly exotic method of modulating the radio wave called spread
spectrum. Spread spectrum in fact permits the building of networks so
that numerous data communications can occur simultaneously, without
interfering with each other, within the same wide radio channel.
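
This property of spread spectrum can be illustrated with a toy
direct-sequence simulation.  The sketch below is purely illustrative and
is not drawn from the project's software; it assumes Python with NumPy,
gives two transmitters their own pseudo-noise chip codes, adds their
spread signals in the same wide channel, and shows each receiver
recovering its own bit by correlating against its own code.

    import numpy as np

    rng = np.random.default_rng(0)
    chips = 128                              # spreading factor (hypothetical)

    # Each user gets a private pseudo-noise code of +1/-1 chips.
    code_a = rng.choice([-1, 1], size=chips)
    code_b = rng.choice([-1, 1], size=chips)

    bit_a, bit_b = 1, -1                     # one data bit per user (+1 or -1)

    # Both transmitters spread their bits over the same wide channel at once.
    signal = bit_a * code_a + bit_b * code_b

    # Each receiver despreads by correlating the channel with its own code.
    recovered_a = np.sign(np.dot(signal, code_a))
    recovered_b = np.sign(np.dot(signal, code_b))

    print(recovered_a == bit_a, recovered_b == bit_b)   # True True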

BROWNRIGG explained that the frequencies at which the radios would run
are very short wave signals. They are well above standard microwave and
radar. With a radio wave that small, one watt becomes a tremendous punch
per bit and thus makes transmission at reasonable speed possible. In
order to minimize the potential for congestion, the project is
undertaking to reimplement software which has been available in the
networking business and is taken for granted now, for example, TCP/IP,
routing algorithms, bridges, and gateways. In addition, the project
plans to take the WAIS server software in the public domain and
reimplement it so that one can have a WAIS server on a Mac instead of a
Unix machine. The Memex Research Institute believes that libraries, in
particular, will want to use the WAIS servers with packet radio. This
project, which has a team of about twelve people, will run through 1993
and will include the 100 libraries already mentioned as well as other
professionals such as those in the medical profession, engineering, and
law. Thus, the need is to create an infrastructure of radios that do not
move around, which, BROWNRIGG hopes, will solve a problem not only for
libraries but for individuals who, by and large today, do not have access
to the Internet from their homes and offices.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Project operating frequencies *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

During a brief discussion period, which also concluded the day's
proceedings, BROWNRIGG stated that the project was operating at four
frequencies.  The slow-speed radios operate at 435 megahertz and will
later move up to 920 megahertz.  Among the high-speed radios, the
1-megabit-per-second units will run at 2.4 gigahertz and the
1.5-megabit-per-second units at 5.7 gigahertz.  At 5.7 gigahertz, rain
can be a factor, but it would have to be tropical rain, unlike what falls
in most parts of the United States.

******

SESSION IV. IMAGE CAPTURE, TEXT CAPTURE, OVERVIEW OF TEXT AND
IMAGE STORAGE FORMATS

William HOOTON, vice president of operations, I-NET, moderated this session.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
KENNEY * Factors influencing development of CXP * Advantages of using
digital technology versus photocopy and microfilm * A primary goal of
CXP; publishing challenges * Characteristics of copies printed * Quality
of samples achieved in image capture * Several factors to be considered
in choosing scanning * Emphasis of CXP on timely and cost-effective
production of black-and-white printed facsimiles * Results of producing
microfilm from digital files * Advantages of creating microfilm * Details
concerning production * Costs * Role of digital technology in library
preservation *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Anne KENNEY, associate director, Department of Preservation and
Conservation, Cornell University, opened her talk by observing that the
Cornell Xerox Project (CXP) has been guided by the assumption that the
ability to produce printed facsimiles or to replace paper with paper
would be important, at least for the present generation of users and
equipment.  She described three factors that influenced development of
the project:  1) because the project has emphasized the preservation of
deteriorating brittle books, the quality of what was produced had to be
sufficiently high to return a paper replacement to the shelf; 2) the
system had to be cost-effective, which meant cost-competitive with the
processes currently available, principally photocopy and microfilm; and
3) CXP was only interested in using new or currently available hardware
and software products.

KENNEY described the advantages that using digital technology offers over
both photocopy and microfilm: 1) The potential exists to create a higher
quality reproduction of a deteriorating original than conventional
light-lens technology. 2) Because a digital image is an encoded
representation, it can be reproduced again and again with no resulting
loss of quality, as opposed to the situation with light-lens processes,
in which there is a discernible difference between a second and a
subsequent generation of an image.  3) A digital image can be manipulated
in a number of ways to improve image capture; for example, Xerox has
developed a windowing application that enables one to capture a page
containing both text and illustrations in a manner that optimizes the
reproduction of both. (With light-lens technology, one must choose which
to optimize, text or the illustration; in preservation microfilming, the
current practice is to shoot an illustrated page twice, once to highlight
the text and the second time to provide the best capture for the
illustration.)  4) A digital image can also be edited:  density levels
can be adjusted to remove underlining and stains and to increase the
legibility of faint documents.  5) On-screen inspection can take place at the time of
initial setup and adjustments made prior to scanning, factors that
substantially reduce the number of retakes required in quality control.

A primary goal of CXP has been to evaluate the paper output printed on
the Xerox DocuTech, a high-speed printer that produces 600-dpi pages from
scanned images at a rate of 135 pages a minute.  KENNEY recounted several
publishing challenges in producing faithful and legible reproductions of
the originals, challenges that the 600-dpi copy for the most part
successfully met.  For example, many of the deteriorating volumes in the
project were heavily illustrated with fine line drawings or halftones, or
came in languages such as Japanese, in which characters built up from
varying strokes are difficult to reproduce at lower resolutions; a
surprising number of them also contained annotations and mathematical
formulas, which it was critical to be able to duplicate exactly.

KENNEY noted that 1) the copies are being printed on paper that meets the
ANSI standards for performance, 2) the DocuTech printer meets the machine
and toner requirements for proper adhesion of print to page, as described
by the National Archives, and thus 3) the paper product is considered to
be the archival equivalent of preservation photocopy.

KENNEY then discussed several samples of the quality achieved in the
project that had been distributed in a handout, for example, a copy of a
print-on-demand version of the 1911 Reed lecture on the steam turbine,
which contains halftones, line drawings, and illustrations embedded in
text; the first four loose pages in the volume compared the capture
capabilities of scanning to photocopy for a standard test target, the
IEEE standard 167A 1987 test chart. In all instances scanning proved
superior to photocopy, though only slightly more so in one.

Conceding the simplistic nature of her review of the quality of scanning
to photocopy, KENNEY described it as one representation of the kinds of
settings that could be used with scanning capabilities on the equipment
CXP uses. KENNEY also pointed out that CXP investigated the quality
achieved with binary scanning only, and noted the great promise in gray
scale and color scanning, whose advantages and disadvantages need to be
examined.  She argued further that scanning resolutions and file formats
can represent a complex trade-off among the time it takes to capture
material, file size, fidelity to the original, on-screen display,
printing, and equipment availability.  All these factors must be taken
into consideration.

CXP placed primary emphasis on the production in a timely and
cost-effective manner of printed facsimiles that consisted largely of
black-and-white text. With binary scanning, large files may be
compressed efficiently and in a lossless manner (i.e., no data is lost in
the process of compressing [and decompressing] an image--the exact
bit-representation is maintained) using Group 4 CCITT (i.e., the French
acronym for International Consultative Committee for Telegraph and
Telephone) compression. CXP was getting compression ratios of about
forty to one. Gray-scale compression, which primarily uses JPEG, is much
less economical and can represent a lossy compression (i.e., not
lossless), so that as one compresses and decompresses, the illustration
is subtly changed. While binary files produce a high-quality printed
version, it appears 1) that other combinations of spatial resolution with
gray and/or color hold great promise as well, and 2) that gray scale can
represent a tremendous advantage for on-screen viewing. The quality
associated with binary and gray scale also depends on the equipment used.
For instance, binary scanning produces a much better copy on a binary
printer.
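
For a sense of scale, the arithmetic behind those ratios can be sketched
as follows; the page dimensions are an assumption and the forty-to-one
figure is the one CXP reported, so the resulting sizes are only
illustrative.

    # Rough size of one 8.5" x 11" page scanned bitonally at 600 dpi.
    dpi = 600
    width_in, height_in = 8.5, 11.0        # assumed page dimensions
    raw_bytes = (width_in * dpi) * (height_in * dpi) / 8   # 1 bit per pixel

    g4_ratio = 40                          # ratio reported by CXP (~40:1)
    compressed_bytes = raw_bytes / g4_ratio

    print(f"raw: {raw_bytes / 1e6:.1f} MB, "
          f"Group 4: {compressed_bytes / 1e3:.0f} KB")
    # roughly 4.2 MB raw versus about 105 KB compressed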

Among CXP's findings concerning the production of microfilm from digital
files, KENNEY reported that the digital files for the same Reed lecture
were used to produce sample film using an electron beam recorder. The
resulting film was faithful to the image capture of the digital files,
and while CXP felt that the text and image pages represented in the Reed
lecture were superior to those of the light-lens film, the resolution
readings for the 600-dpi files were not as high as those of standard
microfilming.  KENNEY argued that the standards defined for light-lens
technology are not totally transferable to a digital environment.
Moreover, they are based on a definition of quality for a preservation
copy.  Although making
this case will prove to be a long, uphill struggle, CXP plans to continue
to investigate the issue over the course of the next year.

KENNEY concluded this portion of her talk with a discussion of the
advantages of creating film: it can serve as a primary backup and as a
preservation master to the digital file; it could then become the print
or production master and service copies could be paper, film, optical
disks, magnetic media, or on-screen display.

Finally, KENNEY presented details re production:

* Development and testing of a moderately-high resolution production
scanning workstation represented a third goal of CXP; to date, 1,000
volumes have been scanned, or about 300,000 images.

* The resulting digital files are stored and used to produce
hard-copy replacements for the originals and additional prints on
demand; although the initial costs are high, scanning technology
offers an affordable means for reformatting brittle material.

* A technician in production mode can scan 300 pages per hour when
performing single-sheet scanning, which is a necessity when working
with truly brittle paper; this figure is expected to increase
significantly with subsequent iterations of the software from Xerox;
a three-month time-and-cost study of scanning found that the average
300-page book would take about an hour and forty minutes to scan
(this figure included the time for setup, which involves keying in
primary bibliographic data, going into quality control mode to
define page size, establishing front-to-back registration, and
scanning sample pages to identify a default range of settings for
the entire book--functions not dissimilar to those performed by
filmers or those preparing a book for photocopy).

* The final step in the scanning process involved rescans, which
happily were few and far between, representing well under 1 percent
of the total pages scanned.

In addition to technician time, CXP costed out the equipment (amortized
over four years), the cost of storing and refreshing the digital files
every four years, and the cost of printing and binding (book-cloth
binding) a paper reproduction.  The total amounted to a little under $65
per single 300-page volume, with 30 percent overhead included--a figure
competitive with the prices currently charged by photocopy vendors.
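
The figures KENNEY cited can be combined in a simple back-of-the-envelope
calculation.  The sketch below merely rearranges the numbers reported
above (about 100 minutes per average 300-page volume and a little under
$65 per volume with 30 percent overhead); the working month assumed for
the technician is hypothetical, not a CXP figure.

    # Per-volume figures derived from the numbers cited above.
    minutes_per_volume = 100       # about 1 hour 40 minutes per 300-page book
    pages_per_volume = 300

    total_per_volume = 65.00       # reported total, 30% overhead included
    overhead_rate = 0.30
    direct_cost = total_per_volume / (1 + overhead_rate)

    # Hypothetical technician month: 8 hours a day, 21 working days.
    work_minutes = 8 * 60 * 21
    volumes_per_month = work_minutes / minutes_per_volume

    print(f"direct cost per volume: ${direct_cost:.2f}")
    print(f"cost per page: ${total_per_volume / pages_per_volume:.3f}")
    print(f"volumes per technician-month: {volumes_per_month:.0f}")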

Of course, with scanning, in addition to the paper facsimile, one is left
with a digital file from which subsequent copies of the book can be
produced for a fraction of the cost of photocopy, with readers afforded
choices in the form of these copies.

KENNEY concluded that digital technology offers an electronic means for a
library preservation effort to pay for itself. If a brittle-book program
included the means of disseminating reprints of books that are in demand
by libraries and researchers alike, the initial investment in capture
could be recovered and used to preserve additional but less popular
books. She disclosed that an economic model for a self-sustaining
program could be developed for CXP's report to the Commission on
Preservation and Access (CPA).

KENNEY stressed that the focus of CXP has been on obtaining high quality
in a production environment. The use of digital technology is viewed as
an affordable alternative to other reformatting options.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ANDRE * Overview and history of NATDP * Various agricultural CD-ROM
products created inhouse and by service bureaus * Pilot project on
Internet transmission * Additional products in progress *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pamela ANDRE, associate director for automation, National Agricultural
Text Digitizing Program (NATDP), National Agricultural Library (NAL),
presented an overview of NATDP, which has been underway at NAL the last
four years, before Judith ZIDAR discussed the technical details. ANDRE
defined agricultural information as a broad range of material going from
basic and applied research in the hard sciences to the one-page pamphlets
that are distributed by the cooperative state extension services on such
things as how to grow blueberries.

NATDP began in late 1986 with a meeting of representatives from the
land-grant library community to deal with the issue of electronic
information. NAL and forty-five of these libraries banded together to
establish this project--to evaluate the technology for converting what
were then source documents in paper form into electronic form, to provide
access to that digital information, and then to distribute it.
Distributing that material to the community--the university community as
well as the extension service community, potentially down to the county
level--constituted the group's chief concern.

Since January 1988 (when the microcomputer-based scanning system was
installed at NAL), NATDP has done a variety of things, concerning which
ZIDAR would provide further details. For example, the first technology
considered in the project's discussion phase was digital videodisc, which
indicates how long ago it was conceived.

Over the four years of this project, four separate CD-ROM products on
four different agricultural topics were created, two at a
scanning-and-OCR station installed at NAL, and two by service bureaus.
Thus, NATDP has gained comparative information in terms of those relative
costs. Each of these products contained the full ASCII text as well as
page images of the material, or between 4,000 and 6,000 pages of material
on these disks. Topics included aquaculture, food, agriculture and
science (i.e., international agriculture and research), acid rain, and
Agent Orange, which was the final product distributed (approximately
eighteen months before the Workshop).

The third phase of NATDP focused on delivery mechanisms other than
CD-ROM. At the suggestion of Clifford LYNCH, who was a technical
consultant to the project at this point, NATDP became involved with the
Internet and initiated a project with the help of North Carolina State
University, in which fourteen of the land-grant university libraries are
transmitting digital images over the Internet in response to interlibrary
loan requests--a topic for another meeting. At this point, the pilot
project had been completed for about a year and the final report would be
available shortly after the Workshop. In the meantime, the project's
success had led to its extension. (ANDRE noted that one of the first
things done under the program title was to select a retrieval package to
use with subsequent products; Windows Personal Librarian was the package
of choice after a lengthy evaluation.)

Three additional products had been planned and were in progress:

1) An arrangement with the American Society of Agronomy--a
professional society that has published the Agronomy Journal since
about 1908--to scan and create bit-mapped images of its journal.
ASA granted permission first to put this material in electronic form
and then to distribute it, to hold it at NAL, and to use these
electronic images as a mechanism to deliver documents or print out
material for patrons, among other uses. Effectively, NAL has the
right to use this material in support of its program.
(Significantly, this arrangement offers a potential cooperative
model for working with other professional societies in agriculture
to try to do the same thing--put the journals of particular interest
to agriculture research into electronic form.)

2) An extension of the earlier product on aquaculture.

3) The George Washington Carver Papers--a joint project with
Tuskegee University to scan and convert from microfilm some 3,500
images of Carver's papers, letters, and drawings.

It was anticipated that all of these products would appear no more than
six months after the Workshop.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ZIDAR * (A separate arena for scanning) * Steps in creating a database *
Image capture, with and without performing OCR * Keying in tracking data
* Scanning, with electronic and manual tracking * Adjustments during
scanning process * Scanning resolutions * Compression * De-skewing and
filtering * Image capture from microform: the papers and letters of
George Washington Carver * Equipment used for a scanning system *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Judith ZIDAR, coordinator, National Agricultural Text Digitizing Program
(NATDP), National Agricultural Library (NAL), illustrated the technical
details of NATDP, including her primary responsibility, scanning and
creating databases on a topic and putting them on CD-ROM.

(ZIDAR remarked that, in a separate arena from the CD-ROM projects,
though the processing of the material is nearly identical, NATDP is also
scanning material and loading it onto a NeXT microcomputer, which in turn
is linked to NAL's integrated library system.  Thus, searches in NAL's
bibliographic database will enable people to pull up actual page images
and text for any documents that have been entered.)

In accordance with the session's topic, ZIDAR focused her illustrated
talk on image capture, offering a primer on the three main steps in the
process: 1) assemble the printed publications; 2) design the database
(database design occurs in the process of preparing the material for
scanning; this step entails reviewing and organizing the material,
defining the contents--what will constitute a record, what kinds of
fields will be captured in terms of author, title, etc.); 3) perform a
certain amount of markup on the paper publications. NAL performs this
task record by record, preparing work sheets or some other sort of
tracking material and designing descriptors and other enhancements to be
added to the data that will not be captured from the printed publication.
Part of this process also involves determining NATDP's file and directory
structure: NATDP attempts to avoid putting more than approximately 100
images in a directory, because placing more than that on a CD-ROM would
reduce the access speed.
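
That rule of thumb can be expressed as a small layout routine.  The
sketch below is hypothetical--the directory naming scheme is invented
rather than NATDP's actual file and directory structure--and simply
distributes page-image files into numbered subdirectories of no more than
100 images each, assuming Python.

    import shutil
    from pathlib import Path

    MAX_PER_DIR = 100   # rough ceiling chosen to preserve CD-ROM access speed

    def lay_out_images(image_paths, dest_root):
        """Copy page images into subdirectories of at most MAX_PER_DIR files."""
        dest_root = Path(dest_root)
        for index, src in enumerate(sorted(image_paths)):
            subdir = dest_root / f"dir{index // MAX_PER_DIR:04d}"
            subdir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, subdir / Path(src).name)

    # Usage (hypothetical paths):
    # lay_out_images(Path("scans").glob("*.tif"), "cdrom_master")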

This up-front process takes approximately two weeks for a
6,000-7,000-page database. The next step is to capture the page images.
How long this process takes is determined by the decision whether or not
to perform OCR. Not performing OCR speeds the process, whereas text
capture requires greater care because of the quality of the image: it
has to be straighter and allowance must be made for text on a page, not
just for the capture of photographs.

NATDP keys in tracking data, that is, a standard bibliographic record
including the title of the book and the title of the chapter, which will
later either become the access information or will be attached to the
front of a full-text record so that it is searchable.

Images are scanned from a bound or unbound publication--in NATDP's case
chiefly from bound publications, because often they are the only copies
and the publications must be returned to the shelves.  NATDP
usually scans one record at a time, because its database tracking system
tracks the document in that way and does not require further logical
separating of the images. After performing optical character
recognition, NATDP moves the images off the hard disk and maintains a
volume sheet. Though the system tracks electronically, all the
processing steps are also tracked manually with a log sheet.

ZIDAR next illustrated the kinds of adjustments that one can make when
scanning from paper and microfilm, for example, redoing images that need
special handling, setting for dithering or gray scale, and adjusting for
brightness or for the whole book at one time.

NATDP is scanning at 300 dots per inch, a standard scanning resolution.
Though adequate for capturing text that is all of a standard size, 300
dpi is unsuitable for any kind of photographic material or for very small
text. Many scanners allow for different image formats, TIFF, of course,
being a de facto standard. But if one intends to exchange images with
other people, the ability to scan other image formats, even if they are
less common, becomes highly desirable.

CCITT Group 4 is the standard compression for normal black-and-white
images, JPEG for gray scale or color. ZIDAR recommended 1) using the
standard compressions, particularly if one attempts to make material
available and to allow users to download images and reuse them from
CD-ROMs; and 2) maintaining the ability to output an uncompressed image,
because in image exchange uncompressed images are more likely to be able
to cross platforms.
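
A minimal sketch of these recommendations, assuming Python with the
Pillow imaging library and hypothetical file names, would write the
bilevel master with Group 4 compression, a gray-scale derivative as JPEG,
and an uncompressed TIFF for exchange:

    from PIL import Image   # Pillow imaging library (assumed available)

    page = Image.open("page_scan.tif")          # hypothetical 300-dpi page

    # Black-and-white text page: 1-bit image with CCITT Group 4 compression.
    bilevel = page.convert("L").point(lambda v: 255 if v > 128 else 0, mode="1")
    bilevel.save("page_g4.tif", compression="group4")

    # Gray-scale or color illustration: JPEG.
    page.convert("L").save("page_gray.jpg", quality=85)

    # Keep the ability to write an uncompressed image for exchange
    # across platforms.
    page.save("page_raw.tif", compression=None)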

ZIDAR emphasized the importance of de-skewing and filtering as
requirements on NATDP's upgraded system. For instance, scanning bound
books, particularly books published by the federal government whose pages
are skewed, and trying to scan them straight if OCR is to be performed,
is extremely time-consuming. The same holds for filtering of
poor-quality or older materials.
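
De-skewing of this kind is commonly done by searching for the rotation
that best aligns the text lines.  The sketch below is a generic
illustration rather than NATDP's software; it assumes Python with NumPy
and Pillow, scores candidate angles by the variance of row-wise ink
counts, and rotates the page by the best-scoring angle.

    import numpy as np
    from PIL import Image

    def deskew(path, max_angle=3.0, step=0.25):
        """Estimate and correct small skew with a projection-profile search."""
        img = Image.open(path).convert("L")
        best_angle, best_score = 0.0, -1.0
        for angle in np.arange(-max_angle, max_angle + step, step):
            rotated = img.rotate(angle, expand=False, fillcolor=255)
            ink = 255 - np.asarray(rotated, dtype=float)  # dark pixels = ink
            score = ink.sum(axis=1).var()  # aligned text rows -> peaky profile
            if score > best_score:
                best_angle, best_score = angle, score
        return img.rotate(best_angle, expand=True, fillcolor=255), best_angle

    # straightened, angle = deskew("skewed_page.tif")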

ZIDAR described image capture from microform, using as an example three
reels from a sixty-seven-reel set of the papers and letters of George
Washington Carver that had been produced by Tuskegee University. These
resulted in approximately 3,500 images, which NATDP had had scanned by
its service contractor, Science Applications International Corporation
(SAIC). NATDP also created bibliographic records for access. (NATDP did
not have such specialized equipment as a microfilm scanner.)

Unfortunately, the process of scanning from microfilm was not an
unqualified success, ZIDAR reported: because microfilm frame sizes vary,
occasionally some frames were missed, which without spending much time
and money could not be recaptured.

OCR could not be performed from the scanned images of the frames.
Because the text bled, running OCR simply produced output that could not
even be edited.  NATDP tested for negative versus positive images,
landscape versus portrait orientation, and single- versus dual-page
microfilm, none of which seemed to affect the quality of the image; but
also on none of them could OCR be performed.

In selecting the microfilm they would use, therefore, NATDP had other
factors in mind. ZIDAR noted two factors that influenced the quality of
the images: 1) the inherent quality of the original and 2) the amount of
size reduction on the pages.

The Carver papers were selected because they are informative and visually
interesting, treat a single subject, and are valuable in their own right.
The images were scanned and divided into logical records by SAIC, then
delivered, and loaded onto NATDP's system, where bibliographic
information taken directly from the images was added. Scanning was
completed in summer 1991 and by the end of summer 1992 the disk was
scheduled to be published.

Problems encountered during processing included the following: Because
the microfilm scanning had to be done in a batch, adjustment for
individual page variations was not possible. The frame size varied on
account of the nature of the material, and therefore some of the frames
were missed while others were just partial frames.  The only way to go
back and capture this material was to print out the missing frame on the
microfilm reader and then scan the printed page, which was extremely
time-consuming.  The quality of the images
scanned from the printout of the microfilm compared unfavorably with that
of the original images captured directly from the microfilm. The
inability to perform OCR also was a major disappointment. At the time,
computer output microfilm was unavailable to test.

The equipment used for a scanning system was the last topic addressed by
ZIDAR.  The equipment one would purchase for a scanning system includes:

* a microcomputer, at least a 386, but preferably a 486;

* a large hard disk, 380 megabytes at minimum;

* a multi-tasking operating system that allows one to run some things in
batch in the background while scanning or doing text editing, for
example, Unix or OS/2 and, theoretically, Windows;

* a high-speed scanner and scanning software that allows one to make the
various adjustments mentioned earlier;

* a high-resolution monitor (150 dpi);

* OCR software and hardware to perform text recognition;

* an optical disk subsystem on which to archive all the images as the
processing is done; and

* file management and tracking software.

ZIDAR opined that the software one purchases was more important than the
hardware and might also cost more than the hardware; it was likely to
prove critical to the success or failure of one's system.  In addition to
a stand-alone scanning workstation for image capture, then, text capture
requires one or two editing stations networked to this scanning station
to perform editing. Editing the text takes two or three times as long as
capturing the images.

Finally, ZIDAR stressed the importance of buying an open system that allows
for more than one vendor, complies with standards, and can be upgraded.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
WATERS *Yale University Library's master plan to convert microfilm to
digital imagery (POB) * The place of electronic tools in the library of
the future * The uses of images and an image library * Primary input from
preservation microfilm * Features distinguishing POB from CXP and key
hypotheses guiding POB * Use of vendor selection process to facilitate
organizational work * Criteria for selecting vendor * Finalists and
results of process for Yale * Key factor distinguishing vendors *
Components, design principles, and some estimated costs of POB * Role of
preservation materials in developing imaging market * Factors affecting
quality and cost * Factors affecting the usability of complex documents
in image form *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Donald WATERS, head of the Systems Office, Yale University Library,
reported on the progress of a master plan for a project at Yale to
convert microfilm to digital imagery, Project Open Book (POB). Stating
that POB was in an advanced stage of planning, WATERS detailed, in
particular, the process of selecting a vendor partner and several key
issues under discussion as Yale prepares to move into the project itself.
He commented first on the vision that serves as the context of POB and
then described its purpose and scope.

WATERS sees the library of the future not necessarily as an electronic
library but as a place that generates, preserves, and improves for its
clients ready access to both intellectual and physical recorded
knowledge. Electronic tools must find a place in the library in the
context of this vision. Several roles for electronic tools include
serving as: indirect sources of electronic knowledge or as "finding"
aids (the on-line catalogues, the article-level indices, registers for
documents and archives); direct sources of recorded knowledge; full-text
images; and various kinds of compound sources of recorded knowledge (the
so-called compound documents of Hypertext, mixed text and image,
mixed-text image format, and multimedia).

POB is looking particularly at images and an image library:  the uses to
which images will be put (e.g., storage, printing, browsing, and then use
as input for other processes), OCR as a process subsequent to image
capture or to creating an image library, and also possibly the generation
of microfilm.

While input will come from a variety of sources, POB is considering
especially input from preservation microfilm. A possible outcome is that
the film and paper which provide the input for the image library
eventually may go off into remote storage, and that the image library may
be the primary access tool.

The purpose and scope of POB focus on imaging. Though related to CXP,
POB has two features which distinguish it: 1) scale--conversion of
10,000 volumes into digital image form; and 2) source--conversion from
microfilm. Given these features, several key working hypotheses guide
POB, including: 1) Since POB is using microfilm, it is not concerned with
the image library as a preservation medium. 2) Digital imagery can improve
access to recorded knowledge through printing and network distribution at
a modest incremental cost over that of microfilm. 3) Capturing and storing documents
in a digital image form is necessary to further improvements in access.
(POB distinguishes between the imaging, digitizing process and OCR,
which at this stage it does not plan to perform.)

Currently in its first or organizational phase, POB found that it could
use a vendor selection process to facilitate a good deal of the
organizational work (e.g., creating a project team and advisory board,
confirming the validity of the plan, establishing the cost of the project
and a budget, selecting the materials to convert, and then raising the
necessary funds).

POB developed numerous selection criteria, including: a firm committed
to image-document management, the ability to serve as systems integrator
in a large-scale project over several years, interest in developing the
requisite software as a standard rather than a custom product, and a
willingness to invest substantial resources in the project itself.

Two vendors, DEC and Xerox, were selected as finalists in October 1991,
and with the support of the Commission on Preservation and Access, each
was commissioned to generate a detailed requirements analysis for the
project and then to submit a formal proposal for the completion of the
project, which included a budget and costs. The terms were that POB would
pay the loser. The results for Yale of involving a vendor included:
broad involvement of Yale staff across the board at a relatively low
cost, which may have long-term significance in carrying out the project
(twenty-five to thirty university people are engaged in POB); better
understanding of the factors that affect corporate response to markets
for imaging products; a competitive proposal; and a more sophisticated
view of the imaging markets.

The most important factor that distinguished the vendors under
consideration was their identification with the customer. The size and
internal complexity of the company were also important factors.  POB was
looking at large companies that had substantial resources. In the end,
the process generated for Yale two competitive proposals, with Xerox's
the clear winner. WATERS then described the components of the proposal,
the design principles, and some of the costs estimated for the process.

Components are essentially four: a conversion subsystem, a
network-accessible storage subsystem for 10,000 books (and POB expects
200 to 600 dpi storage), browsing stations distributed on the campus
network, and network access to the image printers.

Among the design principles, POB wanted conversion at the highest
possible resolution.  Assuming TIFF files with Group 4 compression,
TCP/IP, and the Ethernet network on campus, POB wanted a
client-server approach with image documents distributed to the
workstations and made accessible through native workstation interfaces
such as Windows. POB also insisted on a phased approach to
implementation: 1) a stand-alone, single-user, low-cost entry into the
business with a workstation focused on conversion and allowing POB to
explore user access; 2) movement into a higher-volume conversion with
network-accessible storage and multiple access stations; and 3) a
high-volume conversion, full-capacity storage, and multiple browsing
stations distributed throughout the campus.

The costs proposed for start-up assumed the existence of the Yale network
and its two DocuTech image printers. Other start-up costs are estimated
at $1 million over the three phases. At the end of the project, the annual
operating costs estimated primarily for the software and hardware proposed
come to about $60,000, but these exclude costs for labor needed in the
conversion process, network and printer usage, and facilities management.

Finally, the selection process produced for Yale a more sophisticated
view of the imaging markets: the management of complex documents in
image form is not a preservation problem, not a library problem, but a
general problem in a broad, general industry. Preservation materials are
useful for developing that market because of the qualities of the
material. For example, much of it is out of copyright. The resolution
of key issues such as the quality of scanning and image browsing also
will affect development of that market.

The technology is readily available but changing rapidly. In this
context of rapid change, several factors affect quality and cost, to
which POB intends to pay particular attention, for example, the various
levels of resolution that can be achieved. POB believes it can bring
resolution up to 600 dpi, but an interpolation process from 400 to 600 is
more likely.  The variation in microfilm quality will prove to be a
highly important factor. POB may reexamine the standards used to film in
the first place by looking at this process as a follow-on to microfilming.

Other important factors include: the techniques available to the
operator for handling material, the ways of integrating quality control
into the digitizing work flow, and a work flow that includes indexing and
storage. POB's requirement was to be able to deal with quality control
at the point of scanning. Thus, thanks to Xerox, POB anticipates having
a mechanism which will allow it not only to scan in batch form, but to
review the material as it goes through the scanner and control quality
from the outset.

The standards for measuring quality and costs depend greatly on the uses
of the material, including subsequent OCR, storage, printing, and
browsing. But especially at issue for POB is the facility for browsing.
This facility, WATERS said, is perhaps the weakest aspect of imaging
technology and the most in need of development.

A variety of factors affect the usability of complex documents in image
form, among them: 1) the ability of the system to handle the full range
of document types, not just monographs but serials, multi-part
monographs, and manuscripts; 2) the location of the database of record
for bibliographic information about the image document, which POB wants
to enter once and in the most useful place, the on-line catalog; 3) a
document identifier for referencing the bibliographic information in one
place and the images in another; 4) the technique for making the basic
internal structure of the document accessible to the reader; and finally,
5) the physical presentation on the CRT of those documents. POB is ready
to complete this phase now. One last decision involves deciding which
material to scan.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * TIFF files constitute de facto standard * NARA's experience
with image conversion software and text conversion * RFC 1314 *
Considerable flux concerning available hardware and software solutions *
NAL through-put rate during scanning * Window management questions *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In the question-and-answer period that followed WATERS's presentation,
the following points emerged:

* ZIDAR's statement about using TIFF files as a standard meant de
facto standard. This is what most people use and typically exchange
with other groups, across platforms, or even occasionally across
display software.

* HOLMES commented on the unsuccessful experience of NARA in
attempting to run image-conversion software or to exchange between
applications: What are supposedly TIFF files go into other software
that is supposed to be able to accept TIFF but cannot recognize the
format and cannot deal with it, and thus renders the exchange
useless. Re text conversion, he noted the different recognition
rates obtained by substituting the make and model of scanners in
NARA's recent test of an "intelligent" character-recognition product
for a new company. In the selection of hardware and software,
HOLMES argued, software no longer constitutes the overriding factor
it did until about a year ago; rather, it is now perhaps important to
look at both.

* Danny Cohen and Alan Katz of the University of Southern California
Information Sciences Institute began circulating as an Internet RFC
(RFC 1314) about a month ago a standard for a TIFF interchange
format for Internet distribution of monochrome bit-mapped images,
which LYNCH said he believed would be used as a de facto standard.

* FLEISCHHAUER's impression from hearing these reports and thinking
about AM's experience was that there is considerable flux concerning
available hardware and software solutions. HOOTON agreed and
commented at the same time on ZIDAR's statement that the equipment
employed affects the results produced. One cannot draw a complete
conclusion by saying it is difficult or impossible to perform OCR
from scanning microfilm, for example, with that device, that set of
parameters, and system requirements, because numerous other people
are accomplishing just that, using other components, perhaps.
HOOTON opined that both the hardware and the software were highly
important. Most of the problems discussed today have been solved in
numerous different ways by other people. Though it is good to be
cognizant of various experiences, this is not to say that it will
always be thus.

* At NAL, the through-put rate of the scanning process for paper,
page by page, performing OCR, ranges from 300 to 600 pages per day;
not performing OCR is considerably faster, although how much faster
is not known. This is for scanning from bound books, which is much
slower.

* WATERS commented on window management questions: DEC proposed an
X-Windows solution which was problematical for two reasons. One was
POB's requirement to be able to manipulate images on the workstation
and bring them down to the workstation itself and the other was
network usage.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THOMA * Illustration of deficiencies in scanning and storage process *
Image quality in this process * Different costs entailed by better image
quality * Techniques for overcoming various deficiencies:  fixed
thresholding, dynamic thresholding, dithering, image merge * Page edge
effects *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

George THOMA, chief, Communications Engineering Branch, National Library
of Medicine (NLM), illustrated several of the deficiencies discussed by
the previous speakers. He introduced the topic of special problems by
noting the advantages of electronic imaging. For example, it is regenerable
because it is a coded file, and real-time quality control is possible with
electronic capture, whereas in photographic capture it is not.

One of the difficulties discussed in the scanning and storage process was
image quality which, without belaboring the obvious, means different
things for maps, medical X-rays, or broadcast television. In the case of
documents, THOMA said, image quality boils down to legibility of the
textual parts, and fidelity in the case of gray or color photo print-type
material. Legibility boils down to scan density, the standard in most
cases being 300 dpi. Increasing the resolution with scanners that
perform 600 or 1200 dpi, however, comes at a cost.

Better image quality entails at least four different kinds of costs: 1)
equipment costs, because a CCD (i.e., charge-coupled device) with a
greater number of elements costs more; 2) time costs that translate to
the actual capture costs, because manual labor is involved (the time is
also dependent on the fact that more data has to be moved around in the
machine in the scanning or network devices that perform the scanning as
well as the storage); 3) media costs, because at high resolutions larger
files have to be stored; and 4) transmission costs, because there is just
more data to be transmitted.

But while resolution takes care of the issue of legibility in image
quality, other deficiencies have to do with contrast and elements on the
page scanned or the image that needed to be removed or clarified. Thus,
THOMA proceeded to illustrate various deficiencies, how they are
manifested, and several techniques to overcome them.

Fixed thresholding was the first technique described, suitable for
black-and-white text, when the contrast does not vary over the page. One
can have many different threshold levels in scanning devices. Thus,
THOMA offered an example of extremely poor contrast, which resulted from
the fact that the stock was a heavy red. This is the sort of image that
when microfilmed fails to provide any legibility whatsoever. Fixed
thresholding is the way to change the black-to-red contrast to the
desired black-to-white contrast.
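
Fixed thresholding is the simplest of these operations.  A minimal
sketch, assuming Python with NumPy and Pillow and an illustrative
threshold level rather than any setting used at NLM, applies one global
cutoff to the whole page:

    import numpy as np
    from PIL import Image

    def fixed_threshold(path, level=128):
        """Binarize a page with a single global threshold level."""
        gray = np.asarray(Image.open(path).convert("L"))
        binary = np.where(gray > level, 255, 0).astype(np.uint8)
        return Image.fromarray(binary)   # light pixels -> white, dark -> black

    # fixed_threshold("red_stock_page.tif", level=160).save("page_bw.tif")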

Other examples included material that had been browned or yellowed by
age. This was also a case of contrast deficiency, and correction was
done by fixed thresholding. A final example boils down to the same
thing, slight variability, but it is not significant. Fixed thresholding
solves this problem as well. The microfilm equivalent is certainly legible,
but it comes with dark areas. Though THOMA did not have a slide of the
microfilm in this case, he did show the reproduced electronic image.

When one has variable contrast over a page or the lighting over the page
area varies, especially in the case where a bound volume has light
shining on it, the image must be processed by a dynamic thresholding
scheme. One scheme, dynamic averaging, allows the threshold level not to
be fixed but to be recomputed for every pixel from the neighboring
characteristics. The neighbors of a pixel determine where the threshold
should be set for that pixel.
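
Dynamic averaging can be sketched in the same terms; the window size and
offset below are illustrative assumptions, not NLM's settings.  Each
pixel is compared with the mean of its own neighborhood, so the threshold
in effect tracks uneven lighting across the page:

    import numpy as np
    from PIL import Image
    from scipy.ndimage import uniform_filter

    def dynamic_threshold(path, window=31, offset=10):
        """Binarize with a per-pixel threshold from the neighborhood mean."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=float)
        local_mean = uniform_filter(gray, size=window)  # neighborhood average
        binary = np.where(gray > local_mean - offset, 255, 0).astype(np.uint8)
        return Image.fromarray(binary)

    # dynamic_threshold("stained_page.tif").save("page_cleaned.tif")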

THOMA showed an example of a page that had been made deficient by a
variety of techniques, including a burn mark, coffee stains, and a yellow
marker. Application of a fixed-thresholding scheme, THOMA argued, might
take care of several deficiencies on the page but not all of them.
Performing the calculation for a dynamic threshold setting, however,
removes most of the deficiencies so that at least the text is legible.

Another problem is representing gray levels with black-and-white pixels,
done by a process known as dithering or electronic screening.  But dithering
does not provide good image quality for pure black-and-white textual
material. THOMA illustrated this point with examples. Although its
suitability for photoprint is the reason for electronic screening or
dithering, it cannot be used for every compound image. In the document
that was distributed by CXP, THOMA noticed that the dithered image of the
IEEE test chart evinced some deterioration in the text. He presented an
extreme example of deterioration in the text in which compound
documents had to be set right by other techniques.  The technique
illustrated by the present example was an image merge in which the page
is scanned twice and the settings go from fixed threshold to the
dithering matrix; the resulting images are merged to give the best
results with each technique.
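
Both steps can be sketched together.  In the illustration below (generic
Python code, not THOMA's; the 4x4 Bayer matrix and the rectangular photo
window are assumptions), ordered dithering renders gray levels with
black-and-white pixels, and the dithered rendering of an illustration
region is merged with a fixed-threshold rendering of the surrounding
text:

    import numpy as np
    from PIL import Image

    # 4x4 Bayer matrix of ordered-dither thresholds, scaled into (0, 1).
    BAYER4 = (np.array([[ 0,  8,  2, 10],
                        [12,  4, 14,  6],
                        [ 3, 11,  1,  9],
                        [15,  7, 13,  5]]) + 0.5) / 16.0

    def ordered_dither(gray):
        h, w = gray.shape
        tiled = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return np.where(gray / 255.0 > tiled, 255, 0).astype(np.uint8)

    gray = np.asarray(Image.open("compound_page.tif").convert("L"),
                      dtype=float)

    text_pass = np.where(gray > 128, 255, 0).astype(np.uint8)  # fixed threshold
    photo_pass = ordered_dither(gray)                          # dithered pass

    merged = text_pass.copy()
    top, bottom, left, right = 400, 1200, 300, 1100  # hypothetical photo window
    merged[top:bottom, left:right] = photo_pass[top:bottom, left:right]
    Image.fromarray(merged).save("merged_page.tif")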

THOMA illustrated how dithering is also used in nonphotographic or
nonprint materials with an example of a grayish page from a medical text,
which was reproduced to show all of the gray that appeared in the
original. Dithering provided a reproduction of all the gray in the
original of another example from the same text.

THOMA finally illustrated the problem of bordering, or page-edge,
effects. Books and bound volumes that are placed on a photocopy machine
or a scanner produce page-edge effects that are undesirable for two
reasons: 1) the aesthetics of the image; after all, if the image is to
be preserved, one does not necessarily want to keep all of its
deficiencies; 2) compression (with the bordering problem THOMA
illustrated, the compression ratio deteriorated tremendously). One way
to eliminate this more serious problem is to have the operator at the
point of scanning window the part of the image that is desirable and
automatically turn all of the pixels out of that picture to white.
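
The windowing fix amounts to whiting out everything outside an
operator-chosen rectangle, as in the following minimal sketch (the
coordinates are hypothetical):

    import numpy as np
    from PIL import Image

    def window_to_white(path, box):
        """Keep pixels inside box = (left, top, right, bottom); rest to white."""
        img = np.asarray(Image.open(path).convert("L")).copy()
        left, top, right, bottom = box
        mask = np.ones_like(img, dtype=bool)
        mask[top:bottom, left:right] = False   # False marks the kept window
        img[mask] = 255                        # white out the book-edge border
        return Image.fromarray(img)

    # window_to_white("bound_scan.tif", (120, 90, 2400, 3200)).save("clean.tif")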

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FLEISCHHAUER * AM's experience with scanning bound materials * Dithering
*
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Carl FLEISCHHAUER, coordinator, American Memory, Library of Congress,
reported AM's experience with scanning bound materials, which he likened
to the problems involved in using photocopying machines. Very few
devices in the industry offer book-edge scanning, let alone book cradles.
The problem may be unsolvable, FLEISCHHAUER said, because a large enough
market does not exist for a preservation-quality scanner. AM is using a
Kurzweil scanner, which is a book-edge scanner now sold by Xerox.

Devoting the remainder of his brief presentation to dithering,
FLEISCHHAUER related AM's experience with a contractor who was using
unsophisticated equipment and software to reduce moire patterns from
printed halftones. AM took the same image and used the dithering
algorithm that forms part of the same Kurzweil Xerox scanner; it
disguised moire patterns much more effectively.

FLEISCHHAUER also observed that dithering produces a binary file which is
useful for numerous purposes, for example, printing it on a laser printer
without having to "re-halftone" it. But it tends to defeat efficient
compression, because the very thing that dithers to reduce moire patterns
also tends to work against compression schemes. AM thought the
difference in image quality was worth it.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION * Relative use as a criterion for POB's selection of books to
be converted into digital form *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

During the discussion period, WATERS noted that one of the criteria for
selecting books among the 10,000 to be converted into digital image form
would be how much relative use they would receive--a subject still
requiring evaluation. The challenge will be to understand whether
coherent bodies of material will increase usage or whether POB should
seek material that is being used, scan that, and make it more accessible.
POB might decide to digitize materials that are already heavily used, in
order to make them more accessible and decrease wear on them. Another
approach would be to provide a large body of intellectually coherent
material that may be used more in digital form than it is currently used
in microfilm. POB would seek material that was out of copyright.

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BARONAS * Origin and scope of AIIM * Types of documents produced in
AIIM's standards program * Domain of AIIM's standardization work * AIIM's
structure * TC 171 and MS23 * Electronic image management standards *
Categories of EIM standardization where AIIM standards are being
developed *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Jean BARONAS, senior manager, Department of Standards and Technology,
Association for Information and Image Management (AIIM), described the
not-for-profit association and the national and international programs
for standardization in which AIIM is active.

Accredited for twenty-five years as the nation's standards development
organization for document image management, AIIM began life in a library
community developing microfilm standards. Today the association
maintains both its library and business-image management standardization
activities--and has moved into electronic image-management
standardization (EIM).

BARONAS defined the program's scope. AIIM deals with: 1) the
terminology of standards and of the technology it uses; 2) methods of
measurement for the systems, as well as quality; 3) methodologies for
users to evaluate and measure quality; 4) the features of apparatus used
to manage and edit images; and 5) the procedures used to manage images.

BARONAS noted that three types of documents are produced in the AIIM
standards program: the first two, accredited by the American National
Standards Institute (ANSI), are standards and standard recommended
practices. Recommended practices differ from standards in that they
contain more tutorial information. A technical report is not an ANSI
standard. Because AIIM's policies and procedures for developing
standards are approved by ANSI, its standards are labeled ANSI/AIIM,
followed by the number and title of the standard.

BARONAS then illustrated the domain of AIIM's standardization work. For
example, AIIM is the administrator of the U.S. Technical Advisory Group
(TAG) to the International Standards Organization's (ISO) technical
committee, TC 171, Micrographics and Optical Memories for Document and
Image Recording, Storage, and Use. AIIM officially works through ANSI in
the international standardization process.

BARONAS described AIIM's structure, including its board of directors, its
standards board of twelve individuals active in the image-management
industry, its strategic planning and legal admissibility task forces, and
its National Standards Council, which is comprised of the members of a
number of organizations who vote on every AIIM standard before it is
published. BARONAS pointed out that AIIM's liaisons deal with numerous
other standards developers, including the optical disk community, office
and publishing systems, image-codes-and-character set committees, and the
National Information Standards Organization (NISO).

BARONAS illustrated the procedures of TC 171, which covers all aspects of
image management. When AIIM's national program has conceptualized a new
project, it is usually submitted to the international level, so that the
member countries of TC 171 can simultaneously work on the development of
the standard or the technical report. BARONAS also illustrated a classic
microfilm standard, MS23, which deals with numerous imaging concepts that
apply to electronic imaging.  Originally developed in the 1970s, revised
in the 1980s, and revised again in 1991, this standard is scheduled for
another revision. MS23 is an active standard whereby users may propose
new density ranges and new methods of evaluating film images in the
standard's revision.

BARONAS detailed several electronic image-management standards, for
instance, ANSI/AIIM MS44, a quality-control guideline for scanning 8.5"
by 11" black-and-white office documents. This standard is used with the
IEEE fax image--a continuous tone photographic image with gray scales,
text, and several continuous tone pictures--and AIIM test target number
2, a representative document used in office document management.

BARONAS next outlined the four categories of EIM standardization in which
AIIM standards are being developed: transfer and retrieval, evaluation,
optical disc and document scanning applications, and design and
conversion of documents. She detailed several of the main projects of
each: 1) in the category of image transfer and retrieval, a bi-level
image transfer format, ANSI/AIIM MS53, which is a proposed standard that
describes a file header for image transfer between unlike systems when
the images are compressed using G3 and G4 compression; 2) the category of
image evaluation, which includes the AIIM-proposed TR26 tutorial on image
resolution (this technical report will treat the differences and
similarities between classical or photographic and electronic imaging);
3) design and conversion, which includes a proposed technical report
called "Forms Design Optimization for EIM" (this report considers how
general-purpose business forms can be best designed so that scanning is
optimized; reprographic characteristics such as type, rules, background,
tint, and color will likewise be treated in the technical report); 4)
disk and document scanning applications includes a project a) on planning
platters and disk management, b) on generating an application profile for
EIM when images are stored and distributed on CD-ROM, and c) on
evaluating SCSI2, and how a common command set can be generated for SCSI2
so that document scanners are more easily integrated. (ANSI/AIIM MS53
will also apply to compressed images.)

******

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BATTIN * The implications of standards for preservation * A major
obstacle to successful cooperation * A hindrance to access in the digital
environment * Standards a double-edged sword for those concerned with the
preservation of the human record * Near-term prognosis for reliable
archival standards * Preservation concerns for electronic media * Need
for reconceptualizing our preservation principles * Standards in the real
world and the politics of reproduction * Need to redefine the concept of
archival and to begin to think in terms of life cycles * Cooperation and
the La Guardia Eight * Concerns generated by discussions on the problems
of preserving text and image * General principles to be adopted in a
world without standards *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Patricia BATTIN, president, the Commission on Preservation and Access
(CPA), addressed the implications of standards for preservation. She
listed several areas where the library profession and the analog world of
the printed book had made enormous contributions over the past hundred
years--for example, in bibliographic formats, binding standards, and, most
important, in determining what constitutes longevity or archival quality.

Although standards have lightened the preservation burden through the
development of national and international collaborative programs,
nevertheless, a pervasive mistrust of other people's standards remains a
major obstacle to successful cooperation, BATTIN said.

The zeal to achieve perfection, regardless of the cost, has hindered
rather than facilitated access in some instances, and in the digital
environment, where no real standards exist, has brought an ironically
just reward.

BATTIN argued that standards are a double-edged sword for those concerned
with the preservation of the human record, that is, the provision of
access to recorded knowledge in a multitude of media as far into the
future as possible. Standards are essential to facilitate
interconnectivity and access, but, BATTIN said, as LYNCH pointed out
yesterday, if set too soon they can hinder creativity, expansion of
capability, and the broadening of access. The characteristics of
standards for digital imagery differ radically from those for analog
imagery. And the nature of digital technology implies continuing
volatility and change. To reiterate, precipitous standard-setting can
inhibit creativity, but delayed standard-setting results in chaos.

Since in BATTIN'S opinion the near-term prognosis for reliable archival
standards, as defined by librarians in the analog world, is poor, two
alternatives remain: standing pat with the old technology, or
reconceptualizing.

Preservation concerns for electronic media fall into two general domains.
One is the continuing assurance of access to knowledge originally
generated, stored, disseminated, and used in electronic form. This
domain contains several subdivisions, including 1) the closed,
proprietary systems discussed the previous day, bundled information such
as electronic journals and government agency records, and electronically
produced or captured raw data; and 2) the application of digital
technologies to the reformatting of materials originally published on a
deteriorating analog medium such as acid paper or videotape.

The preservation of electronic media requires a reconceptualizing of our
preservation principles during a volatile, standardless transition which
may last far longer than any of us envision today. BATTIN urged the
necessity of shifting focus from assessing, measuring, and setting
standards for the permanence of the medium to the concept of managing
continuing access to information stored on a variety of media and
requiring a variety of ever-changing hardware and software for access--a
fundamental shift for the library profession.

BATTIN offered a primer on how to move forward with reasonable confidence
in a world without standards. Her comments fell roughly into two sections:
1) standards in the real world and 2) the politics of reproduction.

In regard to real-world standards, BATTIN argued the need to redefine the
concept of archive and to begin to think in terms of life cycles. In


 

