Some people eat, sleep and chew gum, I do genealogy and write...

Sunday, September 1, 2019

The Ultimate Digital Preservation Guide, Part Twelve: How to Begin to Take Archive Quality Photos

Making archive-quality images is not as simple as running a scanner or casually clicking a camera. Once you become involved with archive-quality work, you soon learn that there are three ways to look at the "quality," or resolution, of an image: DPI, PPI, and LPI. (Some of this repeats earlier posts, but it is worth reviewing at this point.)

DPI is the acronym for dots per inch. It is a unit of measure for the number of discrete dots laid down per inch by a printing device, whether a dot matrix, laser, or inkjet printer, all of which build images from dots of ink or toner. The most common measure is 300 dpi, the standard for inexpensive laser printers. At 300 dpi, the dots on a page of text cannot be seen without magnification. Higher resolutions of 600, 1200, or even more dpi are readily available from more expensive machines, for a price.

PPI is the acronym for pixels per inch. Camera sensors are usually measured in millions of pixels, or Megapixels. A Megapixel is a unit of graphic resolution equal to one million pixels (in strict binary terms, 2^20 = 1,048,576 pixels, although camera makers round to an even million). Each pixel serves essentially the same function as a dot of ink or toner from a printer. There are digital cameras readily available to consumers, at a price, with sensors in excess of 100 Megapixels. For comparison, the current Apple iPhone camera has a 12 Megapixel sensor.
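The relationship between a sensor's pixel dimensions and its Megapixel rating is simple arithmetic. A short Python sketch (the 4032 x 3024 dimensions below are a typical layout for a 12 Megapixel sensor, used only as an example):

```python
# Sketch: convert sensor pixel dimensions to Megapixels.
# Uses the camera makers' convention of 1 MP = 1,000,000 pixels.

def megapixels(width_px: int, height_px: int) -> float:
    """Return the sensor resolution in Megapixels."""
    return width_px * height_px / 1_000_000

# A 12 MP sensor, such as the one in recent iPhones, is roughly 4032 x 3024:
print(megapixels(4032, 3024))  # 12.192768, marketed as "12 MP"
```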

LPI is the acronym for lines per inch. This term is most frequently used by printers and some engineers to indicate the resolution of printed documents. A line consists of halftone dots (physical ink dots in shades from gray to black) laid down by a printing device to create different tones, and LPI measures the distance between the lines in a halftone grid. Although still in use, LPI is mostly an outdated measurement, since outside of some commercial printing there are almost no halftone printers still in use.

It may seem obvious but for the purpose of digitizing original documents and records, the "resolution" of the original will always determine the upper limit of the resolution of a copy whether digital or photocopy. This is the ultimate reason why copies of copies always decline in resolution.

The main challenge of discussing "resolution" in the context of digitizing records and documents is that none of these measurement systems addresses the issue of readability. For example, if the original document is unreadable due to fading ink, water damage, mold, or some other issue, the highest possible resolution will not help restore the lost information. There are ways to capture information from damaged documents but that will be the subject of another installment of this series.

Another issue is that focusing solely on pixels, dots per inch, or lines per inch ignores how the digital image will actually be displayed and used. Computer monitor and TV resolution is measured in the number of pixels horizontally and vertically across the screen, while screen size, interestingly, is measured diagonally. Unless the screen is a perfect square, the horizontal and vertical measurements will always differ. For example, one of the highest-resolution monitors available in 2019 has a resolution of 5120 x 2880. A more common FHD (1920 x 1080) monitor displays about a 2.07-megapixel image per frame, while a 4K UHD (3840 x 2160) monitor displays about 8.3 megapixels. No matter how high the resolution of your original digital image, your view of it is limited by the resolution of your monitor. However, this limitation can be largely overcome by software that lets the user zoom in on an image and view it at the maximum resolution available. Creating a zoomable image is quite complex and probably beyond the reach of anyone without extensive programming experience, but some commercially available programs provide a zoom function, which usually requires multiple images at different resolutions.
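The per-frame megapixel figures above are just the product of the horizontal and vertical pixel counts. A quick Python sketch to verify them:

```python
# Sketch: per-frame pixel counts for common monitor resolutions,
# expressed in Megapixels (1 MP = 1,000,000 pixels).

def frame_megapixels(horizontal_px: int, vertical_px: int) -> float:
    """Return the number of Megapixels in one displayed frame."""
    return horizontal_px * vertical_px / 1_000_000

print(frame_megapixels(1920, 1080))  # FHD:    2.0736  (~2.07 MP)
print(frame_megapixels(3840, 2160))  # 4K UHD: 8.2944  (~8.3 MP)
print(frame_megapixels(5120, 2880))  # 5K:     14.7456 (~14.7 MP)
```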

It also makes little sense to push digitization to higher and higher resolution when the upper limit of effective resolution is the human eye. The human eye takes in a field of view of almost 180 degrees, unless limited by damage to the eye. The total resolution of the human eye has been estimated at about 576 megapixels. See "How Many Megapixels Is the Human Eye?" However, it is important to understand that only a small fraction of the eye's sensors are ever used to look at a document. For example, as I type on my computer, the screen takes up only a small percentage of the total scene in view unless I am sitting with my nose against it. So the eye seldom uses all of its capacity to view objects.

It is better to ask the question about the eye's resolution in a different way. What is the effective resolution of the human eye when compared to dpi? The answer is probably close to the number expressed by this article: "What is the highest resolution humans can distinguish?"
The visual resolution of the human eye is about 1 arc minute. 
At a viewing distance of 20″, that translates to about 170 dpi (or pixels-per-inch / PPI), which equals a dot pitch of around 0.14 mm.  LCD monitors today have a dot pitch of .18mm to .24mm.
If the resolution of the digital image exceeds the visual resolution of the eye, then the image can be further away and still appear to have the same detail. That is why photos on highway billboards look sharp from a distance. Visual resolution is easy to understand when you are trying to read a street sign or some other printed notice from far away. When you are digitizing documents, you need to match the digital resolution of your imaging device (camera or scanner) to the original size of the document. For a detailed explanation of this process see "Printing and Scanning Resolution DPI / PPI Calculator."
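The arithmetic behind the quoted figures can be checked directly: an eye resolving one arc minute, at a given viewing distance, corresponds to a particular dot pitch and dpi. A sketch in Python, assuming the one-arc-minute figure quoted above:

```python
import math

def eye_limit_dpi(viewing_distance_in: float, arc_minutes: float = 1.0) -> float:
    """Return the dpi at which dots at the given viewing distance (inches)
    become indistinguishable to an eye resolving `arc_minutes` of arc."""
    # Smallest resolvable dot spacing, in inches, at this distance:
    dot_pitch_in = viewing_distance_in * math.tan(math.radians(arc_minutes / 60))
    return 1 / dot_pitch_in

dpi = eye_limit_dpi(20)       # ~172 dpi at a 20-inch viewing distance
dot_pitch_mm = 25.4 / dpi     # ~0.15 mm dot pitch
print(round(dpi), round(dot_pitch_mm, 2))
```

This reproduces the article's "about 170 dpi" and roughly 0.14-0.15 mm dot pitch at a 20-inch viewing distance.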

If you are imaging a lot of documents or records of different sizes, then it is wise to have some excess resolution. Some large digitization projects presently use digital cameras with 50 Megapixel sensors. The detail in the resulting images allows the user, with appropriate software, to zoom in and see small details. For printing, there are several charts online that tell you the number of Megapixels you need to print at 300 dpi. Here is one example: "Megapixel Chart." A camera with a 50 Megapixel sensor can produce a 300 dpi print of roughly 19 x 29 inches, approaching poster size. If the original is 8 1/2 by 11 inches (standard US paper size) you would really only need about a 10 Megapixel sensor.
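These print-size calculations are simple division and multiplication. The sketch below assumes a 50 Megapixel sensor laid out as 8688 x 5792 pixels (the dimensions of some current 50 MP cameras), purely for illustration:

```python
# Sketch: relate sensor pixel dimensions, print size, and dpi.

def max_print_size_in(width_px: int, height_px: int, dpi: int = 300):
    """Largest print, in inches, a sensor supports at the given dpi."""
    return width_px / dpi, height_px / dpi

def required_megapixels(width_in: float, height_in: float, dpi: int = 300) -> float:
    """Megapixels needed to print a document of this size at the given dpi."""
    return (width_in * dpi) * (height_in * dpi) / 1_000_000

# A ~50 MP sensor (8688 x 5792) at 300 dpi yields roughly a 29 x 19 inch print:
print(max_print_size_in(8688, 5792))

# A standard US letter page (8.5 x 11 in) at 300 dpi needs only ~8.4 MP:
print(required_megapixels(8.5, 11))
```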

What does all this really mean? It means that to achieve archive quality images you have to understand and take into account all of the above complex relationships and much more. As I continue this series, I will be discussing other aspects of the digitization process including lighting, depth of field, and many other issues in more detail. After all, I am calling this series the Ultimate Digital Preservation Guide so the series will have to cover a lot of technical details.

Here are the previous posts in this series.

Part One:
Part Two:
Part Three:
Part Four:
Part Five:
Part Six:
Part Seven:
Part Eight:
Part Nine:
Part Ten:
Part Eleven:
