The photo above was scanned from a small, 3 x 5 inch photo. As you can see, the original is badly creased and torn. After spending some time with Adobe Photoshop, here is a first pass at removing some of the most obvious defects (click on the images to get a full-size view).
The original image was scanned on a Canon flatbed scanner at 300 ppi (pixels per inch) and saved in TIFF format. I used the Healing Brush tool and the Clone Stamp tool to erase the defects in the original, then adjusted the contrast and sharpened the image slightly. It is important to understand that all editing of digital images is destructive: each edit discards data, and the original scan already contains as much data as it is possible to obtain. You can try scanning at a higher ppi, but the higher setting will likely not improve the quality of the output while measurably increasing the file size.
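The arithmetic behind that trade-off is easy to check. Here is a rough sketch in Python (assuming an uncompressed 24-bit RGB image, 3 bytes per pixel; real TIFF files add a header and may be compressed, so actual sizes will differ somewhat):

```python
# Estimate pixel dimensions and uncompressed data size of a scanned photo.
# Assumption: 24-bit RGB, i.e. 3 bytes per pixel, no compression.

def scan_size(width_in, height_in, ppi, bytes_per_pixel=3):
    """Return (columns, rows, total pixels, raw bytes) for a scan."""
    cols = int(width_in * ppi)
    rows = int(height_in * ppi)
    pixels = cols * rows
    return cols, rows, pixels, pixels * bytes_per_pixel

for ppi in (300, 600):
    cols, rows, px, size = scan_size(3, 5, ppi)
    print(f"{ppi} ppi: {cols} x {rows} = {px / 1e6:.2f} MP, "
          f"~{size / 2**20:.1f} MB uncompressed")
```

Doubling the ppi doubles both the columns and the rows, so the pixel count and file size quadruple, which is why a 600 ppi scan of a small print gets large quickly without adding real detail.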
Photography dates from the early 1800s. The first permanent photograph was taken in 1826 by Joseph Nicéphore Niépce of France. A variety of imaging systems followed, including the daguerreotype, the calotype, wet-plate collodion, ambrotypes, tintypes (ferrotypes), and finally, in 1871, the dry-plate process. In all of these systems, the image is created by a chemical change in a coating on glass, metal, or paper; magnify a photographic image enough and you can see the individual grains of the chemicals. The resolution of a film image depends on the size of the film used to record it and on the film's sensitivity to light, or speed. The higher the film speed, measured on a numerical scale (ISO, for International Organization for Standardization), the larger the grains.
Digital images are fundamentally different from film. A digital image is created with a fixed number of rows and columns of pixel elements. It is inaccurate to talk about the "resolution" of a digital image, because the pixel count and sensitivity of a digital sensor are fixed at the time of manufacture. Digital imaging dates back to 1969 and the invention of the charge-coupled device (CCD), the image sensor in many digital cameras. But the first commercially available camera using digital technology, the Sony Mavica, was not introduced until 1981. That camera produced a 720,000-pixel image (0.7 megapixels). Technically, the Mavica, or Magnetic Video Camera, was not a digital camera but a still analog version of the video cameras of the day.
Sensor technology has rapidly increased the number of pixels on image sensors; today (2010), 10-megapixel sensors are common, and more expensive chips offer much higher densities. There have also been attempts to develop different types of image sensors, including the Foveon sensor, which collects information about red, green, and blue light at every pixel, or photoreceptor site.
So when you make a digital image, the computer in the camera or scanner records the amount of light received at each pixel. Those light values are stored as a series of numbers; to view the image, the computer reverses the process and sends the values to a screen or digital printer. Although the trend is toward ever more pixels, the quality of the resulting image depends more on the skill of the photographer and the way the image is used than on the absolute pixel count. Here is an explanation of the relationship from Wikipedia:
"The term resolution is often used as a pixel count in digital imaging, even though American, Japanese, and international standards specify that it should not be so used, at least in the digital camera field. An image of N pixels high by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. But when the pixel counts are referred to as resolution, the convention is to describe the pixel resolution with the set of two positive integer numbers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example as 640 by 480. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. Other conventions include describing pixels per length unit or pixels per area unit, such as pixels per inch or per square inch. None of these pixel resolutions are true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution."

At the present time, the optimal setting for most scanning is 300 ppi (often loosely called dpi). Here is a good analysis of optimal scanning resolution.
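The pixel-count conventions in the quote above can be sketched in a few lines of Python (the 640-by-480 figure comes from the quote; the helper name is my own):

```python
# Megapixels per the convention in the quote:
# pixel columns times pixel rows, divided by one million.
def megapixels(columns, rows):
    return columns * rows / 1_000_000

# A 640 by 480 image, as in the quoted example:
print(f"640 x 480 -> {megapixels(640, 480):.2f} MP")

# A 3 x 5 inch print scanned at 300 ppi yields 900 x 1500 pixels:
cols, rows = 3 * 300, 5 * 300
print(f"{cols} x {rows} -> {megapixels(cols, rows):.2f} MP")
```

As the quote notes, these figures are upper bounds: a 1.35-megapixel scan of a blurry or damaged print still resolves no more detail than the print itself contains.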
Tune in for more.
This is good to know. I wish I had Adobe Photoshop. I have a lot of old photos that need care. I look forward to more.