On the surface, this statement doesn’t make any sense. Scanners are cheap, and in 2021 some digital cameras are as good as scanners, too. Optical character recognition is so fast that it can happen in real time, leaving enough processor power for simultaneous translation to a different language. And I myself bragged about scanning almost 400 printed items and putting them up on the Internet Archive the other day.
But something happens to photos in particular when they get printed. Pixels only make sense on displays. To survive in real life, pixels have to get translated into halftones – complex patterns of translucent cyan, magenta, yellow, and black dots that overlap, vary in size, and are rotated at strange, yet deliberate angles.
Here’s a simulation of printing a photo with smaller and smaller halftone dots. When the dots are big, you can only get a sense of the photo by squinting. But as you make the dots smaller, squinting becomes unnecessary – the photo on paper looks exactly like it did onscreen even as, under the hood, it’s put together in a very different way.
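If you’re curious what such a simulation looks like in code, here’s a minimal single-ink sketch in Python with NumPy. It’s a deliberate simplification of what the text describes – one black screen instead of four rotated CMYK screens, and no screen angle at all. The `halftone` function and its `cell` parameter are my own naming, not anything standard: each grid cell gets one round dot whose area tracks the average darkness underneath it, and shrinking `cell` is the “smaller and smaller halftone dots” step.

```python
import numpy as np

def halftone(gray: np.ndarray, cell: int = 8) -> np.ndarray:
    """Render a grayscale image (values 0.0=black .. 1.0=white) as round
    black dots on white paper, one dot per cell-by-cell grid square."""
    h, w = gray.shape
    out = np.ones_like(gray)  # start with blank white "paper"
    # Distance of every pixel in a cell from the cell's center.
    yy, xx = np.mgrid[0:cell, 0:cell] - (cell - 1) / 2.0
    dist = np.sqrt(xx**2 + yy**2)
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            darkness = max(0.0, 1.0 - gray[y:y+cell, x:x+cell].mean())
            # Dot area proportional to darkness, so radius grows as sqrt.
            r = (cell / 2.0) * np.sqrt(darkness)
            out[y:y+cell, x:x+cell][dist <= r] = 0.0
    return out

# A left-to-right gradient: dots grow from tiny (light side) to large (dark side).
grad = np.tile(np.linspace(1.0, 0.0, 64), (64, 1))
coarse = halftone(grad, cell=16)  # squint to see the gradient
fine = halftone(grad, cell=4)     # reads as a smooth gradient at arm's length
print(coarse.shape, fine.shape)
```

Every output pixel is pure black or pure white – just like ink on paper – yet at a distance the dense and sparse dots average out into the original tones.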
Grab a magnifying lens and examine any photo printed in a newspaper or a book, and you will see a similar complex pattern of dots. But if you find such a printed photo and scan it, you don’t automatically get its pixels back. No, you get new, smaller pixels that faithfully record the halftone dots themselves. And with those confused pixels, you also get one more thing: a warm hug from something called moiré.
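The moiré part can be sketched numerically, too. Here’s a one-dimensional stand-in for the scanner, again in Python with NumPy – the frequencies are hypothetical, chosen purely for illustration. A halftone screen is a periodic pattern; when the scanner samples it at a resolution that doesn’t cleanly match the screen’s period (and without low-pass filtering first), the screen’s high frequency aliases down into a slow, visible beat – that’s the moiré.

```python
import numpy as np

n = 1000
x = np.arange(n)
# A halftone-like screen oscillating at 0.47 cycles per printed pixel.
screen = 0.5 + 0.5 * np.sin(2 * np.pi * 0.47 * x)
# "Scan" it by keeping every second sample, with no low-pass filter.
scan = screen[::2]

# In the scan's own units the screen sits at 0.94 cycles per sample,
# which aliases down to |0.94 - 1| = 0.06 -- a broad, slow moiré wave.
spectrum = np.abs(np.fft.rfft(scan - scan.mean()))
freqs = np.fft.rfftfreq(scan.size, d=1.0)
print(freqs[spectrum.argmax()])
```

The dominant frequency of the scanned signal is nowhere near the screen’s: the fine dot pattern has been transmuted into a coarse interference pattern that was never on the page.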