January 2025 - Learning About Cameras
I'm not much of a picture person. Part of it is my conscious lack of social media, so there's no need for taking pictures or video, and part of it is the feeling that I would have to restructure significant parts of my life to care about either. Still, these ubiquitous things, built into every phone, are interesting, if only because I sometimes feel the need to avoid them in public.
Cameras have obviously evolved tremendously since their conception in the form of the camera obscura, which is almost entirely a result of the quirks of optics. The idea was to project an image through a small hole onto a white sheet in a dark chamber, and where the earliest forms of this found their application in copying the projection, what I find most interesting here is the chemical precursor to polaroid pictures.
Photography's short-lived precursor, the photogram, uses a very primitive version of the same chemical idea: objects blocking light from reaching photosensitive material. Permanently fixing the image, though, would introduce a very important quantity to the topic: exposure.
It describes how much light each part of the image receives, and it is controlled in practice by aperture and shutter speed, though of course the earliest attempts focused heavily on the latter. Shutter speed is the time the medium or sensor is exposed to the image projection. Sharper pictures usually call for faster shutter speeds, while longer exposures are susceptible to forms of motion or subject blur. A slower shutter speed, however, captures much more light, and depending on the image-capturing mechanism, this light might be necessary to capture the details of the image. For a sharp image, the camera must not move in relation to the subject.
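To put numbers on that trade-off: the total exposure is illuminance times time, and photographers often collapse the aperture/shutter pairing into a single exposure value, EV = log2(N²/t), with N the f-number and t the shutter time in seconds. A quick sketch in Python (my own toy calculation, nothing camera-specific) shows how different pairings land on nearly the same EV:

    import math

    def exposure_value(f_number, shutter_s):
        """EV = log2(N^2 / t): one number for an aperture/shutter pairing."""
        return math.log2(f_number ** 2 / shutter_s)

    # Nearly identical EV, very different pictures:
    print(exposure_value(8.0, 1 / 125))   # ~12.97
    print(exposure_value(5.6, 1 / 250))   # ~12.94, faster shutter, wider aperture
    print(exposure_value(11.0, 1 / 60))   # ~12.83, slower shutter, narrower aperture

Same light either way; the f/5.6 pairing freezes motion better, while the f/11 one trades blur risk for depth of field.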
Early attempts at fixing images onto photosensitive material often relied on silver salts, which is more or less fine for black & white images. The idea of the camera obscura would still be used to expose the image onto a medium locked into a dark box, later enhanced through the use of a lens. I'll use "hv" for the photon energy, mainly for ease of typing, and since we're thinking about a small number of photons, it doesn't really make a difference that there might be frequency variations in the reaction. For silver chloride,
AgCl + hv → Ag + Cl
with the components
Cl⁻ + hv → Cl + e⁻
Ag⁺ + e⁻ → Ag
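For a sense of scale, the hv in these equations is the photon energy E = hc/λ, which across the visible range works out to roughly 2 to 3 electronvolts per photon. A quick back-of-the-envelope in Python (constants rounded, nothing here is specific to silver chloride):

    # Back-of-the-envelope photon energy E = h*c/lambda for visible light.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # joules per electronvolt

    for wavelength_nm in (450, 550, 650):  # blue, green, red
        E = h * c / (wavelength_nm * 1e-9)
        print(f"{wavelength_nm} nm -> {E / eV:.2f} eV")  # 2.76, 2.25, 1.91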
Because the metallic silver that the reaction produces is dark, while silver chloride itself is pale, the resulting images are negatives: the parts that received the most light come out darkest. While other classes of materials were tried in further iterations of this method, the concept would remain about the same, the exposure times would shrink into more manageable ranges, and the products would become more clearly defined. I'm skipping specifically over the methods that used materials like resin and tar (bitumen).
The methods after this are probably the ones that we're still culturally familiar with today, using silver iodide and needing development after a relatively short exposure time. This daguerreotype process was also the first photographic process that was publicly available. It used a silver-plated copper plate as a base, which would be fumed with mercury vapor after exposure. These pictures were still decidedly in need of airtight protection after development to avoid further chemical reaction. The silver is the operative component, rather than whatever base it's stuck on; copper just used to be less expensive at the time. The mercury fuming deposits a thin layer of mercury onto the exposed silver, forming a pale amalgam that makes the image visible; this amalgam sits delicately on the surface of the plate, hence the measures to avoid contact of other substances with the surface of the finished image. The daguerreotype would be superseded shortly by the collodion process, which used a glass plate coated with a thin layer of iodized collodion, dipped in a silver-nitrate solution. Exposure times were about a minute at most, and the finished plate gave a negative that reads as a positive when placed against a black (usually velvet) background. The Talbotype would be the first to transfer these experiments onto paper, and the stereoscope would graduate cameras from the camera obscura model into a mirror-lens approach that allowed for changing shot angles and would lay the base for zooming.
Analogue colour photography is the platonic ideal of photography in my mind. Not that I'd ever be patient enough to use it, but it's the process that probably interests me the most. Early colour processes were very reminiscent of their monochrome predecessors, using chemically coated glass plates. The earliest variants used coloured grains of starch randomly distributed across the plate, with a silver-halide emulsion behind them. The light is selectively filtered by the starch grains before reaching the emulsion, which creates a negative; the image would then have to be developed into a positive.
Kodachrome is perhaps the most iconic film. The latest iteration is the K-14 film, which consists of several colour-sensitive layers. The top layer is blue-sensitive, followed by a yellow filter, then a blue-green and a blue-red sensitive layer on an acetate support, and finally a light-absorbent antihalation backing to prevent halos or "lens flares" around light sources.
When this film goes into development, the backing is softened in an alkaline bath and carefully removed with a spray. Then follows a series of development steps, each separated by a washing step to remove the previous developing agent. The first development step turns the silver-halide crystals exposed during the shot into metallic silver. The yellow filter is developed first, so it becomes opaque and the rest can be strategically treated with light. The blue-red sensitive layer is then re-exposed to red light through the acetate base, reacting with the remaining silver halide; these newly exposed crystals will hold the dye formed by the cyan coupler and colour developer. After washing, the top layer is treated with blue light, which the now opaque filter blocks from also exposing the blue-green sensitive layer; this top layer will hold the yellow dye. At this point light can no longer reach any of the remaining crystals, so a chemical fogging develops the leftover silver halide in the blue-green sensitive layer, which will hold the magenta dye. After another wash, the film is bleached, which oxidizes the silver back into silver halide and makes it transparent; the same essentially happens to the yellow filter, so it no longer shows up in the visible spectrum. A fixer converts the silver halide into soluble silver compounds, which can then be washed out of the film. Careful rinsing and drying prevents water spots.
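Since the order of operations is easy to lose in prose, here's the sequence condensed into a little Python table. This is purely my own summary of the steps described above, not any official K-14 specification:

    # The K-14 development sequence from above, condensed into data.
    # My own summary for keeping the order straight -- not an official spec.
    # A washing step sits between each of these.
    K14_STEPS = [
        ("remove backing",   "antihalation layer",         "softened in alkali, sprayed off"),
        ("first developer",  "all exposed crystals",       "exposed silver halide -> metallic silver"),
        ("red re-exposure",  "blue-red sensitive layer",   "through the base; remaining halide, cyan dye"),
        ("blue re-exposure", "blue-sensitive top layer",   "yellow filter shields layers below; yellow dye"),
        ("chemical fogging", "blue-green sensitive layer", "leftover halide developed; magenta dye"),
        ("bleach",           "all layers",                 "silver oxidized back to halide (transparent)"),
        ("fix",              "all layers",                 "halide -> soluble compounds, washed out"),
        ("rinse and dry",    "whole film",                 "prevents water spots"),
    ]

    for step, target, result in K14_STEPS:
        print(f"{step:>16} | {target:<27} | {result}")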
The production of K-14 film was discontinued in 2011, so it's very unlikely I'll ever touch a roll of this film, or get a chance to work with it. The kind of film photography that I am very much still interested in, and able to try, would be polaroid photography. Modern polaroid uses some version of integral instant film. These instant films incorporate the parts that the Kodachrome process would add during development into their own layers, so no manual development steps are necessary. This makes for a much more complex layer structure. Behind the transparent top layer come the film emulsions, consisting of a layer of light-sensitive grains and developer layers with dye couplers beneath each colour layer, and then the developing agents, consisting of a black base layer, the image layer, a timing layer and acid layers. Upon exposure, the silver-halide grains do what they did in the Kodachrome. Inside the camera, though, a chemical reagent is spread evenly across the film as it's ejected, which dissolves the developer dyes to automate the Kodachrome process.
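The stack is easier to see written out than described in a sentence. Here's my reading of it as a list, top to bottom; the colour assignment per emulsion layer is my assumption from how integral film is usually described, and the exact ordering varies by film generation:

    # Integral instant film, top to bottom, as I pieced it together.
    # Colour assignments are my assumption; ordering varies by generation.
    INSTANT_FILM_STACK = [
        "transparent top layer",
        "blue-sensitive grains",  "yellow dye developer",
        "green-sensitive grains", "magenta dye developer",
        "red-sensitive grains",   "cyan dye developer",
        "black base layer",
        "image layer",
        "timing layer",
        "acid layer",
    ]

    for depth, layer in enumerate(INSTANT_FILM_STACK):
        print(f"{depth:2d}: {layer}")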
Digital photography is a huge departure from this basic process. Without getting into the weeds about the various controls of digital cameras, the interesting bit for this stretch is the function of the image sensor, which turns the image into data.
Most devices with integrated cameras use "CMOS" active-pixel sensors. These will likely be found in any consumer-grade camera, including smartphones. A CMOS sensor is basically a matrix of photodiodes with transistors to handle correct charge transfer, charge reset and inter-pixel interactions. There isn't a dedicated read-out for each pixel, so any signals need to first be multiplexed through their row and column index before being fed to an analog-to-digital converter chip. This means all pixels are read out serially, which is why many configurations include a rolling shutter. I have no computer science background, so I needed an explanation of the "multiplexing" part of the process. A multiplexer takes some number of parallel signals equal to a power of two and converts them to a single serial output. It does so using a truth table that the "vector" of select signals is encoded into. Because basic logic gates only support two inputs before massive redundancies start cropping up, larger numbers of inputs are usually handled by trees of smaller selectors, which in themselves operate on truth-table logic.
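Since that description stayed abstract for me until I poked at it, here's a toy multiplexer in Python, plus a 4x4 "sensor" read out serially through a row multiplexer and a column multiplexer; the sweep order is also why the exposure "rolls" down the frame. All names and sizes here are invented for illustration:

    def mux(inputs, select_bits):
        """Select one of 2^n inputs using n select bits (most significant first)."""
        assert len(inputs) == 2 ** len(select_bits)
        index = 0
        for bit in select_bits:
            index = (index << 1) | bit
        return inputs[index]

    # A 4x4 "sensor" of pixel values.
    sensor = [[r * 4 + c for c in range(4)] for r in range(4)]

    selects = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for row_sel in selects:
        row = mux(sensor, row_sel)            # row multiplexer picks one row
        for col_sel in selects:
            value = mux(row, col_sel)         # column multiplexer picks one pixel
            print(value, end=" ")             # one serial stream toward the ADC
        print()

The printed stream, 0 through 15 in row order, is the single serial signal a real sensor would hand to the analog-to-digital converter.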
There are slightly more "traditional" photosensors called "CCD" (charge-coupled device), which are in themselves an array of photo-capacitors. These are usually found in high-end broadcasting cameras. The underlying mechanisms are at home in the world of solid-state physics, which I have a passing familiarity with. Each pixel therein is essentially a capacitor on a p-doped semiconductor, biased above the inversion threshold. Incoming photons create electron-hole pairs in the semiconductor, and the carriers move toward their preferred ends of the material: the electrons collect at the semiconductor-oxide interface, while the holes drift toward the substrate base. There is a roughly 5% rate of spontaneous electron generation not triggered photonically, which tends to show up as noise in the image. This makes CCD sensors particularly susceptible to ionizing radiation as a source of noise. This is partially why high-speed cameras, which until recently were built almost exclusively with CCD sensors, were so susceptible to electromagnetic fluxes in their vicinity, which could destroy or heavily distort the recorded images. CMOS sensors, on the other hand, function via the photoelectric effect, which is less susceptible to interference by massive-particle inflow, but they are prone to damage by high-energy laser pulses.
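The "charge-coupled" part is the read-out: each pixel's charge packet is shifted, bucket-brigade style, cell by cell toward a single read-out node at the end of the register. A toy version in Python, with the spontaneous electrons sprinkled in as noise; every number here is invented for illustration:

    import random

    # Toy CCD read-out: charge packets shift bucket-brigade style toward
    # a single read-out node at the end of the register.
    def read_out(register):
        """Shift every charge packet out of the register, one per clock."""
        samples = []
        while register:
            samples.append(register.pop(0))  # packet falls into the read-out node
            # (a real device shifts all packets one cell over per clock cycle)
        return samples

    # Photo-generated electrons per pixel, plus spontaneous thermal electrons
    # that show up whether or not a photon arrived. All values invented.
    random.seed(42)
    photo_electrons = [120, 80, 200, 45, 160]
    dark_noise = [random.randint(0, 10) for _ in photo_electrons]
    register = [p + d for p, d in zip(photo_electrons, dark_noise)]

    print(read_out(register))  # the serial signal the ADC would digitize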