We live in a highly technological era, where only a tiny minority
(think: single-digit percentages or less) of people understand any
one part of it well enough to get paid for working in that field, and
nobody at all understands even half of the total. Most people understand turning
a wall switch on to get light in the room, and paying an electric bill,
and that those are somehow connected, but that is 100-year-old technology;
what drives the world today is far beyond it.
The same people can be taught to drive a car, shoot a gun, compose a letter on their home computer, and put a frozen dinner into the microwave without any clue about how the microwaves make it hot, the chemistry of taking fresh produce from the farmers and packaging it so that it tastes better than home preserves, how pixels on the computer screen turn into ink on the paper, or even the physics and chemistry of turning petroleum (or nitrogen and cotton) into forward motion. Some of that technology is older (but improved), some more recent, but all of it is completely opaque to the average person.
The literature of a culture tends to reflect its prevalent anxieties. During the Cold War, the literature (and movies) were largely nihilistic from the fear of nuclear holocaust ending it all, or (in the case of science fiction) post-apocalyptic, because the authors could not bring themselves to really believe the atheistic claim that there is no purpose to life other than to "eat, drink and be merry, for tomorrow we die." The opacity of modern technology has resulted in an upsurge of fiction celebrating manual combat arts (fists and swords) and easily understood pre-industrial technology like steam engines (steam-punk). It would appear that this turn toward the past affects even the technologists themselves: I see a lot of interest in preserving (sometimes only in emulation) the simpler games and processors of the waning decades of the prior century.
Two outgrowths of this romance with the past appear to be the fixation
on unix-like operating systems (Linux and OSX) and, more recently,
"dark mode" computer and cell phone displays,
both revivals of 50-year-old technology.
But the real world is not flat and linear like a unix file. In 1970 bulk storage was magnetic tape, and unix fit it perfectly. Already in the 1960s disk drives were being used, initially as linear files, but increasingly as random-access data structures. With increased computational power, computer displays began to be adapted to two-dimensional graphics and less and less to linear text. Smaller computers began to see these devices in the 1980s, when inexpensive personal computers opened the door to innovation by garage-shop entrepreneurs not tied to the traditional flat-file mentality. In 1984 the Macintosh introduced the novel event-based computational model, and the unixies have struggled to bolt this new paradigm onto their existing framework, but it doesn't fit.
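To see the difference in miniature, compare a classic unix-style filter, whose whole world is one linear stream of bytes, with a bare-bones event loop. Both sketches below are mine, not anything lifted from the actual systems; the event source in the second is faked from the keyboard just so it runs, and its names are illustrative placeholders, not the real Macintosh Toolbox calls.

    /* A classic unix-style filter: the whole program is one pass over a
       flat stream of bytes, stdin to stdout. */
    #include <stdio.h>
    #include <ctype.h>

    int main(void) {
        int c;
        while ((c = getchar()) != EOF)   /* read the linear stream */
            putchar(toupper(c));         /* transform each byte and pass it on */
        return 0;
    }

Against that, an event-driven program is turned inside-out: it sits in a loop waiting for the system to tell it what just happened, and merely reacts.

    /* A bare-bones event loop in the spirit of the 1984 Macintosh.
       The "event queue" is faked from the keyboard so the sketch runs;
       these names are illustrative, not the real Toolbox API. */
    #include <stdio.h>

    typedef enum { EV_KEY, EV_MOUSE, EV_QUIT } EventKind;
    typedef struct { EventKind kind; int ch; } Event;

    /* Stand-in for the system event queue: 'm' pretends to be a mouse
       click, 'q' (or end-of-file) quits, anything else is a keystroke. */
    static void next_event(Event *e) {
        int c = getchar();
        if (c == EOF || c == 'q')  e->kind = EV_QUIT;
        else if (c == 'm')         e->kind = EV_MOUSE;
        else                     { e->kind = EV_KEY; e->ch = c; }
    }

    static void handle_key(int ch) { printf("key event: %c\n", ch); }
    static void handle_mouse(void) { printf("mouse event\n"); }

    int main(void) {
        Event e;
        for (;;) {                       /* the program is a loop of reactions */
            next_event(&e);
            switch (e.kind) {
            case EV_KEY:   handle_key(e.ch); break;
            case EV_MOUSE: handle_mouse();   break;
            case EV_QUIT:  return 0;
            }
        }
    }

In the filter the program owns the control flow from top to bottom; in the event loop the system owns it, and the program is reduced to a dispatcher of reactions -- which is exactly the inversion that does not bolt cleanly onto a flat-file framework.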
The event-based operating system structure is more complicated than
the linear file-based model of the previous decade, and programmers trained
in the unix model have difficulty making sense of it. It has become the
high-technology Mount Olympus that causes lesser mortals to turn back to
the more understandable previous tech, so that Linux (the most widely
available version of unix) becomes a beacon of hope for these
pseudo-technologists stuck in the middle: behind them, the user-not-maker
masses who understand nothing at all; ahead and above them, the (to them)
opaque complexity of event-driven systems and the tiny cabal of priests who
understand it. The result is that the Macintosh operating system was killed
off and replaced with a retro unix system still called "Macintosh" but
unlike the original in every way except the name -- and even that name is
falling into disuse in favor of the more honest "OSX".
The computer was still far faster than its printing output devices, and most of the output was temporary in nature, so designers looked for, and found in the cathode ray tube (CRT), something that could display text as fast as the computer generated it. The earliest graphics devices worked like an oscilloscope: a single electron beam was moved arbitrarily around the screen by magnetic or electrostatic deflection hardware, producing a bright dot wherever it struck the phosphor on the inside of the glass screen. Text was formed by tracing the lines of the letter shapes, but the process was slow and clumsy. Television, by then in wide use, employed the same CRT technology but ran the electron beam linearly across the screen in rows to form a "raster". To display text for computer output, the beam was turned on briefly to place a dot where one was wanted and left off most of the time, and the characters were built up out of dots roughly in the shape of the letters. The beam resolution was not very good, and the individual dots smeared into each other, which made the text more readable, but only as light (green) letters against a black background; small dark dots on a light background tended to disappear into the smear from the adjacent white space, and besides, the letters were green (not white) because white phosphors were harder to make. The smear in TV sets was intentional, because it made the individual lines of the raster disappear into a uniform field of white or grey (black was the result of the electron beam being turned off, so raster lines were not a problem there).
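As a rough sketch of how a character was built up out of raster dots, here is a 5x7 dot pattern (an illustrative pattern of mine, not any particular terminal's character ROM) printed one scan line at a time, with '#' standing in for the beam turned on and a blank for the beam turned off:

    /* Sketch of a raster character generator: one letter built up out of
       dots, one scan line at a time.  A 5x7 matrix was typical of early
       terminals, but the exact dot pattern varied by maker. */
    #include <stdio.h>

    /* Each byte is one scan line of the character cell; bit 4 is the
       leftmost dot, bit 0 the rightmost. */
    static const unsigned char glyph_A[7] = {
        0x0E,   /* . # # # . */
        0x11,   /* # . . . # */
        0x11,   /* # . . . # */
        0x1F,   /* # # # # # */
        0x11,   /* # . . . # */
        0x11,   /* # . . . # */
        0x11    /* # . . . # */
    };

    int main(void) {
        for (int row = 0; row < 7; row++) {      /* one raster line at a time */
            for (int col = 4; col >= 0; col--)   /* beam sweeps left to right */
                putchar((glyph_A[row] >> col) & 1 ? '#' : ' ');  /* beam on/off */
            putchar('\n');
        }
        return 0;
    }

A real terminal repeated that same lookup, in hardware, for every character position on every scan line of the raster.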
Only after computer monitors (ones that were not made-over low-resolution CRTs from TV sets) began to be sold in large numbers, and the bright dot from the electron beam became tightly focused enough to form smaller characters, did black characters on a white screen even become feasible. Steve Jobs liked the idea of the computer screen as a metaphor for (white) paper, so his Macintosh displayed a white window representing the paper it modelled, with black text that actually looked (more or less) like printed text on paper rather than the fixed-pitch computer fonts popular everywhere else -- fixed-pitch because everywhere else the characters were generated by hardware inside the monitor, which was fed a unix-like linear stream of bytes. Memory had been too expensive to allocate a whole screen buffer of pixels (40 or more bits for each character) instead of text (6 or 8 bits per character), but it was getting cheaper -- not yet cheap enough to store color pixels (originally 8-bit color, eventually the 24/32 bits of modern full-color displays), but affordable for one-bit black-on-white pixels on a small screen.
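To put rough numbers on that memory trade-off: assume a typical 80x24 text screen and a 5x7-dot character in a 7x10 cell (assumed figures of mine, not from any particular machine); the 512x342 one-bit screen is the original Macintosh's.

    /* Back-of-envelope comparison of a text buffer against a pixel buffer.
       80x24 characters and a 7x10-dot cell are assumed typical figures;
       512x342 at 1 bit per pixel is the original Macintosh screen. */
    #include <stdio.h>

    int main(void) {
        long cols = 80, rows = 24;
        long text_bits  = cols * rows * 8;         /* 8 bits per character code */
        long cell_bits  = 7 * 10;                  /* dots in one character cell */
        long pixel_bits = cols * rows * cell_bits; /* 1 bit per dot */
        long mac_bits   = 512L * 342;              /* whole Macintosh bitmap */

        printf("text buffer : %ld bits (%ld bytes)\n", text_bits, text_bits / 8);
        printf("pixel buffer: %ld bits (%ld bytes), %ld bits per character\n",
               pixel_bits, pixel_bits / 8, cell_bits);
        printf("Macintosh   : %ld bits (%ld bytes)\n", mac_bits, mac_bits / 8);
        return 0;
    }

That works out to roughly 2K bytes of text versus 17K-22K bytes of pixels for the same one-screen "page" -- about an order of magnitude apart, which is why the paper-white bitmap had to wait for cheaper RAM.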
Black text on a white background seemed more readable than white (or green) text on a black background, but subjective evaluations are arguable, and scientific studies were commissioned to quantify the difference (I found numbers on Quora, but not actual citations). Clearly eyestrain is minimal when the screen background matches the overall illumination of the room, so that the iris is not constantly opening and closing as the eye shifts between screen and surroundings, and most computer monitors are used in well-lit office environments. Reading books and papers is easier in bright light than in dim, because the iris closes down, giving a better-focused image on the retina (greater depth of field), a principle that professional photographers using cameras with manual controls understand well. The result is that black-on-white computer output rapidly replaced the older technology in the marketplace.
When blue LEDs were invented in the 1980s, I expected active-LED monitors to replace CRTs, but I guess the production costs and defect rates were still prohibitive, and liquid-crystal displays (LCDs) took over instead. These are back-lit by fluorescent tubes, but fluorescent lights change brightness over time, both over their lifetime and from hour to hour, and monitor brightness controls are no longer a simple manual rotary knob, so it is harder to keep the screen adjusted to match room brightness.
Recent advances in organic LEDs (OLED) seem to have overcome the production problems, so we are starting to see OLED screens, first on cell phones -- where the fact that black pixels actually draw less power (like the old CRTs) and so extend battery life makes them attractive -- and now even on laptops. This has recently contributed to the largely bogus argument that "black is green" and re-opened the debate, albeit largely without supporting research. Obviously a few percentage points of extended battery life is worth something on a cell phone, but the actual savings in energy costs for desk computers is insignificant compared to the power consumed by the gaming engines running on those same computers, and there is no power saving at all from dark mode on an LCD display.
So why this rush to dark mode? If the vendors were concerned about eyestrain in the users of their phones, they would make switching between dark and light to match the ambient lighting far easier than the present three- or four-tap-plus-scroll sequence of steps. I think they recognize that the white background is in fact better for the eyes on average, but they want to offer dark mode, for its extended battery life, to those who are willing to trade long-term eye health for near-term convenience. It's the teenage users who are promoting it (as a fad), like the resurgence of steam-punk and broadswords in fiction, as a rebellion against their feelings of disempowerment by modern technology. The same backwards thinking leads other teens to take up smoking and other harmful chemical substances to (as the ancient King Lemuel put it, referring to alcohol) "forget their misery." It's sad that it's all they have, but it is their choice in a "free country" that encourages them to do so.
Tom Pittman
Rev. 2019 September 23