The ChatGPT "Large Language Model" (LLM) is always described (as these guys did) as answering one question: for a given sequence of words, what word is most likely to come next? That may generate fantastic modern poetry (which is intentionally meaningless), and it may make pretty good Shakespearean sonnets (see my "Machine Learning Is Still Religion" post three years ago) that uneducated people who have no idea what a sonnet is cannot distinguish from the real thing, but that's not how "AlphaGo" beat an expert human Go player, and it's not at all how anybody -- person or machine -- can write working programs. I watched some students try that kind of thing when I was teaching programming in high school these last couple of years; it didn't work, and when they realized they had to actually think about what they wanted the computer to do, they dropped the course. Thinking is hard work, even harder for computers than it is for people.
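The "probable next word" process being described can be sketched in a few lines. This is entirely my own toy illustration (a bigram model; real LLMs use vastly larger contexts and networks): it just counts which word follows which in a training text, then always emits the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "most probable next word" model: count which word follows which
# in a small training text, then always emit the most frequent follower.
training_text = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    followers[word][nxt] += 1

def most_likely_next(word):
    # The single most frequent word seen after `word` during training.
    return followers[word].most_common(1)[0][0]

# Generate a few words starting from "the"
word, out = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))
```

Run on a real corpus, this is exactly the mechanism that produces plausible-but-thoughtless text: locally probable, with no idea what it is saying.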
If these so-called "generative programming" programs work at all, they need to know something about the correlation between requirements and code, and how to declare and initialize variables that will be used later in the program -- that right there demolishes the "probable next word" myth -- and what kinds of program structures (conditionals, I/O, loops) are needed to accomplish what kinds of programming tasks, stuff like that. Or else they have a vast memory of pre-written programs that they merely regurgitate unthinkingly when asked for a program whose name or descriptor they learned. If I were trying to pull off this kind of hoax, that's how I would do it. Maybe the LLM actually has that ability built in for free, and maybe the implementors added a humongous back-end validator to make sure the generated program actually compiles and runs. I wouldn't know, but for sure the neural nets are not doing the analytical reasoning necessary to write a working computer program from scratch; they simply don't have that kind of horsepower.
Let's assume for the moment that the implementers really believe in the LLM version of unicorns and flying dragons. In other words, they are not intentionally deceiving us, but might actually be deceiving themselves. After all, they bought into the "frogs-to-princes" magical story of Darwinism ("it really happened that way, but took millions of years")... The LLM is learning to look for the probable best next word (token, that is: '=' counts as a separate word even if there are no spaces, which all the compilers already do), and that gets you to something like the Deep-Speare sonnets: it sort of looks like code to a non-programmer, but it won't compile. Step #2 is to add the compiler errors back into the training loop, so anything that won't compile is not good enough. The LLM people don't tell us how long a string of words has to be to satisfy the probability criterion. I'm guessing it turns out to be the whole program; that is, the LLM running the show soon learns that if it reproduces the whole program it saw, it compiles and is rewarded, and anything less fails (which would be true). That got the developers some random whole program that compiles and runs. What they really wanted is a program that meets the spec, so they added the description of each program into the text string being learned; then the training "succeeds" exactly when the description matches the input and the code compiles. Which is a recipe for the LLM memorizing whole programs with their descriptions, and storing that away in billions of numerical weights for the neural net. They claim they randomize different words in each "next word" decision, but that's a sure way to make the program not compile, so I would guess the learning process simply cancelled out all that randomness.
OK, it's not magic, but there definitely is some (self-)deception going on.
What the charlatans are selling us is no different from stage magicians making things appear out of -- or disappear into -- nowhere, mostly by deflecting our attention away from where the action is taking place. The real world does not work by magic, and neither do computer programs get written that way. If you ask ChatGPT for a program from a particular description, the one thing you can know for sure is that it did not generate that code from your description. If you had access to all the millions (this article said "many billions or even trillions") of lines of online "open-source" code that they used to "train" ChatGPT, and if you could search that entire repository of code, you'd probably find a half-dozen or more programs with that description, one or more of which exactly matches what ChatGPT "generated" for you, at least close enough to qualify as copyright infringement. Nobody knows, because nobody ever actually looked. The search wouldn't take as long as the "training" did, but it would take a very long time and cost thousands of dollars in computer time. Probably a distant second to BlockChain, LLM AI may be one of the world's great contributors of new atmospheric carbon. If you care. The politicians who talk about it most certainly don't (see "Politics, Not Technology" three weeks ago). Eventually somebody will spend the money and sue and win, and that will be the end of ChatGPT. Or maybe (like Google's "gorillas" gaffe) they will just eliminate all the copyrighted code from the training data.
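Nobody has done that search, but the idea is simple enough to sketch. Here the mini-corpus, the file names, and the "generated" program are all invented for illustration -- a real search over billions of lines would need smarter near-match detection and far more hardware -- but the principle is just normalizing away trivial reformatting and comparing:

```python
import re

# Hypothetical mini-corpus standing in for the training data;
# real training sets run to billions of lines.
corpus = {
    "repo_a/fizzbuzz.py": "for i in range(1, 16):\n    print('fizz' if i % 3 == 0 else i)",
    "repo_b/hello.py":    "print('hello, world')",
}

def normalize(code):
    # Strip all whitespace and lowercase, so trivial reformatting
    # cannot hide a verbatim copy.
    return re.sub(r"\s+", "", code).lower()

def find_matches(generated):
    target = normalize(generated)
    return [path for path, code in corpus.items()
            if normalize(code) == target]

# A "generated" program that is really just a reformatted copy:
generated = "for i in range(1,16):\n    print('fizz' if i%3==0 else i)"
matches = find_matches(generated)
```

The reformatted copy matches its source file exactly once normalized, which is the kind of evidence a copyright plaintiff would be looking for.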
Since ChatGPT never generates -- indeed it cannot generate -- any original code, it will never make programmers obsolete. Students in programming classes will use ChatGPT for a while, until the teacher asks them to modify their program in class (no access to GPT) and they don't understand what they didn't write. Or the course final asks them to write a program longhand, no keyboards allowed. The students are not too stupid to realize that using ChatGPT is a sure way to fail the class -- at least not after a whole semester's worth of students failed and had to take it over. Programmers writing real code for new situations will find that ChatGPT doesn't know how. After a while, using it will get you scorn and disapproval, which is almost as strong a deterrent as getting fired or flunking the class. And that will be the end of "computers writing programs" for a long time.
"Now you see it, now you don't" applies also to ChatGPT, not this year or next, but in your lifetime. You read about it first here in my blog.
IEEE ComputingEdge mostly republishes articles from other, more specialized, IEEE journals. Today I was reading "A Paradigm Shift in Automating Software Engineering Tasks: Bots" by Ipek Ozkaya, who (Google Knows All) is the Editor of the journal in which it appeared last year. Like all females seeking standing in a generally male-dominated industry, she tends to be rather more accepting of the fairy tales people in the trade give her than is justified by the technology. So while the kinds of software "bots" (short for robots: programs that do tasks so well understood that they can be programmed in a computer) that she may be familiar with can (and probably do) greatly enhance the productivity of programmers stuck with an error-prone programming language like C/C++, she eagerly embraces "ML/AI" as part of the future of such robots.
To her credit, she leaves the creative activities to the human programmers, but it is clear from her use of the term "AI" in this article that she accepts the Religion (= "believing what you know ain't so") that the "AI" we see in the marketplace is in fact intelligent the way humans are intelligent. When you look at what is actually going on in these "deep learning" systems, it becomes apparent that the facts support no such conclusion. The current issue of IEEE Spectrum, which arrived in my mailbox a week later than ComputingEdge (otherwise I normally read it first), has an article urging extreme caution in using "Large Language Model" deep learning for code generation. They give reasons, but don't exactly tell us what is happening.
Here's how it works: Neural Nets (NNs) are nothing more than a vast number of "neurons" arranged in layers. Each neuron has a single output distributed to the inputs of every neuron in the next layer, where it is multiplied by an adjustable weighting factor (a fractional number), added to the products of all the other outputs of the same layer with their own weighting factors, then applied to a non-linear step function, so that if the sum is above a certain threshold, that neuron gives a much higher output than if the sum is below it. In the first layer, the inputs are the pixels in an image, or the words in a body of text, or whatever else the NN is supposed to be learning. In the last layer, the outputs are a set of answers, yes or no, to a predetermined set of questions, like "Is this a cat?" or "Is this a dog?" or "Is this a Shakespearean sonnet (poem)?" -- whatever it is the NN is supposed to be learning. The proponents of this technology strongly believe in the Darwinian Religion taught in all public schools, that all manner of system complexity can and did come about by the accumulation of vast numbers of random events, selected by unthinking Nature for survival. Of course there is no scientific evidence for this supposition (see my essay "Biological Evolution"), it's just Religion. Accordingly, all the weighting factors in these NNs start out random, and are adjusted during the learning process by comparing the answers given by the machine to the correct answers, then feeding the errors back through the network to adjust the weights toward the correct results and away from the wrong answers. It even works, sort of. When it doesn't, they fiddle with the initial random numbers until it does. Nobody knows or bothers to find out how the internal neurons are making these decisions; it's Religion. Really.
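The arrangement just described can be sketched in ordinary Python. This is my own minimal illustration -- a step-function neuron and the classic error-feedback weight adjustment (the perceptron rule) -- not the millions-of-weights "deep" networks the vendors train:

```python
import random

random.seed(1)

# One "neuron layer" as described: weighted sum of inputs, then a
# hard threshold step function. Weights start out random.
def make_layer(n_inputs, n_neurons):
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

def step(x, threshold=0.0):
    return 1 if x > threshold else 0

def forward(layer, inputs):
    # Each neuron: sum of (input * weight), pushed through the step.
    return [step(sum(i * w for i, w in zip(inputs, weights)))
            for weights in layer]

# Crude error-feedback training for a single neuron: nudge each weight
# up or down according to the error on each example.
def train(weights, examples, rate=0.1, epochs=20):
    for _ in range(epochs):
        for inputs, target in examples:
            out = step(sum(i * w for i, w in zip(inputs, weights)))
            err = target - out
            weights = [w + rate * err * i for w, i in zip(weights, inputs)]
    return weights

# Teach a neuron logical OR (the third input is a constant bias of 1).
examples = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train([0.0, 0.0, 0.0], examples)
```

After a few epochs the weights settle and the neuron answers OR correctly -- and true to the point above, nothing in the code knows *why* those particular weights work; they just survived the feedback.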
Generative NNs, like the Large Language Model used for AI "composing" of text or code, are basically the same, except that the input is a description of what you want, and the outputs are fed as input to a second NN that has been trained on images or text or whatever, so it has learned what is acceptable or not; then that judgment is fed back into the generator, so it learns how to make something acceptable. Or rather, it keeps trying until something works. This uses tremendous amounts of computer power and energy, but nobody really cares about carbon in the atmosphere, do they?
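That "keeps trying until something works" loop can be caricatured like this. The generator and critic here are stand-ins of my own invention (real systems adjust the generator's weights from the critic's feedback rather than resampling blindly), but the brute-force flavor is the point:

```python
import random

random.seed(7)

WORDS = ["print", "(", "'hi'", ")", "foo", "]"]

def generator():
    # Stand-in for the generator network: emit a random token sequence.
    return [random.choice(WORDS) for _ in range(4)]

def critic(tokens):
    # Stand-in for the trained second network: accept only sequences
    # that happen to form the one well-formed call.
    return tokens == ["print", "(", "'hi'", ")"]

# Keep generating until the critic accepts -- burning compute all the while.
attempts = 1
candidate = generator()
while not critic(candidate):
    attempts += 1
    candidate = generator()
```

With six tokens and four slots there are 1296 possible sequences, so even this toy typically burns over a thousand attempts to produce four tokens -- multiply that out to real vocabularies and you see where the electricity goes.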
I looked at the sample output in the Spectrum article, and it looks just like what a student teaching assistant might post in an on-line syllabus to help the students in the class the TA is supposed to be helping. In other words, the computer was trained to know "these words go together and this is what the human wants to see" because these words did go together when a real human posted it online to answer the specific question now being asked. Except the "generated" program had several errors such as you might expect from a novice student programmer. The computer does not know how to program a computer, it merely spits out what it already saw in the (unsupervised) training data, built from parts that were linked to words resembling the question now being asked. Sort of like going to StackOverflow and Googling your question. Most of the previously posted StackOverflow answers are helpful, and some have bugs that other posters may (or may not) call attention to, but the AI program does not understand any of this, it's just programmed to regurgitate some mild variation of code it has seen during training.
Which brings me to the subject line of today's posting. Three paragraphs from the end of her ComputingEdge article, the author announces "We will likely see a head-spinning pace of bots and AI-augmented tools in the next decade to support software development..." then refers specifically to one of those generative programs, GitHub Copilot, and admits "the quality may not necessarily be always as expected." Right on both counts. She ends the paragraph "This balance will improve in time."
I am reminded of the first few years after the Mac came out. The only programming language available was Apple's own Pascal compiler, which, as I noted elsewhere, had a better chance of producing the best code than any C compiler around. Eventually C compilers became available, and as I expected, the magazine reviews all consistently rated the Pascal code better on every metric cited -- except programmer preference -- then always lamely added, "we expect C to overcome this..." After a few years they stopped comparing C to Pascal, because it never got better. Or as Jerry Pournelle might have put it, "Real Soon Now."
The next article in the same ComputingEdge issue promises "Deep Reinforcement Learning for Quantitative Trading." The "Deep" in the title tells you everything you need to know. Considering these guys are a professor and two of his students at an obscure third-world university, that isn't much. Let's start with the domain where they want "deep learning" to work. For most people, the stock market is a zero-sum game. All the profits anybody ever gets out of trading in stocks are matched by losses somebody else experienced. There is no other source of money in that business (see my essay "The Stock Market Is A Zero-Sum Game" 16 years ago).
An obvious consequence of that fact is that there cannot be a guaranteed winning investment strategy. The proof is a variant of the mathematical "diagonalization proof": if there were such a thing as a guaranteed winning investment strategy, and every investor followed those rules, then there would be no losers, which is impossible in a zero-sum game. More intuitively, if everybody followed the same strategy, it would change the rules. Some investors get rich in the market because a great many others do not invest wisely, and lose their wealth (to the winners). The purpose of "Quants" (quantitative trading) is to run the analysis faster than the suckers whose money you will win. Technically all the information is available to all investors equally -- "insider trading" is illegal on the American exchanges, because it gives the "insiders" an unfair advantage over the losers -- but the fellow who knows how to exploit the new (public) knowledge faster still has an unfair advantage. This is what the Quants do for their investors, and this is what these guys in Singapore hope to achieve with their AI-infused Quants. The only way they can win is by continually reprogramming their computer to overcome the advantages their intended losers gain by using the same methods. I wouldn't want that job. It may be fun trying, so long as you believe you can win. Most gamblers go home after they lose their wad. When I was a kid, they said that Reno is where you "arrive in a $20,000 sedan and leave in a $100,000 (slight hesitation for dramatic effect) Greyhound bus." There's been some inflation since then, but the rules are unchanged: only the house/stockbroker wins.
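The zero-sum claim is easy to make concrete. In this toy ledger (names and amounts invented for illustration), every trade hands one party's gain to the other as an equal loss, and no strategy -- however clever -- can change the total:

```python
# Toy zero-sum market: each trade transfers gain to one party and an
# equal loss to the other; no trade creates or destroys money.
accounts = {"alice": 100.0, "bob": 100.0, "carol": 100.0}

def trade(winner, loser, amount):
    accounts[winner] += amount
    accounts[loser] -= amount

trade("alice", "bob", 30.0)   # Alice profits at Bob's expense
trade("carol", "bob", 10.0)   # so does Carol
trade("bob", "alice", 5.0)    # Bob claws a little back

total = sum(accounts.values())  # still exactly what everybody started with
```

Whatever sequence of trades you run, `total` stays at 300: every dollar of profit is somebody else's dollar of loss, which is the whole argument against a universally winning strategy.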
However, even apart from my lack of appropriate credentials and the time to do the necessary research, I could not have written that paper myself. I'm not criticizing his focus, but I take a more holistic view of Scripture as both necessarily and actually consistent with everything we see in the world, good and (by contrast) evil. I started with a "no opinion" clean slate, and let the data lead me to this conclusion. It does. In high school, despite my parents' best efforts to the contrary, I bought into the Darwinist Lie, supposing that "probably" evolution was the means of creation. Then a professor at the University of California invited me to "look at the evidence," and I did: I can no longer hold that opinion (see my essay "Biological Evolution: Did It Happen?"). I later looked more carefully at what exactly the Bible teaches, and concluded that either the Bible teaches a 144-hour Creation Week, or it cannot be trusted to teach anything at all -- which is inconsistent with everything we know about historical documents, including recent documents like the Declaration of Independence and the science and technology books that enable our technology. We cannot live that way. I cannot live that way (see my personal essay "What's Really Important").
In some 30+ years since this view of the world solidified for me, I have only seen it confirmed, never denied. I built a pretty robust "BS Detector" that does not always tell me when people are telling the truth, but it is pretty good at telling me when not to trust them. "Trust" is a value in this world and in our culture -- especially the evangelical culture I live in -- that is far too often undeserved. The Bible teaches us to "Put not your trust in princes" (people) but rather to trust God.
So why all this meandering prologue? Just this: I live in the real world, and I need help navigating my course through the real world, and abstract theological principles are useless without evidence that they are consistent with the real world I live in. Could all those Feminazis be Wrong? I cannot know unless and until I investigate the evidence. OK, Felix tells me that the Bible is pretty clear on the subject. I believe him. I actually thought so before reading his paper, because I read my Bible rather carefully. Perhaps the so-called "Biblical Feminists" might be correct and the Complementarians Wrong (in the Real World, as distinct from Biblical teaching)? I need to know. Those who claim to base their ethics on the Bible but do not believe the Bible when it disagrees with their (otherwise determined) positions are liars, deceiving themselves if nobody else. That takes care of the "Biblical Feminists" but what about the atheists (and crypto-atheists) who unashamedly prefer their own opinions over the Bible, could they be right? I need to know.
This takes me back to my Areopagus essay, "What's Really Important." I can look at the Real World out there, and what do I see? Women really are different from men, and it's not just this culture, it's true in every culture all over the world. Everybody knows that men have stronger bodies than women, that's why there are separate men's and women's events in the Olympics. There is good scientific evidence that women's brains are wired up differently from men's -- but no right-thinking man in today's divisive "Politically Correct" culture would ever publish such a finding, so all the research has women's names on it (they are not wrong, and nobody on the other side of the ideological fence dares to claim otherwise). Also there are gender differences in one (only one) of the MBTI temperament indicators.
Let me focus on two undeniable physical differences:
First and most obvious, a woman cannot -- it is physically impossible for a woman to -- perform coitus on an unwilling man. She can entice him, but if he does not want to, she cannot force him. The reverse (though he ought not to do it) is both possible and far more frequent than anybody wants to admit.
Second, and not insignificant, men have greater upper-body strength than women. Who starts barroom fights? Not women. She can beat on her guy, but only if he lets her; when he wants it to stop, he wraps his arms around her and it stops, whether she wants it to or not. Who physically abuses their spouse? Not women, they cannot; men can always win those fights -- if they want to; pusillanimous or polite men might choose otherwise.
These two, together and alone, can and must have a profound effect on how a woman sees the world. A guy in leadership, if his subordinate refuses, can physically force submission; women can't do that and everybody knows it. Sure, talking your opponent down is better, but if Hitler wants to make war, you must make war back; talk didn't work and cannot. When the Bible was written, guns and A-bombs didn't exist, but even today, if you don't want to kill them, then the guy has the physical advantage. Everybody knows that. She can cajole, she can persuade (the word "manipulate" often gets used here), but if the guy doesn't want to, she's out of luck. That results in a very different kind of leadership than a guy can do. Where and when women succeed -- or perhaps only appear to succeed -- in leadership, it is solely by persuasion: unruly men under her supervision must be subdued by men, which gives those enforcement men on her team an implied power over their leader. That is not true leadership in the usually understood sense.
That, all by itself, is why you should not put a woman into a leadership position over men. No Biblical instruction needed. Paul only put into words what everybody already knows. As Felix admits in his paper, Paul was solving a problem in a local church, but he did it in the same way he and John did elsewhere in the NT, by appealing to common knowledge in support of his argument. No, the Apostle did not use force to impose his instruction on Timothy; not only did he not need to (nor was it possible at that distance), but Jesus told his Disciples not to be that kind of leader. Yet elsewhere Paul did tell church leaders to eject unruly people, and force might be required in those circumstances. Jesus used force to eject the moneychangers from the temple: imagine a woman attempting that! They would instead have ejected her.
I'm not saying that's the only difference, nor that what women can do is better or worse than what men can do, only that there is a fundamental difference in what kind of leadership is possible. There is no equality here.
So while I applaud Dr. Felix and might (slightly) wish to be in his place, I would have written a different paper, one that also shows that the Bible is neither wrong nor irrelevant in the Real World, at least not on this topic.
I suppose it's a quote from Oppenheimer -- or maybe what the film maker only imagines that he said -- but it has two major problems, either of which completely nullifies the fear it engendered back before they "pushed the button."
THERE'S A SMALL POSSIBILITY THEY'RE GOING TO
DESTROY THE ENTIRE WORLD
AND YET THEY PUSH THE BUTTON.
First, and most important from my perspective, God is God, and He's in control, and He knows the end from the beginning, and I read the last chapter of the Book, and it doesn't end that way. True, we humans are stewards (not owners) of the earth, and we are responsible to God for what we do with His property, but God is not likely to allow willful and stupid -- and even evil -- creatures to undo the "Good" universe He created. The same reasoning applies to "climate change" (see my essay "A Christian View of Climate Change" four years ago).
And that, ladies and gentlemen, is why the second reason works as it does. The scientific name for it is "entropy" and it's a one-way street. Anything that has extra energy tends to use it up over time. Now, if the world is only 6000 years old, there could be leftover energy lying around unused, but anybody seriously worried about the small possibility that they're going to destroy the entire world when they push the button thinks the world is a million times older than that. There may be some residual energy trapped in the world from 6 billion years ago (assuming it's that old), but it's trapped. A few stray neutrons are not going to set off all the uranium ore in the world as a humongous earth-sized A-bomb, and there are no fissionable elements in any significant quantity anywhere near their first A-bomb test. Hydrogen bombs require a rare isotope, harder to find and isolate than the uranium. And -- here's where entropy kicks in -- the earth is constantly bombarded by radiation. If enough radiation in a few milliseconds of a bomb explosion could set anything off, then the cosmic radiation over 100 million billion times that long would have already done it.
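The arithmetic is easy to check. Assuming a bomb's intense radiation pulse lasts a few milliseconds (my assumption for illustration) and using the conventional billions-of-years age the worriers themselves accept, the accumulated exposure comes out in the tens of billions of billions of pulse-lengths -- even larger than the figure quoted above:

```python
# Rough check of the "cosmic radiation vs. bomb flash" comparison,
# using the essay's assumed age of ~6 billion years and an assumed
# radiation pulse of a few milliseconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
earth_age_ms = 6e9 * SECONDS_PER_YEAR * 1000  # earth's age in milliseconds
pulse_ms = 3                                   # assumed bomb-flash duration
ratio = earth_age_ms / pulse_ms                # pulse-lengths of exposure
print(f"{ratio:.1e}")
```

The ratio lands around 10**19 to 10**20, so the conclusion stands on those assumptions: whatever a millisecond flash of radiation could touch off, billions of years of steady bombardment would have touched off long ago.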
OK, I read the rest of the rag...
Feature #3 is a woman writer interviewing a "woman of color" (Iranian) CEO of a company selling products mostly to women. She describes herself as "witchy" (pun probably intended). Her company has its own "glass ceiling" (75% women) and brags about giving them "access to reproductive rights" (by which she clearly means "non-reproductive rights") and makes it clear that her motives are as racist and genocidal as Margaret Sanger's ("Black and brown communities need it more than most of the rest"). I didn't see anything overtly political in this interview, but there is of course only one major political party in the USA trying to kill off babies, which (because it is possible) turns out to be preferentially by race and gender. In other words, genocide.
There's not much technology in this piece; her company has an app that got used during the interview. That's about like saying she used a phone, which she obviously did, but the article doesn't say so. And late in the interview there is a brief mention of a corporate acquisition described with the half-word "tele-" to make it sound like they do whatever they do with technology. That's it, nothing else. I think I won't be buying this company's products any more; I don't like supporting their/her agenda.
Feature #4, a guy this time, but another person of color (author Steven Levy didn't say so; he just mentioned the guy was born in India and has a non-European name). Levy is a good tech writer; (unlike the women) he asks the hard questions, but this is an interview, not full tech coverage, so we are stuck with the guy's answers, which are sometimes unsatisfactory...
For example, Microsoft is partnering with non-profit OpenAI to build what they hope will be "superintelligence." That presupposes that what they are building it with is actually intelligent, that is, if a person did it we would call it smart. What we are seeing is fast recognition -- the recognition of correlation between descriptors and existing text (or code) that some intelligent person wrote -- but not intelligent composition. That's what OpenAI is doing, it's what GPT is doing, and it's what Microsoft is embedding into their systems. It will be an improved Google in the sense that Google only looks for individual keywords, and Bing will now recognize the composition of words as different from the individual words themselves, but that's nothing more than giving phrases the status of words. You don't need a neural net to do that; you can do it much faster on less hardware with some smart programming. OK, you can't expect Steven Levy to know that, he's a writer, not a doer -- remember, "Those who can, do; those who cannot, write about it." Levy does admit "We really don't know how these things work," and he's referring to the practitioners. The response was a frank admission that "These are not technical problems, they're more social-cultural considerations." After Bing (and Microsoft in general) starts serving up the best human creativity that is online (the internet is what OpenAI builds their training data from), then nobody will get any smarter unless and until people get smarter, that is, until smarter stuff starts showing up on the internet. But you know what happens when people gain access to smarter tools? They get dumber, not smarter.
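The "phrases as words" point is easy to demonstrate with no neural net at all: an ordinary inverted index keyed on word pairs does it. This is my own minimal sketch (the two documents are invented), not anything Bing or Google actually runs:

```python
from collections import defaultdict

# Two toy documents standing in for a web corpus.
documents = {
    1: "the quick brown fox jumps over the lazy dog",
    2: "a quick brown cat naps on the lazy dog",
}

def ngrams(words, n):
    # All n-word phrases in order: "the quick", "quick brown", ...
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Index every 2-word phrase, giving phrases the status of words.
index = defaultdict(set)
for doc_id, text in documents.items():
    for phrase in ngrams(text.split(), 2):
        index[phrase].add(doc_id)

def search(phrase):
    # Exact phrase lookup: one hash probe, no neural net required.
    return sorted(index.get(phrase, set()))
```

A query like `search("brown fox")` finds only document 1 while `search("quick brown")` finds both, exactly the word-composition distinction described above, and the lookup is a single hash probe.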
What did that politician say 30 years ago? "It's the economy, stupid." Economists tell us that the nature of economics is to manage scarcity. That works in ways they don't even realize, like the wealth of the USA today comes not from being rich (which we are, compared to most people in the world), but by wanting to be richer without cheating people, essentially the Golden Rule. The Saudi royalty have far more wealth than most of us, but it did not make them innovators. Their cultural heritage got wealth by beheading people and some of them are still doing it (and the Saudis are paying for it). We have wealthy people too, but they got that way by building on the innovations created by people with far less money. So this guy has it all wrong when he says
"I'm not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That's a fantastic world to live in." He came from one of the two most populous countries in the world, where the per-capita GDP is 1% of the USA's, and anybody who wants to succeed there must work very hard at it. And he did. It helps that the resources are here, but his drive is far more important: 300+ million people who were born here didn't succeed like he did. Even if his AGI (whatever that is) does give "abundance" to 8 billion people, they won't be any better off than they are today, except maybe some of them will drive faster cars and eat more meat, but the wage gap will be bigger than it is today, just like it's bigger today in the USA than it is in India. That's how the economics works. It's the social-cultural considerations (especially the Golden Rule heritage, which, like the tide, lifts all boats). "To him who has, more shall be given."
Feature #7 is about another "person of color," mostly musing about the social conditions on Twitter, and imagining that his somewhat racist and political perspective will help him replace it. I found it difficult to tell which musings were the author and which were his subject. No tech to be seen. I am not on social media at all, so I'm not really qualified to address his claims, except to recall that Panasonic's VHS beat out the technically superior Sony BetaMax by allowing "not-nice" material on their media. It's not a pretty picture. People of color from countries other than India and China are too busy being (or mostly only feeling) victimized to have time to understand economic issues. Not my problem.
Feature #8 is a collection of seven pages, each profiling one woman allegedly involved in AI someplace, but mostly the captions have them not as practitioners, but fighting the practitioners. "Those who can, do; those who cannot, teach." All seven pictures feature a normal mostly frontal face view against a background of one or more other (mostly back-side) views of the same person, or at least it looks like it could be the same person, which background we are told is "AI-crafted" (but hardly seems remotely intelligent). Score one for a preview of what the "abundance" Microsoft is planning for us looks like.
The shorts at the front of the magazine (but listed on the third ToC page) are unusual in the rather high number of male writers (three out of ten or twelve). I generally read the mag front to back, but I can't say my life is better for having spent the time.
Anyway, about the middle of my pacing run I hear this tiny chirp that sounds vaguely like the low-battery chirp on a smoke detector. So I stop and turn my head to get a fix on it (see "Nothing About Us Without Us" last year), and the sound is coming from my right, where my work room is. I step closer to that room and the sound disappears. I go back to the hall doorway and turn my head the other way. Now it's coming from the rear of the house. Back into the hall and it disappears again. I return to the doorway and now the sound is coming from the direction of the front door. There are no smoke detectors in that direction; they are all behind me in the rear of the house. Is it a live cricket? In my befuddlement I walk slowly through the doorway toward the front of the house, and the sound gets louder on the left, then disappears again. Back and forth a couple of times, and I begin to realize that the sound is reflecting off the painted doorway jamb. It is actually coming from the kitchen to my right. The kitchen faucet drips about once per second, exactly the same rate as the "chirp" sound I was hearing. Two reflections -- first off the kitchen cabinet, then off the door jamb in the next room -- altered the sound from a "ploop" to a much fainter "chirp." Problem solved.
A couple of days later, again pacing back and forth, as I approached the rear bathroom I heard the sound of rushing water coming from the bathroom, and I wondered if I'd left the faucet running. I went to the door to see, and no, the faucet was off. The sound also disappeared. I stepped slowly backwards and the sound re-appeared, this time clearly from the wall panel to the left of the bathroom door, which was reflecting the sound of the box fan driving cooler air into my workroom. Again a double reflection altered the sound, from a humming roar to a whoosh of river water. I spent much of my youth in an open house some 100 yards from a roaring river; I know the sound well. It's similar to heavy rain on a metal roof (same house, during a storm). You don't hear sounds like that in well-insulated American houses, and nobody lives that close to roaring rivers.
It's all part of my natural curiosity, how things work. You must want
to know how things work before you can tell a computer how to make things
work. I think it helps give me a competitive edge over other, less curious people.
OK, they are still trying to be goofy. It used to be that you couldn't find the page numbers. One issue they featured something Chinese, so they did the numbers for that article in (I suppose) Chinese digits. Then they used a stupid font that made zero look like eight, the way the Chinese idiots do with the numbers in telephones. At least you can read the page numbers this month. Instead they divided up the Contents page into three pages with facing ads. The first two are "Features," then the other stuff. I think it's less goofy than marketing: they can charge the advertisers more for pages that face editorial content, which is pretty sorry.
Feature #1: Christopher Nolan is a low-tech filmmaker doing a flick about a DWM (dead white male), one of the people responsible for the first atomic bomb. Some people worried that it would destroy the world. Obviously it didn't. Then for three or four decades people worried that the Cold War would do it. Ditto. Now most of the articles in WIRED worry instead about climate change. We're coming to that, but ditto. Nolan and the woman who interviewed him took a little time out to worry about AI. If I didn't know that God has everything under control AND that entropy won't allow self-programming robots any more than it allows self-programming DNA, I might worry about it too. But you see, I know how this tech works. Self-aware robots won't happen in my lifetime nor yours, and probably never. Stupid programmers releasing into the world unthinking robots that were not thoughtfully programmed to do good rather than harm -- that is already happening (and real people get hurt), as sometimes reported in WIRED. Of course they don't realize the implications of what they are saying when they report those gaffes. How can they? They are not tech people.
Feature #2 in ToC order: Virginia Heffernan, who formerly wrote the monthly editorial, interviews some policy wonk in Biden's Cabinet on "neoliberalism, masculinity, and Christianity." Her last feature piece did a jaw-drop on the influence of Christianity at TSMC, the Taiwan Semiconductor Manufacturing Company (see "WIRED on Hardware" a couple months ago). Most people -- obviously including Heffernan -- cannot imagine Christian ethics making the world a better place (and therefore more profitable for the people doing it, as well as for everybody else). If this guy is an exception, she did not say so clearly. As for "masculinity," he admits to being gay. So much for that.
So what about "neoliberalism"? Near as I can tell, it's like the joke,
"A Republican is a Democrat who's been mugged." The word "liberal" means
open and accepting, but the people who bear the label tend to be knee-jerk
heterophobic anti-Christian left-wing bigots. OK, this guy admits to being
Episcopalian (same as I guessed of Heffernan two
months ago), which is about as far as you can get from the classic
Christian faith without losing the name. Neither Heffernan nor her subject
seem the least bit uncomfortable demonizing the half of the country who
voted for Trump. But this guy at least seems to realize that his politics
was fundamentally flawed, the Real World does not work that way, and the
"neo-" in his new label is an admission of that fact.
But under the make-over, he's still the same unthinking wonk: he talks about "climate change" as if he imagines his job is (in part) to mitigate it, and yet he crows over new bridges -- making it easier for the American people to drive farther and faster and burn more fossil fuels to do so -- and brags about driving a muscle car and enjoying switching it into "power" mode, completely oblivious to the fact that every time he roars down the road under that high power, he is releasing unnecessary carbon into the atmosphere. So what if it's electric? If he ever charges it from a charger connected to the power grid, fossil fuel had to burn to make those electrons. That's because solar and wind and geothermal energy cannot fill all the electrical needs and desires of the nation; the difference is made up from oil and gas. The less he steps on his pedal, the less carbon goes into the atmosphere. Nobody -- especially, it would seem, the politicians currently in the White House -- cares enough about so-called "climate change" to give up their own money and comfort and pleasure to mitigate it. Even if they could. The difference between the "neoliberals" and the other half of the country is that the rest of us are not forcing the country to pay for what we ourselves don't want to pay for. They are hypocrites, the whole lot of them.
Skipping over a couple I have not read yet (see next week) -- mostly because I read front-to-back, which the ToC is not -- Feature #5 (first on the second ToC page), their nod to technology is the use of ground-penetrating radar, which does not help them at all in the main topic, which is to criticize the church-run boarding schools for child abuse. The GPR did not find any bodies of children. They were soooo disappointed.
Feature #6 has not a hint of technology. It's all about some scammers who, like the two guys featured in WIRED a couple years ago (see "Operation Warp Speed for Climate"), figured out that when the government gives money away, taking the handout is easier than working for a living. This time it's more of a tax on the businesses that create wealth and prosperity, sort of like the wealth-transfer tax on soft drinks here in Ore-gone -- they say it's a "refundable deposit," but the time it takes to redeem those cans is worth more at minimum wage than the deposit itself, so the only people willing to do it are the underemployed and unemployable -- anyway, so-called "carbon credits" are like ObamaCare: a tax paid to somebody who has less need for the money than the person paying it, except in this case -- the article says none of this -- the beneficiaries do nothing of any value at all, they just redefine some part of nature as using up carbon, and sell the label. They get unsuspecting third-world governments involved, and the government take is (so they said) 15% of the revenue, which can be spent on conservation or (they didn't say this) the government employee's Swiss bank account, or whatever. The other 85% is pure profit for the shysters who thunk this up, Swiss bank account not needed. And what qualifies as free carbon credits for the taking? Tropical rain forests and sea grass. These plants are already sucking all the CO2 out of the atmosphere they ever will, so selling their operation as credits can only enable dirty factories to legally spew more (not less) CO2 into the air.
The real kicker: they are also selling the existence of whales as carbon credits! What they don't tell you (and you cannot find on Google) is that each one of those whales breathes out as much CO2 every day as a hundred SUVs driven 30 miles each. You can find the numbers and do the math, but Google doesn't make it easy. The difference is of course that the SUVs burn fossil fuel while the whales eat krill that eat plankton, some of which make food from CO2. And you get back the rest of the carbon, what they ate but didn't burn off swimming, after they die. Yes, they sink to the bottom and decompose, and all that carbon eventually returns to the atmosphere. It's a steady stream coming and going, there is no net sequestering that anybody is telling us about. The whales don't do anything to reduce atmospheric carbon, they only add back to it what the sea plants in their food chain took out.
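The SUV half of that comparison is at least checkable arithmetic. A minimal sketch, assuming the commonly cited average of roughly 400 grams of CO2 per mile for a passenger vehicle (that per-mile figure is my assumption, not from the article, and the whale side of the claim is left as the author states it):

```python
# What "a hundred SUVs driven 30 miles each" adds up to per day.
# Assumption: ~400 g CO2 per mile, a commonly cited average for
# passenger vehicles; actual SUVs vary by model and driving.
G_CO2_PER_MILE = 400
suvs = 100
miles_each = 30

daily_kg = suvs * miles_each * G_CO2_PER_MILE / 1000  # grams -> kilograms
print(daily_kg)  # 1200.0 -- about 1.2 metric tons of CO2 per day
```

So the claim amounts to saying one whale exhales on the order of a ton of CO2 a day, which is exactly the kind of number Google does not make easy to check.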
It was a guy who wrote this article, and unlike all the women authors, he spent the last 15% of his article asking the hard questions, including some I had already come up with while reading, before I got there. He has no answers, and the people he asked seem to dodge the questions. But he asked. Just not about the whales.
The bottom line: "The fool and his money are soon parted." Remember
the housing bubble a while back? They were doing some math tricks with
mortgages that nobody understood. This is the same idea, except back then
there was at least some real property behind the money; this has nothing
at all behind the curtain.
All three of these can best be described as technical disinformation, which is false ideas being promoted in service of an agenda that is itself contrary to fact.
Today I want to focus on the middle one, subtitled "how they broke a computer-science glass ceiling." This line was added by a feminist editor; there is nothing in the text about any "glass ceiling," and the very idea was invented decades after the events described in author Kathy Kleiman's book -- right: a female author, a female interviewer, and a female editor adding an inappropriate political agenda.
Let's start with the actual facts, which were public information long before this book was written: the first "computers" were women using desk calculators to compute the ballistics tables that gunners in the war needed to aim their big guns at enemy targets. There were no men among them; if there was a "glass ceiling" there at all, the women were the glass ceiling, because men were not invited. The fact is that women did the job better, and when you have a war to win, you hire the very best there is to do the job, never mind what gender or race or sexual orientation or religion they happen to be when they are not working. We won the war, in part because we did that. We still do that, because many businesses treat their competition as "war by other means."
Later the engineers got around to building a hardware computer (ENIAC), and again the best qualified people to program it were the women who knew the math the ENIAC was to perform. Later computers required different skills, and the women no longer could compete. Or wanted to. Some can and do, but not many.
Not everybody thinks like a nation at war, not even every nation at war. Back when the USA was embroiled in Viet Nam, Israel got attacked by surrounding Arab countries. The Israeli military saw it coming and was prepared, but the Arab attackers were not thinking clearly -- obviously: Israel had (and has) God's Blessing, anybody fighting them is fighting God and cannot win. Israel mopped them up in six days. So the joke was, General Westmoreland (our guy running the Viet Nam war) asked Moshe Dayan (the Israeli general) how he did it, and his answer, "Well it helps if you are fighting Arabs." I heard similar stories about the Arabs in the Gulf War.
Anyway, with the possible exception of Israel, the United States is the most egalitarian country in the whole world and in all time. The (now former) Soviet Union was a close runner-up, but it did not prevent them from imploding. The real issue is, Can you do the job? A big part of that is, Do you even want to?
A few years ago I was sitting in the (Assistant?) Dean's office in the College of Engineering at Portland State University. He was bemoaning the fact that a couple years earlier they had 50% female entering students, but "now it's down to 25%." I knew immediately why that was. So-called "STEM" subjects require a focus on how the Real World works, and it helps if your top personal value is Truth, which it is for MBTI "Thinkers." The other end of the spectrum in that dimension, "Feelers," value affirmation (unconditional acceptance) higher. That would interfere with doing a good job in technology, but actually be helpful for, say, a bank manager. There is a gender difference in that one dimension only: twice as many men are Thinkers as Feelers; for women it's the other way around. Just on the basis of that statistic alone, you should find twice as many men as women in tech careers, apart from any discrimination. And (so I once read) in the former Soviet Union, they had about 30% women in the tech industry. Look in any bank, you see twice as many women in the offices as men. They are doing what they are good at.
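That two-to-one expectation follows from nothing but the stated splits. A minimal sketch, assuming equal-sized male and female populations and taking the text's 2:1 and 1:2 Thinker/Feeler ratios at face value (illustrative fractions, not MBTI norm data):

```python
# If 2 of every 3 men are Thinkers but only 1 of every 3 women is,
# a field that draws on Thinkers should skew 2:1 male before any
# discrimination enters the picture.
men = women = 1000            # assumed equal-sized populations
male_thinkers = men * 2 / 3
female_thinkers = women * 1 / 3

ratio = male_thinkers / female_thinkers
print(round(ratio, 2))  # 2.0
```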
What happened at Portland State? Simple. Boys are second-class citizens in American public schools, all the feminist counselors are pushing girls into tech fields. They got there and discovered they didn't like doing that kind of work, it was too hard, so they told their younger sisters and friends, "Don't make that mistake!"
What's the big deal with STEM anyway? Nobody worries about a "glass ceiling" in other male-dominated industries like truck driving or garbage collection or plumbing. Why is that? STEM is more money, pure and simple. There's a reason for that, it's called "supply and demand." There aren't enough qualified people to do the work. If it were just a gender thing, pure prejudice against women, then more guys would fill the vacant jobs and the wages would go down. The fact is, guys don't want to do it either -- or cannot. The only people who do it well are nerds, and nobody wants to be a nerd, not even the guys who are nerds. They (we) are social misfits. Our brains are wired for highly focused detail work that requires black-and-white absolutist thinking. Yes, anybody can do it, but it takes a lot of concentration and focus, and most people just don't want to. Not even for more money. Women can do it, if they want to, but it might be harder for them than for the guys. It's not a social activity. There is affirmation, but only for performance, never for who you are. Of all the students I taught programming to in the (now defunct) remote programming class, the very best student by a long shot was a girl. When I asked, she had already chosen some other subject for the following year. She could do it, but she didn't want to.
That, ladies and gentlemen, and not any such thing as the mythical "glass ceiling," is why STEM is dominated by males. Nobody wants to do it, but there are more guys willing and able to do it than girls. Supply and demand. With the ENIAC programmers it was the other way around, and nobody complained about any glass ceiling. Women want affirmation, including the prestige that comes with a high-paying job; guys mostly just want to get the job done. It is getting the job done that earns the high pay, and women can have it -- if they want it.
Maybe there are a few sexist managers who want to hire women because they are women (and not because they do the best job), or men because they are men (for the same stupid reason), but they won't succeed in today's competitive environment, and who wants to work for a company that will fail? The same logic applies to any other discrimination unrelated to performance.
Speaking of non-functional performance, we have the IEEE,
the professional society that owns the Institute and its host rag,
and they hire women writers and editors for their mags. Remember that line
I sometimes quote, "Those who can, do; those who cannot, teach"? The function
and purpose of publication is to teach (or entertain, but this is a professional
society), and all the people who are best able to do the work the society
exists to help them do, they are doing it, not writing about it. So they
hire the other people, the wannabes who can't quite make the grade on the
shop floor, as editors and reporters. Writing is something women generally
prefer to the headache tech jobs, so that's whom they hire. And most women,
their highest value, what they most care about, is giving and receiving
affirmation (not necessarily by actually doing any work to earn that affirmation,
and especially not doing work that may not result in affirmation), so they
push their employer to make "diversity" a part of their core ethics --
"diversity" is code for discriminatory hiring, specifically choosing to
hire people because of their race or gender or some other non-performance
reason -- so guess what you see in this month's Institute.
The first profile is a person from a third-world country; he says he's from Iran. Then a few not-quite-technical pieces, followed by one on women. It finally ends on a person of color, a teenager -- they don't say so, but his family is from India, which is very curious, because India recently passed China as the world's most populous country. Look at any gathering of tech people: after Americans (who dominate for reasons other than skin color), the two largest demographics are Chinese and Indian. There's a reason for that. With national economies one or two orders of magnitude behind the (front-runner) USA, they must work very hard to get anywhere at all. Furthermore, at four times our population, they have a 4x better chance at overwhelming the American presence (all other things being equal, which of course they are not). The remaining 60+% of the world, their representation is so small as to be lost in the round-off error. It's not skin color.
So basically this kid has a tremendous advantage, mostly a family pushing him hard to succeed. He does not need our affirmation, he's going to earn it. Young people looking at his picture in the Institute are not going to say "He looks like me, I can do that!" The ones who look like him are already succeeding without any affirming help, and the non-India, non-Chinese, non-European origin people this profile was intended to help, they already know they don't look like him, and if they were to think hard about it, they'd also know they lack the cultural advantage. So much for the editor's obvious intentions.
What about his technical accomplishment? He's a kid, he lacks the experience and wealth of knowledge that comes from even going to college, let alone being out in the world and looking at stuff for a couple decades. So he got a patent? Big Whoop-de-Doo! Long ago, when the Supreme Court ordered the US Patent Office to issue patents for software innovations as well as hardware, the PTO chose not to build the technical expertise to determine whether a claim is in fact patentable; they just issue patents to all comers. Let's say his "app" actually works and accurately diagnoses folate deficiency by looking at fingernails. So -- they didn't say this, women writers seldom ask the hard questions -- pregnant women get a one-shot notification of vitamin, calcium, zinc, and other deficiencies. Did anybody bother to notice how fast fingernails grow?
Google Knows All. Fingernails grow about an eighth of an inch per month, and the growth happens about a quarter inch under the cuticle, so after you start taking supplements, there's nothing to see for two months, and by the time this kid's app can see a difference, your baby is already born. So you buy this app -- or maybe it's free, the article didn't say, but everybody wants to "monetize" their product -- and start taking supplements, then when you ask the app again, "How am I doing?" it will tell you there's no progress: all those vitamins and minerals you paid good money for are completely worthless. Not really, it's the app that's worthless. But the kid had a lot of fun making it, and got good publicity for the next time he wants to make a worthless app.
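The two-month delay falls straight out of the growth figures quoted above. A minimal sketch (the half-inch "clearly new nail" length in the second calculation is my own illustrative assumption):

```python
# How long before nail formed after you start supplements becomes visible?
GROWTH_PER_MONTH = 1 / 8   # inches of nail growth per month
HIDDEN_LENGTH = 1 / 4      # inches of nail still under the cuticle

months_until_visible = HIDDEN_LENGTH / GROWTH_PER_MONTH
print(months_until_visible)   # 2.0 -- nothing for the app to see for two months

# To show a clearly "new" half inch of nail for the app to judge:
months_for_half_inch = (HIDDEN_LENGTH + 1 / 2) / GROWTH_PER_MONTH
print(months_for_half_inch)   # 6.0 -- well past a typical remaining pregnancy
```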
That's what happens when you value affirmation over facts.
As you probably know, I've been reading BAR for longer than I've been blogging. It's a wonderful source of information about the land of Israel in Bible times, mostly written by people who don't want to believe in the God of the Bible -- or at least they don't want us to know that they believe, which is almost the same thing if you believe Jesus knew what he was talking about -- but even the pagans and the atheists, deep in their hearts, they know there is such a thing as Moral Absolutes, and that Truth is one of them, so they tell us true things about the places where Moses and Jesus and Paul walked.
Hershel Shanks was a Jewish lawyer living in New York back when there were more Jews in New York City than there were in Israel. If I understand the facts correctly, he visited Israel and did the tourist thing, and was enthralled by the archaeological stuff. So he started a newsletter telling about the stuff he was learning. It became BAR. He was not a Christian, probably not even a religiously observant Jew, but as a descendant of Abraham, he (along with other Jews I have known) has the Blessing of Abraham, which we Christians only get second-hand, but even the "scraps that fall from the children's table" have blessed the Christians in America's heritage -- and through them all the USA -- far above the people from Ishmael, the other son of Abraham, and certainly above the people living in third-world countries like China and (North) Korea and India and sub-Sahara Africa, where they have no idea where the good things come from.
Anyway, like most of the Jews I have known -- perhaps all of them, but I'm not sure of that -- Hershel knew how to run a business, and he knew how to turn his hobby into a business, and how to find things to print that we Christians were willing to pay to read. No matter what your business, the way to be successful is to make products that people want to pay for. Hershel knew how to do that. It was awesome.
But people get old and eventually die. I myself can see my own end approaching. But being good at something, be it business or programming or teaching kids to program, is no guarantee that you are good at finding somebody else who can be good enough at the same task to take over when you are gone. In fact you probably cannot, because anybody good enough to do it like the founder has probably founded his own organization, and why would he want to leave that to run yours?
Not all of this was obvious to me when Hershel Shanks retired and left BAR in lesser hands, so I tried to be helpful, which usually isn't taken as helpful (even when it is). I wrote letters. Maybe they read them, but they didn't print them. Sometimes other people said similar things, and they did print those. It looked like they were disparaging one of them last year, so I tried to encourage them to see us as a class of paying customers they might want to pay attention to. Here's what I said last November, with the part they printed in red and the words they added in green:
You might want to pay more attention to reader Eva Best [Winter, p.6]. (Like I said, it wasn't written with print in mind, so I guess I can't fault their trim. At least they read it.)
However much you-all might wish otherwise, the reason there is such an idea as "Biblical Archaeology" is the fact that most of the people who care enough about [biblical] archaeological history to spend their own money on it, they care only about Biblical Archaeology, not bogus stuff about the Bible apart from archaeology (which better quality is available elsewhere from editors less hostile to the Bible), [do] not [care] about [the] random archaeology of places not mentioned in the Bible, nor even a second-hand rehash[es] of other people's work [done] some years or decades or centuries before, but new (spade-in-the-ground) archaeology in Biblical places by the people doing it. Like Andre Lemaire this issue. 25% (one out of four) is better than 0% the previous issue, but about average over the last year or two (Shanks was closer to 75%).
Hershel Shanks knew or figured out how to find that stuff, and if he couldn't get the archaeologists themselves to write it up, he told us about it in his "First Person" column, which I always read first when BAR arrived.
If you cannot recover Hershel's focus, you will lose us. Already I cannot in good conscience recommend BAR to my friends and family. You don't need to take my word for it, ask your own ad department who your readers are. More than anybody else, BAR ads target Evangelical Christians (no booze, no smokes, no expensive cars). But act quickly, lest when our subscriptions expire we stop renewing.
The person who takes over an institution like BAR when the Founder leaves, has a tough row to hoe. With print revenue falling all over, this is not a good time to alienate your core readership.
For more information (on content, targeted ads) see my blog post last year:
Grants Pass OR
PS, I read numerous magazines, and some of them, the editor wastes their valuable first page re-interpreting the rest of the rag. If I'm going to read the whole issue, I skip over the trailers at the front. When I only have time to be selective, the Table of Contents is a much faster and more informative way to pick out what to read *now* before setting the issue aside for later consideration. Have you considered filling that valuable first page with something of value, something I want to read because I won't see it anywhere else?
Other relevant blog posts:
Well then, let's not raise the minimum wage, let's cut the hours that people work for that wage. That's even worse. Instead of doing the same work for the higher wage, they are doing less. They get paid the same for doing less work, so the company that makes the food these folks need to eat must hire more workers, and raise their prices even more to pay the higher costs. So the prices go up, and these folks who thought they had more time to spend with their families, instead they must go get a second job just to pay the bills they previously paid on one salary, and that means double the commute time and more air pollution, more atmospheric carbon (if you care about such things: never mind what they say about it, the government and the IEEE don't really care, or they wouldn't be considering stupid ideas like this).
Oh, and by the way, not everybody is harmed equally. The fat-cat business owners and stock speculators who make far more money than they need to live on, they will make a little less, but they won't notice it, they just have less money left over to stash in their tax-shelter foundations, and less for the government to tax when they die. The real losers are those on fixed income like pensions or the unemployed. Full disclosure: I'm one of those. Our costs go up like everybody else's, but our income -- in my case, none -- doesn't go up at all.
The only way to raise the standard of living for people is to enable
them to produce more for the money you pay them. Producing more product
for the same income creates wealth, and "the rising tide lifts all boats."
Everybody benefits. We in the US of A have the benefit of 500+ years of
people reading the Bible in our own language and (many of us) believing
it, so we work harder and make life better for other people, and the result
is that the USA is the richest country in the whole world and in all time.
Paying people more to do less goes in the wrong direction. It's a good
way to make all Americans (always excepting the fat cats) as poor as third-world
countries like Cuba and Venezuela and Zimbabwe and Russia.
WIRED comes out in the last week or two of the month, and the cover story is about some woman -- if it's a woman, that's deemed to be news, because all the editors are women, and women like to read and write about women as women (not necessarily because they did something significant, although that helps) while men are generally more interested in the significant stuff they did, but as my father told me, probably more than once, "Them what can, do; them what can't, teach," where writing in magazines is a form of teaching -- anyway this woman is pushing for deep drilling (like they do for oil) but for heat, to supply geothermal heating and electricity. We already do this in western states where volcanic activity brings that heat near the surface, but she wants to do it everywhere.
The problem is that oil flows through holes in the ground much more readily than heat. All the loose oil has already been sucked out; what's left is stuck inside the rock layers, which the drillers need to break into tiny chunks (called "fracking") so the oil can seep out and flow into the pipes that bring it to the surface. Fracking probably destabilizes the ground, which is one of the environmentalist criticisms (think: sink-holes). Heat doesn't move through rock any better than oil does, so fracking is probably still required in order to pump water down one pipe and get steam back up a second pipe. Besides, there's far less energy in the heat that's there, as compared to the oil (when it's there), so they need to drill a lot more holes in the ground and frack a lot more rock to get the same energy out. They didn't say that; it's physics, not geology. The drillers need to know geology, but somebody else needs to know how to convert crude oil into usable energy (chemistry at the refinery; physics for designing motors, not drilling wells).
The result is that the RoI (return on investment) just isn't there for geothermal. The investors can do the necessary math (otherwise they go broke), and (as quoted in this article) there is no RoI. What the money people know doesn't work, can still be solved by government, right? Jamie Beard thinks so. However, government wonks cannot change the laws of physics. So maybe she will persuade some idiot government wonk (think: Biden) to pay for expensive holes in the ground to pipe water down and up, and then it will quietly get capped over (or not) when (not if) it doesn't work.
As usual these last few years, almost everything in this rag fails to qualify as the title topic "wired" (meaning electronic, typically digital), the cover story included. One feature and one, maybe two short one-pagers, qualify. The guy whose column usually comes closest, isn't, except for a couple sentences naming the technology he used to archive his late father's poetry.
The feature that might qualify as digital spends most of its ink on the social aspects of various forensics people tracking down an assault on government computers and the companies purporting to protect them, almost nothing on the technology itself, certainly nothing on the root cause, which is that the security model in Unix is broken, and the entire internet is built on Unix technology.
The current IEEE Spectrum also has a feature
bemoaning three teenage boys shutting down portions of the internet. It's
the same problem, and it's not going away. The author pontificates that
the only cure the government has is to find and arrest the perps when they
show up, and to hope they don't publish their technology -- these kids
did, so now there are dozens of perpetrators all using the same mechanism,
dozens of kids to arrest and persuade to stop. Or put in jail, as they
did to John "Captain Crunch" Draper who did his jinx to the phone system
before there was such a thing as the internet. The phone company fixed
their hardware to make it impossible. You could do that to the internet
and ditto, but it ain't gonna happen.
The IEEE used to mail out a newsprint quarterly, the Institute, whose focus is the people, mostly women (the editorial staff is all female). Now they just give it some (18 this month) pages in Spectrum four times a year. The first profile article this month is a guy trying to "Stop the Spread of Disinformation," which he defines essentially as "people who disagree with me," except he doesn't use those words. He thinks the way to do this is to "help people monetize their data." I did not see in this write-up what it means to "monetize" data, nor how he expects that to "disincentivize" people from saying things they believe to be true on public forums, but it seems that the "disinformation" he is trying to eliminate is essentially the same as the "misinformation" that was targeted last year in ComputingEdge, except he clearly gives it a political definition, and he seems to think it causes people to be angry. Me, I think rock music makes people angry, and spreading disinformation is merely what they do when they are already angry for other reasons. Anyway, this guy is a hammer looking for a nail, a hardware guy with the silly opinion that more hardware will solve a moral problem (people doing Bad Things to other people) that he sees as an economic problem, by somehow making the failed political ideas of Karl Marx (whom he did not mention by name) work. Ad-supported internet connectivity works because most of us are unwilling to pay for it up-front and too lazy to curate our own data. I think I said all that in "Web3" I posted a year ago.
Enough for today.
And then I got to thinking, "What's the point? Who is going to read this?" Some 30+ years ago I had a programmer working for me, and one of his duties was to keep a log of what he did every day. From time to time I would read it to see if what he was doing was valuable (worth what I was paying him) or maybe needed redirection. I thought the log was worth the time he spent keeping it, at the $40/hour his time cost me. During or shortly after that time I started journaling my own work, initially intended for the same kind of review, but I found I was not using it for that; I mostly did my self-analysis in my head on the fly. I still use my log to track major decisions and (more often) current bugs and (sometimes) their fixes, especially as my memory falters and I need to remind myself each morning what I'm working on. Is that a useful model for "Spiritual Disciplines"? I cannot imagine how that could be valuable.
Besides, I'm a forward-looking guy. Old people tend to talk about their past a lot more than their future, and maybe I do that a little, but I try to keep my thoughts in the future. My father often quoted Ph.3:13, and I kind of liked the sentiment. Still do.
Today, this morning, I realized that this weblog more or less serves the purpose of Spiritual journaling. I was contemplating a strange dream that came to me yesterday, wondering if it's worth mentioning here. Sometimes I vacillate between supposing that dreams might be significant (as in Bible times and in Muslim countries today, see my blog posts "Artificial Life, the Dream" and "The Dream" both 15 years ago) and the modern western notion that they are just subconscious exercises, "full of sound and fury, signifying nothing." I guess it was in Sunday School a couple weeks ago, the pastor/leader quoted some guru on a small number of rules to determine if a person should say what they are thinking. One of those principles struck me as particularly Biblical: "Is it beneficial [to the listener]?" That's really the essence of the Golden Rule that I now see as the centerpiece of all ethics and the Christian message taught all over the Bible, not just what you say, but everything you do and think. I couldn't think of any benefit anybody might derive from me retelling my silly dream. Most people talk about themselves because they want to be affirmed. They are Feelers, I am not.
I guess I could be more careful about making sure my blog posts are beneficial, but I think most of them are already. Or at least not harmful. All this came out of thinking about journaling, and the realization that this blog is as close to what they were recommending as I will ever come.
The sermon series at church is currently doing selected topics from Numbers and Deuteronomy, and the pastor's Father's Day sermon yesterday came as close to hard-selling Sabbath-keeping as anybody could get without actually making it a requirement for all Christians, no exceptions. I happen to agree. He could have said (but did not) that the Sabbath was created for us people thousands of years before any other Law of Moses. He did say that God rested on the Seventh Day, and (in effect) if it's Good for God, it must also be good for us. As far as I know, all senior pastors are Feeler/Judgers, and Feelers desperately want to affirm and be affirmed, and this guy is no exception, so he pulled back a little from a hard "one day off after every six" and allowed splitting that one day off across two or more days. Me, *I* wouldn't want to be cutting things that close, but I don't have his job. Whatever.
Postscript, the "Spiritual Discipline" bookmark they put out for July was mostly a sermon rehash, except there was one line -- perhaps it works for most people, but I doubt it:
"On your first sabbath day... don't plan anything. Just do whatever comes to your mind and heart at the moment..." Umm, that looks like a line right out of Zen: "Empty your mind..." It's basically what I did before I started sabbath-keeping: whatever I wanted. God's people did that too, "In those days Israel had no king; everyone did as he saw fit" [Judges 17:6 oNIV]. Most people (especially pastors) reading this line in the Bible understand it as Not A Good Thing.
The problem the research tried to address -- they began with a movie clip of a Hitler speech and the picture of the iron gate "Work Makes Free" (in German) from Auschwitz -- is how ordinary people could do such villainous torture. So they had this university professor invent a torture machine that would give painful electric shocks to an unseen victim, but the operator could hear his cries of pain. It was made out to be a "learning experiment" using ever increasing shocks (up to some 400 volts) to punish failure to learn spoken word pairs. Everybody understood and presumably agreed to participate, and they were asked whether they thought they would be able to continue giving shocks at the highest level. They showed three respondents: one said yes, one didn't think so, and then a black dude who said he was not into giving pain and wouldn't participate at all. Later in the flick somebody claimed that torture is something a "WASP" (White Anglo-Saxon Protestant) would do, not Jews or blacks who have been victims of torture.
Me, as soon as they mentioned the highest voltage, I thought to myself: I once got a shock of 400 volts and it threw me across the room; that could kill you. Besides, as regular readers of my blog know, I'm not into coercion at all, so I also would have refused to participate. And I'm a WASP, very strongly on the "P" part. It's not a racist thing. Protestants have also been victims of torture in other parts of the world, and our own Mayflower Pilgrims (neither Jews nor blacks, all of them) came here to escape it. Torture is part of the effects of sin, and we are all sinners. *I* got tortured in Middle School (they called it "Junior High" back then), nothing like Protestants in China and North Korea and Saudi Arabia today, nor the Jews (and some Christians) in Auschwitz, but enough to "spoil my whole day." Kids do that to each other. If you believe the movies, blacks do that to each other. Gangsters do it all the time, whatever their race. It is the nature of sin that innocent people get hurt.
One of the fictional parts of this flick is that real professors don't run their own experiments, they get their grad students to do the work. I took psychology in college, and the fun part of that course was the experiments. Students in Psych 1A at Berkeley were required to be subjects in three experiments. I signed up for 30. They didn't say so, but each experiment was approved -- maybe even suggested -- by the professor in charge. I learned this later, when I was the grad student, and even later when I was the professor. Anyway, one of the requirements was that the grad student had to explain to the subject what they were studying and what they hoped to prove -- but only after the experiment was over (if the subject knows what they are testing, the experiment is ruined). In the flick, they called it "debriefing," and they didn't tell us viewers until the movie's end that no shocks were ever given to the other guy; he just moaned and groaned and screamed in "pain" as an actor in his part of the experiment. Me, I figured out pretty early in the movie that was likely, because they were testing the person giving the shocks, not the guy receiving them. I even guessed (correctly) that they used a double-headed coin for the coin toss to decide who would be the "victim."
The big insight was that they thought maybe 2% of the subjects would go all the way to maximum torture without refusing, and it was actually 60%. In the final faculty hearing where they were considering punishing the professor for even doing this, he came back with reports that the French ran the same experiment with the same 60%, and the Germans also did it with 80% reaching max torture (in that order, obviously for "dramatic effect," but I don't know if the stats are accurate or part of the fictionalization). The one subject who experienced extreme agony for just pushing the buttons and bolted from the experiment without receiving the debriefing came in to testify that he could have been in VietNam, and it bothered him that he might have participated in the My Lai incident and done the same. I expect that part was fictional; this flick was copyright 1975, while Americans were still agonizing over VietNam, and it is part of the soul-searching that went on at that time.
In the final scene, the professor's lady friend -- I didn't catch if she was another faculty member or not -- is accusing him of doing the same thing the experiment was doing to the subjects, that is, subjecting them to the same kind of torture they thought they were inflicting on the victim. He finally breaks down and weeps. At that instant I realized she also was doing what she accused him of -- see my "It Takes One to Know One" post -- subjecting him to torture resulting in emotional pain. Of course the movie never made that point. Screenwriters and fiction authors in general are (MBTI) Feelers like the lady friend doing this; they can't be expected to follow the data to its uncomfortable conclusion. Doctors sometimes must inflict pain to cause healing, and nobody faults them for that. She did the same thing, and became a hypocrite in the process. The professor was doing that to his experimental subjects in hopes of achieving scientific understanding of the process -- and we never would have seen that kind of sin raw in its face without such experiments, and (as he pointed out again and again) the subjects did agree to do it. The (fictional, and probably actual) subjects honestly thought they were advancing science, and (at least mostly) were not intentionally and sadistically torturing the victim. Maybe the Auschwitz torturers thought so too, but in their case the victims did not volunteer nor agree to it. That makes a difference. Otherwise, the border between these cases is pretty blurry.
The bottom line is still the Golden Rule: don't do to anybody what you wouldn't want done to yourself. Imagine yourself in that experiment. If hearing the screams of pain would give you conscience problems about continuing, then don't do that to other people. It's a moral absolute. My original take on it (refusing to participate) was spot-on. Race is irrelevant. Don't do that.
The flip side is "There but for the grace of God go I." We are all capable
of wicked sin, and only the grace of God keeps us back from it.
Earlier this year / Later
Complete Blog Index
Itty Bitty Computers home page