The first essay title captures the error: "AI Mistakes Are Very Different from Human Mistakes" (color emphasis theirs). The only possible "AI Mistake" is made by humans in supposing the output from an LLM engine has any meaning at all. The original words in the training material had meaning because they were written by humans who see words as representing ideas, but that connection was lost in -- nay, rather it never made it into -- the training. All the LLM engine ever saw was sequences of words (jumbles of letters); there never was any meaning associated with those words or sequences of letters, so the LLM engine never ever had any idea of what an idea or fact is, and no place in its program to put any such thing as an idea. Its output never had any connection to real-world objects and facts, so there is no possibility of error.
This difference is ontological, rooted in the very nature of the two different entities in view: Humans are first sensory engines, taking in touch and vision and smell and sound, and only secondarily a brain behind that, attaching particular sounds and images to ideas associated with real-world objects and facts. None of that ever existed in the LLM engine; all it ever sees is sequences of letters, which it learns how to reproduce in acceptable sequences, and not in unacceptable sequences. There are no facts in its universe, it has no place to model facts, and no sensation of different sights and sounds and smells and feelings to put together a model of the universe as objects that exist and do things independent of itself. It doesn't even have a "self" to be independent from. It is nothing more than a processor of meaningless sequences of letters.
When we speak of "mistakes" we are referring to a discrepancy between the representation of facts and the facts themselves. A human communication has meaning because the words have meanings: they represent real-world objects and actions or attributes of those objects, or else imagined objects or actions that differ only slightly from their real-world counterparts, like a horse with wings like a bird, or with a single horn like a rhinoceros (but on the top of its head like a goat), or a combination of the head and shoulders of a man plus the body and legs of a horse, stuff like that, which is imaginable because it is so like the real objects in the real world. The LLM engine cannot make mistakes like that because it has no internal representation of the real world at all. The only mistakes are humans seeing the similarity between what the LLM engine outputs and what we ourselves (and mostly other humans) would output as text or speech when describing the real world, and supposing the computer had a similar origin rooted in the real world.
Human beings are hard-wired (created) to want to attach meaning to random sensory perceptions, and to random splatters of ink on paper or pixels on a screen. It is a natural inclination, used when we teach a child the shapes of the letters of the alphabet and how to "sound out" each letter to form words, and we show them pictures of real-world objects that those words represent, and we expose them to the real-world objects represented by those pictures and words. The children learn to think in words, but never apart from the real-world objects denoted by those words.
At one time I was the USA delegate to an international committee, and the British delegate (our host) told a story over dinner about four guys from four different countries sitting around a dinner table as we were:
The Brit held up a fork and said "Take this fork. You Germans call it a 'Gabel' and you Spanish call it a 'tenedor' and you French call it a 'fourchette,' but we English call it a 'fork' because that is what it is!"

We were all fluent in several languages, so the guy's parochial attitude was very funny, but it illustrates the ontological connection we make between words and the real-world objects they refer to. There is no such connection in the LLM engine, so what it says cannot have any meaning at all -- except in the mind of the reader. When the LLM engine "hallucinates" (emits a sentence that does not correspond to real-world facts), that is not a "mistake" nor an error nor a "hallucination" as we understand those words, but merely the computer doing what it was programmed to do: emit a sequence of letters and words that fits the probabilistic data it was trained on. The mistake is the human reading those words and supposing they mean what we humans understand those words to mean, when they actually have no meaning at all.
"Intelligence" is making connections between ideas that are not obvious. The LLM engine has no ideas to connect, so there is no intelligence at all in what is called "AI", only a superficial connection in the minds of the readers attaching a meaning to words that was totally absent from the processing done inside the computer. It's what we humans are hard-wired to do, and we do it when we see something resembling text created by an intelligent human, but it's not intelligent behavior behind that text, and we have no right to suppose it could be either "true" (an accurate representation of real-world facts) or a mistake. It's just a random jumble of letters filtered by probabilistic sequence rules, rules determined by examining billions of words scraped off the internet but totally disconnected from any semblance of meaning.
If the output from one of the LLM engines appears to be intelligent, it's only because its training is very good at reproducing sequences of words it has already seen, or (as the developers readily admit) slight variations on the training data. The original words in the training data were written by intelligent humans as a factual representation of the facts of the real world (or perhaps, though much less often, a lying one, but the computer has no way of knowing the difference). It's those slight variations -- recall that the LLM engine has no concept of facts nor of objects in any such thing as "the real world," so it cannot lie or make a mistake or "hallucinate," it only reproduces slight variations on the training data, without any knowledge or understanding of any facts of the real world that these words may or may not correspond to -- those variations are seen by human readers as "hallucinations" or "mistakes," but they are not mistakes at all, just the computer doing what it was programmed to do. The original training data was written by humans to represent the facts of the real world (or not), but none of those facts made it into the training, only the sequences of letters making up (to the computer) meaningless words. Variations on meaningless words are no less (nor more) meaningless than the unvaried text.
So the LLM engine never makes any mistakes at all. The only mistakes are made by the otherwise intelligent people reading that output and pretending it is a representation of the real world which can be true or false. And that output is (intentionally) so close to what an intelligent person would -- and actually did, in most cases -- post on the internet, that it seems like the computer is intelligently describing the real world, when in fact it has no such idea in mind, and no mind to hold such an idea, only jumbles of letters filtered to resemble the intelligent human-written text in the training data.
In the first of the three essays, the authors believe the developers' hype instead of doing their own analysis. Because it looks like human-written intelligent text, it must be so, never mind that all the generated text so far resembles text that could exist among the billions of words scraped off the internet to become the LLM engine's training data. It doesn't take much cleverness to devise a test that does not exist in that training data (see for example my essay "A Turing Test to Defeat ChatGPT").
The second essay builds on Isaac Asimov's Three Laws of Robotics, which presuppose that the robots are capable of inferential thinking. If the only text the LLM engine ever generates is (slight variants of) the training data, the presupposition is false, and the best anyone can hope for is governmental regulation and a "Fourth Law," which itself presupposes that the AI engine does in fact have an internal model of the Real World, so that it makes sense to speak of "deception" as a function of the AI's inner workings, and not merely PR from the developers.
The third essay is the least coherent, imagining that the word "intelligence" in both SETI (Search for Extra-Terrestrial Intelligence) and AGI (man-made Artificial General Intelligence) gives them a common ground. It is nonsense, as the authors sort of admit. Fortunately, the developers of the LLM engine form of "AI" are going in the wrong direction, so AGI is not likely to happen in the next two or three decades, if ever. Perhaps it may become a problem for our grandchildren, who almost surely will not be reading any pontifications as old as today's essays.