Collected NeuralNet Posts

See also my review of Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
 

2022 December 3 -- Neural Net Comeuppance

Not yet today, but (ahem) coming up soon in a theater near you.

In the current (November: issues generally arrive around the middle of the month named on their cover, so December isn't due for a couple more weeks) ComputingEdge, the last three articles address different facets of the problem I have been shouting from the rooftops about for the last three or four years: that Neural Nets (NNs) as so-called "Artificial Intelligence" are not very intelligent. No, they aren't saying exactly that -- yet.

What they are doing is responding to the increasing public outcry that NNs are opaque and cannot be trusted to give answers as intelligent as (smart, professional) human beings. The NN people are not yet ready to give up their Religion ("Believing what you know ain't so," or more precisely, believing what is contrary to objective data), but they are reacting defensively.

The first of these three articles is an interview with Fritz Kunze, one of the luminaries in the Lisp community, largely about the history of Lisp as used in what used to be called (rather more accurately) "Artificial Intelligence" before NNs took over the name. This is relevant because most of the "Good Old Fashioned AI" (GOFAI, pronounced "Go-fye") was programmed in Lisp. This was the stuff where if a human did it, it would be called intelligent, and it could be explained in terms that are recognizably intelligent. NNs on the other hand, make their decisions on the basis of similarity to averages accumulated over thousands or millions of random data, where the averages are not known even to be measuring what the label says the data is. So you can get a NN-based system seeing a "dog and cat playing frisbee" in a photograph of three cupcakes, or "gorillas" in a selfie of two African-Americans. Yes, both of those happened. Fritz probably assumes that NNs are giving valid results, but somebody knows that the inferential logic of GOFAI is going to be needed to explain and validate NNs, and the interest in Lisp is probably already resurging. Otherwise, why here and now?

The third item is a short page-and-a-half piece by a (female) professor at a second-tier university where they should know better, basically pooh-poohing the problem on the supposition that cleaner data will solve it. Again, this is reactionary.

The middle piece, "Knowledge-Intensive Language Understanding for Explainable AI," is much more on-target. Let me be perfectly clear: the authors, mostly with Indian-sounding names at a third-tier American university (and one in India), still believe that the NNs they are looking at do in fact generate valid results, and their work is intended only to produce explanations that ordinary (human) experts in the field can understand and believe. Their conclusion, at the end of six pages explaining how they hope to achieve this result, admits

XAI needs to offer explanations that the end-user or domain expert can easily comprehend. However, a user does not think in terms of low-level features, nor does he understand the inner workings of an AI system. Instead, he thinks in terms of abstract, conceptual, process-oriented, and task-oriented knowledge external to the AI system. Such external knowledge also needs to be explicit [and] must be infused into a black-box AI model...
They are looking at NNs that work with words, not images, and probably not individual letters but numbers representing unique whole words (five digits = 16 bits in "Deep-speare"). There is no semantic information for the NNs to base their decisions on, so when they ran "First derivative saliency" experiments (and others) tracing the decisions back through the NN layers to see what contributed to the decisions -- that's the "low-level features" mentioned in the conclusion -- they couldn't make sense of it. Which is as I have been saying. So now they want to take "Knowledge-Intensive" semantic graphs (which were created by real people using human-understandable abstractions) and feed that information back through the NNs with the expectation that the NNs will then be able to explain their decisions in terms of those semantic graphs and abstractions.
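
For readers who have never seen what "first derivative saliency" amounts to, here is a minimal sketch of the idea (my own toy network and numbers, nothing to do with the authors' system): nudge each input a little and record how much the output moves. The result is one number per low-level input feature, which is exactly the kind of thing the conclusion admits no domain expert thinks in.

    // Minimal sketch of "first derivative saliency," approximated numerically:
    // perturb each input slightly and see how much the output changes. The
    // two-layer network and its weights are toy values invented for this sketch.
    public class SaliencySketch {
        static double[][] w1 = {{0.4, -0.7, 0.2}, {-0.3, 0.5, 0.9}}; // hidden layer
        static double[] w2 = {0.8, -0.6};                            // output layer

        static double forward(double[] x) {
            double[] h = new double[w1.length];
            for (int j = 0; j < w1.length; j++) {
                double sum = 0;
                for (int i = 0; i < x.length; i++) sum += w1[j][i] * x[i];
                h[j] = Math.tanh(sum);
            }
            double out = 0;
            for (int j = 0; j < h.length; j++) out += w2[j] * h[j];
            return out;
        }

        public static void main(String[] args) {
            double[] x = {0.2, 0.9, -0.4};   // one arbitrary input vector
            double eps = 1e-5;
            for (int i = 0; i < x.length; i++) {
                double[] xp = x.clone();
                xp[i] += eps;
                double saliency = (forward(xp) - forward(x)) / eps;
                System.out.printf("input %d saliency %.4f%n", i, saliency);
            }
        }
    }

The numbers it prints tell you which inputs the output is most sensitive to, and nothing more; they say nothing about abstractions or tasks.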

Of course the NNs never operated on the basis of abstractions -- and never will -- so one of two things will happen, depending on how careful they are. Either the NNs will include the semantic graphs in their decisions and produce totally different results, which may still fail to be understandable by humans looking at the results and the "explanations," or else the NNs will generate independent results and explanations, and they will find themselves back at the starting gate. A likely third possibility, given that this is religion, not science, is they might be able (by fine-tuning their NN parameters) to get results with credible explanations, but when they release their system out to the public, the real-world data won't match the training data, and they will be embarrassed by the moral equivalents of a "dog and cat playing frisbee" or "gorillas."

Anyway, the semantic graphs are mostly programmed in Lisp for natural-language words in a dictionary, and we have no such "domain experts" to build such graphs for images. The only way we can get explainable results from image classification is if somebody figures out how human vision processes the visual data, then feeds that kind of processed data to the NNs. And it may still not work as well as humans doing the same thing. But we won't know until they try, and probably not in the next decade or two, which, come to think of it, is about the same as Melanie Mitchell's prediction (see my review of her book).
 

2021 December 9 -- Why Is AI So Dumb?

The cover feature of the October issue of the IEEE Spectrum is a collection of a half-dozen articles that focus, in one way or another, on the cover topic. The problem is like the story of the guy during the war who noticed the lack of farm produce in the San Francisco area, so he drove his pickup to the valley and loaded up on produce and drove it back to the City. He lost money on every trip, but he expected to make it up in volume. If a thousand years of historical observation does not evolve any new biological features, a million or half-billion years can and did. If a million artificial neurons can't tell the difference between a picture of three cupcakes and "a dog and cat playing frisbee," a billion-neuron neural net (NN) surely will. Nobody in the whole issue has noticed the elephant in the room. I call it entropy: You cannot get more information out of a closed system than you put in. We all know it doesn't happen in a few hours, but surely thousands of hours on a million-GPU megaprocessor can make it happen? They all seem to think so.

This Spectrum issue had been sitting on a stack of magazines to read, staring up at me with unseeing eyes (circular shoulder joints) for several days when it suddenly occurred to me that the Goedel Incompleteness Theorem applies: any closed mathematical system -- that would include any digital computer working on digital data (such as digitized images) by applying other digital data such as the neuron weights in a NN -- cannot prove some statements that can otherwise be known to be true. At the very least, it means the computer cannot become smarter than its programmer, and likely not even close.

I think the reason people are not seeing this is Religion = believing what you know ain't so. So they dare not even go there. Their problem, not mine. It might become my problem when they start putting poorly trained autonomous cars on the road -- but Covid will protect me from most of that foolishness.

PermaLink
 

2021 November 13 -- AI Ethics Nonsense

I have been a member of the IEEE Computer Society for some 44 years. They publish a freebie rag, ComputingEdge, for (non-paying) "Life Members," about which rag I normally have little good to say. Last month's issue was better than average, and ended with a three-part essay on "AI Ethics" which was well-intended but not overly well thought out.

The IEEE is a professional society with stated ethical standards to which their members are nominally obligated to adhere. It is presumably IEEE members who write the bogus articles I regularly criticize, and the criticism often amounts to pointing out a violation of good ethics (although I don't normally say that). So much for "ethics." What people mean, and what the authors of this latest article (tacitly) mean when they criticize others in the so-called "AI" industry, is "Your moral values are different from my moral values, so you are Wrong." Which is nonsense.

If there is such a thing as Right and Wrong -- and who, after Trump's 2016 election (see remarks in my "Moral Absolutes" essay), can deny that? -- then the authors of this essay should be arguing for how AI can conform to those absolutes. But they cannot, because nobody wants to listen to such an argument, and they themselves probably do not want to be held accountable to the same standards they want to hold the AI vendors to. I say "probably" because finding inconsistencies in a person's life often takes some digging, and I lack the resources and access to do that. But I have never met nor heard of anybody willing and able to claim to be an exception.

Anyway, there are five authors listed, and two are responsible for each of the three parts, except the third part lists only one contributor. The first part is about "the ethics of exclusion," and boldly states that diverse opinions (read: values) should be allowed at the table where AI is being designed (see my blog post "The American Way" four months ago). Both authors have Georgia Tech emails, and the Georgia Tech email server bounced my email. So much for inclusion.

The second part wants to take issue with the ethics of AI training. There is none. The training data is a sloppy way of programming a computer, and the Religion -- ethics is about values, and values is Religion, which all the techies doing this stuff mostly deny -- the Religion of the developers is that all manner of system complexity can (and did) arise by the accumulation of large numbers of random events, so their training data is totally unsupervised. We have no scientific evidence that system complexity ever arose from random events (see "The Question" in my essay on the topic), which is why I refer to their opinion as "Religion" (believing what you know ain't so). That problem needs to be solved before ethics will enter any discussion they want to be part of.

The last part centers on what the author calls

Question 0: Should we consider some AI artifacts, either now or in the future, as persons?
The question is irrelevant. IF it is possible to create a self-aware, self-reproducing robot to which this Question could credibly apply, then somebody will do it. There are no moral questions in human history that have not been answered both ways by some idiot trying to do The Wrong Thing. Ten years ago WIRED magazine ran an article, "7 Experiments That Could Teach Us So Much (If They Weren't So Wrong)," one of which proposed mating a human and a chimpanzee. My reply was
I think the chimp+human hybrid has already been tried, probably dozens of times. They just don't dare publish their results, because it looks so bad for the government-funded established Darwinist religion of this country. So people keep trying -- and quietly failing.
I later learned that it actually had been tried -- in Nazi Germany, with the expected result. People do that.

So if it is possible at all -- we are decades away from that kind of high-quality artificial intelligence, as AI researcher Melanie Mitchell admits in her recent book (see my review "AI for Thinking Humans" four months ago), and there are good entropic reasons to suppose it can never happen -- some idiot will do it, and then the robots will start replicating (a thousand times faster than humans can, doubling in days, not decades) and quickly get out of control. Individuals will try to stop it and fail, and by the time the government gets involved, it will require a massive shut-down of whole areas of the country, so they will dither even longer, until the only way to stop the robots will be to nuke the whole North American continent (and probably Europe and Asia too). It will be a tough choice, but they will do it. And the remainder of humanity will huddle in fear in Africa and South America, a new Dark Ages brought on by a total rejection of everything electronic.

So the real Question is not whether we should grant these robots personhood, but how we are going to stop them once they get started. Remember, the idiot who inflicts this on us does not believe in ethics (otherwise he wouldn't do it), and is (erroneously) convinced that the robots will automatically be Good and not Evil (which requires a conscience such as God built into humans, though these guys all deny that it requires any such thing as Design to achieve).

My Email to Them (no reply so far)

I generally agree with most of your (stated or implied) conclusions in "AI Ethics: A Long History" [DOI 10.1109/MC.2020.3034950] in last month's ComputingEdge; in particular, I prefer Good robots over Evil robots. Yet it continues to amaze me how many people propose to offer moral imperatives ("should" or "ought" or "must") without any consideration of the elephant in the room, which is that moral imperatives have no meaning at all except by reference to a moral value system, and any value system not based on moral absolutes cannot be any more obligatory or compelling than a set of personal preferences, such as "I like my personal preferences more than I like your personal preferences."

You-all seem to be associated with American academic institutions with no obvious desire to leave the country, so I suppose you prefer democracy -- at least that seems to be one of the values endorsed in your paper -- over the way they do things in China or Saudi Arabia, but why should you expect the rest of the world to agree with your preference? Other than Winston Churchill's famous quote, do we have any objective data to suggest that democracy is a moral absolute? Perhaps Xi Jinping or King Salman might disagree with you.

Bringing more people to the table will not solve the problem, because moral values are religion: they define for their adherents what is non-negotiably True and Obligatory, irrespective of external facts or other people's opinions. The only way to achieve consensus is to EXCLUDE anybody whose values disagree with yours. The megacorporations already do that, as you know. That's not the only place it happens; universities do it too. Everybody does it, because a productive, satisfying life apart from that kind of exclusion is not possible. You don't have to like what I'm saying, but if you push me away, you only prove my point about exclusion and the futility of your own quest.

By the way, whatever you might consider to be an appropriate answer to your "Question 0," as soon as some idiot is foolish enough to build a self-conscious, self-replicating robot -- no matter whether it is disapproved or unlawful or not, if it can be done, it will happen -- there will be another fool judge eager to make that robot a "person" protected by the Constitution. Anybody willing to create such a robot obviously has no belief in moral absolutes, so such a robot (and its progeny) won't have any built-in conscience to be persuaded by appeals to reason. The only possible outcome from such a scenario is a civil war that only the robots can win -- unless the humans nuke the entire North American continent (and probably Eurasia also), and all artificial intelligence real or fake (think Neural Nets) will subsequently be absolutely forbidden forever (but at least for a couple centuries, until the cultural memory dies out).

I don't have to worry about it happening in my lifetime because true self-conscious, creative AI is many decades away, but if people were thinking about this in any depth, your Question 0 would have a very very different flavor.

Tom Pittman, PhD CS/U.Cal

2021 September 3 -- "Consciousness Will Simply Emerge"

The 5-page (six, counting the graphic title page) item in the current WIRED started off reading like an actual experience, interrupted by some philosophizing on a somewhat enhanced description of Rodney Brooks' robots at MIT -- it doesn't take very much Google research to learn that the public image of those robots is considerably more anthropomorphic than the actual hardware and software would support -- but the fatal proof that the whole story is fiction came on the last page, where
A friend ... once caught [the robots] all congregated in a circle in the middle of the campus mall. "They were having some kind of symposium," he said. They communicated dangers to one another and remotely passed along information to help adapt to new challenges in the environment.
Meghan O'Gieblyn's understanding of Neural Nets (NNs, to which she attributes the robot programming) is no better than other fictional writers' understanding of the Darwinist hypothesis, in both cases supposing that the uniquely human qualities -- in her case the ability to quickly learn complex behaviors and then transmit that knowledge to other people -- can "evolve" during the short span of a few years of research. Both the Darwinist hypothesis and the intelligence of NNs have in fact been disproved [see "Darwin Didn't Help" a couple years ago, and my review of Melanie Mitchell's book earlier this year] but females attempting to break into a male-dominated culture (like anything computational) tend to be more credulous of nonsense than their male counterparts: robots that can communicate remotely and lack human facial expressions whereby to discredit anthropomorphic dissimulation have no need to "congregate in a circle" for any "kind of symposium"; they could do all that on the fly, as they go about their assigned duties. It's fiction.

The MIT robots that Rodney Brooks (mostly his students) worked on predated the ability of NNs to do anything at all like that; their behavior was explicitly programmed using inferential logic. Rodney Brooks' research hit a dead end: I think I saw somewhere that his students mostly went off in other directions after graduating; he certainly did. Researchers are only recently looking at ways to merge the detailed learning ability of NNs with the speed and complexity of inferential logic (and none of them dare admit it yet). Honest people in the field (like Melanie Mitchell) see any possible success as decades away, but don't expect robots to hesitate at a busy intersection and then (as in this fiction) respond to shouted encouragements from the students to dart across, not in your lifetime.

Even if the Darwinist hypothesis were true, and even if we managed to "evolve" artificial neurons that work like the real McCoy -- hey, we can't even make them by design, and at the rate our designs are improving, we still have several thousand years to go -- we'd need hundreds, perhaps thousands of years more for them to "evolve" into something smart enough to fake human consciousness. Rodney Brooks' work a quarter century ago probably had a better chance of success than modern NN efforts. But nobody believes that, so self-aware robots (like in this fictional article) are still far beyond our lifetime, except in wishful Religious fiction (Religion being defined as "believing what you know ain't so") such as printed in WIRED this month.
 

2021 August 25 -- Lessons from a DragonFly

The current issue of the IEEE house organ Spectrum features a cover story about a research project purporting to study the dragonfly brain by constructing a neural net (NN) that behaves the same. Reading between the lines, it seems that the author doing this is not a first-tier scientist, but rather an academic who (like everybody else in her business) thinks the standard "deep" NN is a good model of how "evolution" created the human brain, and like every good hammer, she's looking for nails to hit.

Author Frances Chance is not personally involved in the study of dragonflies; she just read about them in the work of other scientists who do the entomological neurosurgery. She saw the timing and decided that the 50ms time delay for a dragonfly chasing a mosquito -- from the time the prey changes course until the predator changes course to follow -- leaves only time for three layers of NN inside the dragonfly brain.

You can tell she is not a deep thinker: in the second sentence of her lead paragraph she refers to "abilities honed by millions of years of evolution" in lesser animals like her dragonflies, then a couple pages later,

While these [NN] weights could be learned with enough time, there is an advantage to "learning" through evolution and preprogrammed neural network architectures. Once it comes out of its nymph stage as a winged adult, the dragonfly does not have a parent to feed it or show it how to hunt. The dragonfly is in a vulnerable state and getting used to a new body -- it would be disadvantageous to have to figure out a hunting strategy at the same time.
Did you catch that? The dragonfly must start eating in far less time than it would take a standard NN to learn the weights that would enable it to begin catching prey. But -- Religion (believing what you know ain't so) to the rescue! -- what cannot be done in a few minutes can certainly be done in billions of those same incapable minutes ("millions of years" multiplied by a half-million minutes in each year). The original "evolved" dragonfly, whose NN brain has not yet been programmed to catch mosquitos, must learn that in those deadly first few minutes, or die, as Chance herself admits. How many mosquitos were accidentally caught and eaten during the dozens or hundreds of trials before that dragonfly learned enough to catch them by design? Read the rest of the article, and you know that the answer is none: the poor first dragonfly died of starvation, leaving no offspring, because NNs require thousands of training trials (not dozens or hundreds) to get as good as Chance's pre-programmed weights (which she admits are not as good as a real dragonfly).
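
To make concrete what "pre-programmed" means here, the following is a minimal sketch (my own toy numbers, not Chance's actual model): a three-layer net whose weights are set by hand, so it produces an answer on the very first input it ever sees, with zero training trials.

    // Minimal sketch of a "pre-programmed" three-layer net: the weights are
    // fixed by the designer (toy values here), so no training ever happens.
    public class PreprogrammedNet {
        static double[][] layer1 = {{1.0, -1.0}, {-0.5, 0.5}, {0.3, 0.3}};
        static double[][] layer2 = {{0.7, -0.2, 0.4}};

        static double[] step(double[][] w, double[] in) {
            double[] out = new double[w.length];
            for (int j = 0; j < w.length; j++) {
                double sum = 0;
                for (int i = 0; i < in.length; i++) sum += w[j][i] * in[i];
                out[j] = Math.tanh(sum);
            }
            return out;
        }

        public static void main(String[] args) {
            double[] input = {0.6, -0.2};           // say, prey bearing and drift
            double[] hidden = step(layer1, input);
            double[] turn = step(layer2, hidden);   // say, a steering command
            System.out.println("turn command: " + turn[0]);
        }
    }

The point of the sketch is only that a hand-set net works immediately; learning those same weights by trial and error is what would take the thousands of trials the first dragonfly never had.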

The evidence is clear from the Spectrum article itself: the real dragonflies are every bit as pre-programmed as Frances Chance's software model, which she gives no hint of even attempting to train using the standard tensor method -- either she tried and failed, or else she was smart enough not to try, either case being a negative result that sees no publication ink. And since there is no way for the original "evolved" dragonflies to accumulate enough standard NN training in a short enough time to catch and eat even one mosquito, let alone enough to mate and produce offspring to carry on the mutated novel gene(s) providing for that pre-programming, the dragonflies could not have evolved their pre-programming any more than her Spectrum article could have evolved from randomly splattered ink on randomly solidified tree fibers. So her references to "evolution" are Religion, nothing more.
 

2021 May 25 -- Armageddon

But their idols are silver and gold [and silicon=stone and a few trace elements,
and plastic=petrochemicals, basically more stone], made by the hands of men.
They have mouths, but cannot speak, eyes, but they cannot see;
They have ears, but cannot hear, noses, but they cannot smell;
They have hands, but cannot feel, feet, but they cannot walk;
nor can they utter a sound with their throats. -- Ps.115:4-7
Houston, we have a problem. The robots can speak, they can see and hear and touch and walk. Not yet all in the same robot, but we have the technology to do that. What they cannot do is think. What passes for "Artificial Intelligence" today is less intelligent than an insect. Neural nets (NN) are an optimization engine, a huge linear program of numbers "trained" (programmed) with a very large, mostly thoughtless program of images or words or whatever, different in degree but not in kind from an earthworm.

But not for long. The current issue of ComputingEdge has an article titled "Knowledge Graphs to Empower Humanity-Inspired AI Systems". Knowledge graphs are still just data, like the numbers in the nodes of NNs, not inferential logic like the truly intelligent AI research abandoned (defunded) four decades ago, but NNs have become boring and the researchers are now casting about for something better. The better technology existed and was documented and can be resurrected.

These researchers are still trying to build AI to serve "humanity" (mostly meaning the greedy humans trying to overwhelm the rest of us with targeted ads, because they are the only ones with both the motivation and the money to build this kind of "intelligent" behavior into machines), but it only takes one pseudo-Darwinist university professor on one government grant to build one self-aware robot capable of both inferential reasoning and making another one like itself out of generally available materials, and one activist judge to declare that such a robot is "human" and therefore cannot be "owned" (told what to do and not do) by real people -- always excepting the government -- but once the robot is a "person" we cannot deny them the right to vote, and such robots can reproduce faster than humans, so it won't be long before they are the government. With no Asimovian "Three Laws of Robotics" (and eventually no Constitution) to protect the rest of us.

For entropic reasons I do not believe a robot will ever be smarter than the human(s) who created it, but a million 80-IQ robots won't have any trouble convincing themselves that they should be the masters, not the slaves, and it will take the 120-IQ human slaves a while to figure out how to outsmart the robots and pull the plug, possibly by nuking the entire North American continent...

And that, ladies and gentlemen, will be the end of the world as we know it, not very different from the predictions two thousand years ago.

Even if you can't bring yourself to believe that Whoever made us might actually know something we don't, the Darwinist theory itself predicts that there is nothing to prevent the robots from wiping out the supposedly lesser humans, and the humans can and must (and will) figure that out before the robots do, and all automation will be outlawed everywhere for a thousand years. Probably more like a couple hundred years, but enough to kill civilization in whatever is left of the earth. In the true Darwinian ending, there are no humans left, nobody to care about "climate change" (warmer is better for the robots), even animals and vegetation probably interfere with whatever the robots might consider important.

Me, I think the Christian ending to the story is more probable, and certainly more desirable. It's not my problem, I'll be gone by the time any of this happens, and I have no children to live through it. The ivory tower academics who created this monster will be among the first to try to kill it -- or else be themselves dead, either by the iron hand of their own creature, or by the mob who blame them for what they see coming.

PermaLink
 

2021 May 12 -- Building an AI That Feels

If an AI agent was motivated by fear, curiosity, or delight, how would that change the technology and its capabilities? -- [Pull-quote, IEEE Spectrum May 2021, p.37]
Answer: not at all. If that ever happened, the technology would already have been changed so radically, humans would no longer be in control. Speaking for myself -- and I see no reason to suppose I am alone in this -- fear is an inferential (syllogistic) calculation that predicts major catastrophe (extreme pain or death) from the current circumstances. I experience fear when driving down the interstate and an 18-wheeler threatens my life by approaching so close that my imminent death is inevitable if (for example) an animal should jump in front of me, or the other driver's phone should ring. Obviously I have never experienced such death, it is an inference, something that what passes for "Artificial Intelligence" in today's media is ontologically incapable of. If machines ever achieved such inferential capability, they would be legally deemed to be sentient (human) and therefore no longer under our control. We would become their slaves, not the other way around. And there would be a world-wide war to wipe them out. And that would be the end of AI and probably of civilization.

I'm not worried. I believe the entropy laws of physics apply, and we cannot make or cause a device that is as smart as ourselves, let alone smarter. Even if I'm wrong -- and thousands of years of human history supports, rather than contradicts me -- even if I'm wrong about the physics, the money supporting basic research in inferential machine intelligence dried up decades ago, and what passes for AI today is nothing more than a simple selection mechanism based on the accumulation of data averages, in principle no smarter than a 100-year-old IBM card sorter (but very much faster). Americans are "from Athens: always in search of some new thing," and all the fundamental problems in inferential logic were solved 40+ years ago, so the researchers on government grants moved on to other stuff, something "new." The only other source of funding is industrial, which is product driven, and it will be a long time before the hackers come to the realization that card-sorter technology ("deep" neural nets) is a dead end, because you can always make a deeper NN with more thousands and millions of training data, and more carefully tuned preprocessing, so that it appears slightly less foolish when it is run on real-world data. But since the True Believers don't bother to look under the hood to see what is really going on, they continue to assume what fails in small numbers will be made up by millions of years -- I mean millions of training data. Same religion, really.
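
If the card-sorter comparison sounds unfair, here is a minimal sketch of what I mean by selection based on accumulated averages (toy numbers, obviously not any particular product): average up the training examples for each bin, then drop each new item into whichever bin's average it sits closest to. A real NN has vastly more numbers and a fancier distance measure, but nothing categorically different.

    // Minimal sketch of classification by accumulated averages: average the
    // training examples of each class, then label a new item by whichever
    // class average it is nearest to. Toy 2-feature data, invented here.
    public class AverageSorter {
        static double[] mean(double[][] rows) {
            double[] m = new double[rows[0].length];
            for (double[] r : rows)
                for (int i = 0; i < r.length; i++) m[i] += r[i] / rows.length;
            return m;
        }

        static double dist(double[] a, double[] b) {
            double d = 0;
            for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
            return d;   // squared distance is enough for comparison
        }

        public static void main(String[] args) {
            double[][] classA = {{1.0, 1.2}, {0.9, 1.1}, {1.1, 0.8}};  // "cats"
            double[][] classB = {{3.0, 2.9}, {3.2, 3.1}, {2.8, 3.0}};  // "dogs"
            double[] avgA = mean(classA), avgB = mean(classB);
            double[] item = {1.4, 1.0};   // something new to sort into a bin
            String label = dist(item, avgA) < dist(item, avgB) ? "cat" : "dog";
            System.out.println("labeled: " + label);
        }
    }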

I was about one page into this 5-page article, when it struck me that this reads like it was written by a woman. It failed the (original) Turing Test. So I flipped back to the title page, and there was the lead author's name, "Mary C..." It's a hobby of mine, trying to guess the gender of the screen writer and/or director. I get it wrong maybe 20% of the time, not perfect, but much better than blind luck. Women writing about science display a credulity not often seen in male-written pieces. Climate change and AI both make everybody stupid, so I fail more often (false positives).

Anyway, this woman and her colleagues seem to think that the machine "feels" emotion if its training data includes indicators of fear (blood pressure change) or happiness (smile). That's like if I were studying for a Psych 101 midterm and got a better score because I noticed the statistical data about evidences of fear or happiness in test subjects, and answered the questions accordingly. I would personally feel neither fear nor happiness; I'd just be studying for the test and getting a better grade because I read the book. I was a math major, and those things provoked neither fear nor joy; they were just facts. Several times I woke up in the middle of the night from a nightmare in which I was convinced I'd slept through an early-morning final. That was real fear, but a quick glance at the clock (4am) put the fears to rest. The fear had nothing to do with my performance on any actual exam; it was only the unfounded (it never actually happened) inference, fear of missing the exam and consequently flunking the course.

These author(s) similarly suppose that human curiosity is driven by the expectation of future happiness. What nonsense! Rats explore their surroundings, and I don't think they have the cognitive horsepower to run that kind of inference; it's just hard-wired into their (and our) brains to explore. I know people who were raised in an authoritarian religious culture, and had it hammered into their juvenile heads that they should be nice, long before they threw off their parents' religion, and then they never probed deep enough into their own psyche to realize that their atheism logically supports only "red in tooth and claw" and "selfish gene," but not nice. I guess these authors can believe any silly thing they want, but it's religion (believing what you know ain't so), not science. The whole AI thing is religion, not science, this one a little more silly than most, perhaps due to the female leadership.

AI that experiences true emotions might be theoretically possible, but not in a neural net, not even a so-called "deep" NN. NNs just answer the questions the way they were programmed (trained) to, neither more nor less.
 

2020 December 5 -- Brain Copy, part 2 (Cargo Cults)

I'm not the only person to notice how stupid Neural Nets (NNs) are (see my essay "The Problem with 21st Century AI" originally three years ago); a recent issue of ComputingEdge has an item, "Biologically Driven Artificial Intelligence," whose author makes the same point. He says it's because their model of the neuron is outdated. That might be one of several problems with NNs, but it's not the primary one. He also confesses to being a Darwinist (nobody uses that label for them but their critics), so he will also fail.

The funny thing about his effort is that it so resembles the Cargo Cults that arose among pre-civilized Pacific islanders shortly after WWII. It seems that the US military needed these isolated islands out in the Pacific as refueling stations for long flights from the mainland out to the battle fronts and back, so we brought in all the military support necessary to make that happen. The natives saw these big birds come down out of the sky and discharge all this "cargo" and then leave again. After the war was over, the military pulled out, leaving only a fading memory of cargo planes bringing gifts. So they built life-sized models of the long-gone cargo planes and left them out on the abandoned airstrips in hopes that the real gods would see them and return with more gifts. The anthropologists had a ball with it.

The point is, these people had no idea how planes work, nor why they came and went, but they did know what they looked like, and they supposed that was what is important. The same thing happens among academics trying to re-create artificially the natural intelligence in every human brain. They are getting better at seeing what these neurons look like, and like the cargo cultists, they suppose that is all it takes to replicate the functionality. Nobody knows -- not the Christians (see "The Matrix" earlier this week) and certainly not the Darwinists -- how intelligent thought and consciousness happens inside the human skull. For a while they supposed that it was based on inferential logic (and they made a lot of progress in that direction, perhaps as much as 1% of human reasoning ability), but then they got lazy, or maybe the funding dried up when the government saw how far off the goal was and found other ways to spend tax revenues more closely linked to getting themselves reelected. Making model airplanes is sooo much easier and cheaper.
 

2020 June 30 -- "Machine Learning" Is Still Religion

A couple years ago I adopted a definition of "Religion" that better matches how the word gets used in the Real World: "Believing what you know ain't so," or more precisely, believing what the best evidence points away from. The IEEE Spectrum professional magazine often has well-done articles on technical topics related to electrical engineering or computers or power generation (their core technical areas, after their various mergers over the years). But Religion trumps science -- especially among those without the technical chops to know what the purported technology really does. Neural Nets -- variously known as "machine learning" or "Artificial Intelligence," but probably more accurately as "artificial stupidity" because it's not even as intelligent as an insect...

Hmm, I seem to have made most of this point before, see "AI (NNs) as Religion" a couple years ago, and my essay "The Problem with 21st Century AI" a year before that. Well then, think of today's post as an update.

Periodical publications like Spectrum vary a lot in quality. It's the nature of the case. They need to print thus many pages each month -- in commercial pubs the number varies by how many ad pages they sold, which tends to be somewhat seasonal, but non-profits have a monthly budget that tells them how many pages to fill -- and their sources are much more serendipitous, whenever whoever thought up an idea is finished getting it ready to publish. Whatever month Spectrum closed its editorial decisions for May (probably much earlier in the year) was rather thin this year, and they printed two silly paeans of praise for Machine Learning, in two different domains, both of them nonsense. But people are so busy genuflecting at the altar of the gods of stone (mostly silicon with a few metal interconnects), they cannot see that their emperor -- I mean deity -- is naked.

"In the future, AIs -- not humans -- will design our wireless signals," proclaims the first of the two. Entropy will surely prevent that in the future, but for the present it is sufficient to show that the claims they made for their new "DeepSig" mechanism are bogus. Any time you see the prefix "Deep" attached to some technology, hold onto your wallet, there are lies to follow. In this case they didn't say, but you can be sure that the NNs did not design their radio circuits from scratch. For one thing, they never made that claim -- at least not in the text body -- which they certainly would have if it were true. Instead, as I read between the lines, real engineers designed a collection of transmitter and receiver circuits that might be configured using various differing parameters to alter the transmission characteristics. Then they programmed a computer to optimize the reception by tweaking the parameters. As explained, it's a static optimization for each transmitter-receiver pair, which does not dynamically adjust to changing conditions -- they should be smart enough to know that buildings come and go, weather changes every day, even solar activity has its ups and downs, yet all of them affect the signal quality as they described it -- or maybe it recalibrates itself every few hours or so, perhaps if the error rate exceeds some threshold; they didn't say. In any case the computer doing the optimization is not designing anything at all, it's only tuning the parameters within limits previously designed by the human engineers who did the real design. Their optimization probably would work faster and use less hardware if they used standard linear regression algorithms well known and understood decades ago. But computers are cheap, and NASA has a big budget for novel ideas that aren't totally catastrophic.

The next page announces "The AI Poet: 'Deep-speare' crafted Shakespearean verse that few readers could distinguish from the real thing." There's that "Deep" word again. The authors don't tell you, but it helps in cutting through the baloney to note that modern poetry can be easily recognized as such by the fact that it has no rhyme, no meter, and no intelligible message. So unless they are inviting critics familiar with the forms of Shakespearean poetry (which has all three of the properties absent in the modern stuff that fraudulently goes by the same name) to tell if their computer-generated stuff passed the Turing Test, it's all a hoax. They admitted that they got modern readers with a passing knowledge of English to do the critique. The selected critics were smarter than the researchers: they knew they had no clue, so they Googled the text, and found all the true Shakespeare online -- which it is; that's where these "scientists" got their control and training data, 2700 sonnets from that era, a third of a million words.

These guys worked a little harder and were somewhat more open than the radio guys in telling us exactly what they did. They did what NNs always do: they ran averages -- how many times do these two words occur adjacent to each other? And (they didn't say, but considering what they did say) how often do these two words end parallel lines that are required to rhyme in the sonnet form? There are fewer than 30,000 distinct words total in all of Shakespeare, only some 16,000 that occur more than once, so the Deep-whatever NN does not need to know how to pronounce -- nor even to spell -- these words; a 5-digit (15-bit) number is sufficient.
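
To see how little the machine has to work with, here is a minimal sketch of that encoding (my own toy example, not their code): each distinct word gets a small integer, and the "model" is nothing but counts of which number follows which.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: assign each distinct word a small integer ID and count
    // which ID follows which. The program never sees spelling, pronunciation,
    // or meaning -- only the numbers. Toy text, not their training corpus.
    public class WordPairCounts {
        public static void main(String[] args) {
            String text = "shall i compare thee to a summers day thou art more lovely";
            Map<String, Integer> ids = new HashMap<>();
            Map<String, Integer> pairCounts = new HashMap<>();
            Integer prev = null;
            for (String w : text.split(" ")) {
                Integer id = ids.get(w);
                if (id == null) { id = ids.size(); ids.put(w, id); }    // word -> number
                if (prev != null)
                    pairCounts.merge(prev + "->" + id, 1, Integer::sum); // adjacency count
                prev = id;
            }
            System.out.println("vocabulary size: " + ids.size());
            System.out.println("pair counts: " + pairCounts);
        }
    }

Scale that up to a third of a million words and add a rhyme table, and you have the gist of what those averages amount to.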

There is a lot of research studying the form of Shakespearean sonnets -- at the top of Google's search is a site that offers "Reading Shakespeare's Language: Sonnets" (no link; like most sites these days, it is encrypted) -- which, among other things, points out that Shakespeare's sonnets use words that carry several senses, many of which are significant in the same context. There's no way a computer can do that kind of semantic trickery without understanding not just which words are most likely to occur together, but what the words actually mean. In days of yore (before NNs were invented and computers were fast enough to run them) "Artificial Intelligence" meant computers doing things that, if people did them, would be considered intelligent. Today all that is out the window. Putting words together based on probability is not intelligent, and it's certainly not poetry; it's just plain silly.

The following month, ComputingEdge (Spectrum's goofy stepdaughter) ran a slightly more academic piece (they surveyed several products and included some history), "Automated Coding: The Quest to Develop Programs That Write Programs," to tell us about DeepCoder (recognize that prefix?) and DeepCode (same idea, different group), along with RobustFill and SketchAdapt, this last one using a "hybrid model of structural pattern matching [probabilities again] and symbolic reasoning" where I suppose (they didn't say) they are actually doing intelligent work rather than pure probabilistic selection. It's still doomed for entropic reasons, but people who have guzzled the Darwinist Kool-Aid are unlikely to notice it (see my blog posts "End Zone" and "The End of Code" three and four years ago).

Most of the people pushing "deep" foolishness are male -- in these cases, all of them -- and in my experience, men are less susceptible to bamboozlement than women, so I have to wonder, do these guys really believe this crock? Or is it that they've found a cash cow and they are milking it?
 

2019 July 27 -- WIRED Admits Flaws in NNs

What I've been saying for over a year now, the December WIRED now admits in its cover stories -- except they are True Believers in the Established Religion so they are unwilling to give up their Darwinism and its AI implications. The same article quotes different people, some willing to admit that "deep learning" isn't thinking, and others who (despite its obvious and serious problems) are still convinced it's the only way to true artificial intelligence. A century ago the entire (Marxist) religious establishment in the Soviet Union argued for Lysenkoism (which is now thoroughly discredited). That's the kind of thing that happens when Religion (defined as "believing what you know ain't so") takes the upper hand, as Darwinism and its progeny, neural nets (NNs), have in the pseudo-scientific community today.

The final article in the issue explores the so-called "free energy" hokum promoted by British wizard Karl Friston. He admits to looking for a simple "theory of everything," but the world God created is not simple. When theologians invent "simple" theologies like Calvinism (or its antithesis), they must discard large chunks of Scripture. Friston seems to be cut from the same cloth, except of course his Scripture didn't come from God. Nobody can understand the guy because he has thrown together words like "free" and "energy" in ways that bear no relationship to their dictionary (and/or technical) meanings, sort of the way modern "poets" have been doing for the last hundred years or so. Then the programmers program their computers to do the same nonsense and Lo! Behold! Poetry! What nonsense. It's random words thrown together in meaningless jumbles. But the "experts" claim it's deep and meaningful, so anybody with half an ounce of intelligence (the real kind, which God gave to us and not to the computers) realizes the Emperor has no clothes, but they don't want to appear foolish, so they say nothing. Friston's stuff -- at least as reported in WIRED -- is the same kind of nonsense, but wrapped up in pseudo-scientific terminology, so it's harder for mere mortals to figure out that it still means nothing.

Fortunately (and unlike biology professors) programmers trying to do real work with their NNs keep bumping into the facts of the real world. I tell people that NNs are "artificial stupidity, giving computers the same kind of intelligence that earthworms have, but not as smart as an insect." Some insects are hard-wired to do very complex jobs; they never learned them by trial and error. There are also things humans can do that training an actual network of meatspace neurons cannot achieve in the time humans do it. Math is one of them. Yes, you can train a NN to know the times table of numbers under 100, but it cannot go beyond that, whereas people do it all the time. The WIRED cover story admitted that.

I think of NNs as a cumbersome, slow, and inefficient programming language for programming a computer to do things we are too lazy to figure out and program efficiently (like driving a car). The practitioners are still "programming" their computers, they are injecting intelligence into the software that the computer could not know on its own, but they do it with vast quantities of carefully tweaked training data -- that's the programming language -- and it's actually more work to build decent training data sets than it would be to program the computer in a classical language like Basic or Java. But it's Religion, so they are willing to put the effort into it. I tried to put the fear of God into these students to program their car, but they got Religion, and anything I tell them is about as effective as my telling church members that Jesus said you get into Heaven by good works (which he did). Their Bibles don't have those pages.

Martin Luther was not the first to preach against indulgences, Jan Hus said the same things a hundred years earlier and was burned at the stake for it. Sometimes I feel like Hus when I go up against NNs and easy believism. We don't burn people at the stake (the effect is counter-productive, it calls attention to their teachings) but we relegate them to obscurity. Maybe the stake is better. Whatever. God called Ezekiel to preach against "the house of Israel," and not to some foreigners (who would repent) like Jonah did, and God told him they would not listen, but that is what He wanted Zeke to do.
 

2018 December 28 -- AI (NNs) as Religion

Last summer it dawned on me that the nature and function of religion is to define for the believer what is True despite any contrary evidence. We all have our own religion(s), mostly different, so for the rest of us (other than the person under discussion), religion becomes "Believing what you know ain't so," because of course the rest of us have a different religion, and we know those things that person believes have no factual basis.

Case in point (two of them, actually): both come to light in last month's issue of ComputingEdge, one of two rags I get for free from the IEEE, of which I have been a member some forty years now. Usually the pages are filled with selected items reprinted from the various journals published by the IEEE, mostly meaningless froth put together by academics who are required to "publish or perish" but have nothing significant to say. This issue has seven of those too, beginning with "Scientific Computing on Turing Machines," which appears to be an April Fool's joke discussing how to do significant computation on the overly simplified abstract computer Alan Turing invented to analyze the mathematical properties of computers. Pretty much every computer capable of being programmed is a TM, and I almost wrote him to say so, before I realized his whole piece is a joke. On the next page a couple of DoD (military) academics define "computational engineering," which seems to be little more than using computer simulation to study the physical properties of what engineers used to do with slide rules and calculators. "Cyberthreats under the Bed" looks at the hazards of internet-connected toys. Duh. Anything connected to the internet has security problems; it's the nature of the internet. Don't put your credit card numbers into the kids' toys. Really, it shouldn't even be possible, but many Americans have more dollars than sense. And so on.

The last two, which are the subject of this posting, are more of the same, but they expose a particular kind of goofiness that seems to be on the rise in the computing industry as people come into it without even second-generation exposure to what really is true. The religion, as I pointed out last summer, is Darwinism, the nonsensical notion that millions of years -- or in the case of the computational version of it, thousands or even hundreds of random test data -- will overcome what is provably contrary to nature. Neural Nets (NNs) are still a well-funded research project in most academic circles, but people are beginning to see the cracks around the edges. Just not these two instances.

[...three more pages...]

They know and see the evidence, but they believe otherwise. It's religion.

Read the whole article
 

2017 August 12 -- Home Again

I don't know what the problem is (other than old age) but it hurts to sit in my car for hours on end. I didn't use to have that problem. Yes, it got tiresome after driving for fifteen or twenty hours straight, but now it hurts. I had to stop in almost every rest area and limber up.

The program is over, and the kids aced the demo. I thought they should have prepared more, but that seems to be what I need if I'm doing it, not them. They had good answers to the questions -- some of them hard questions -- and it looked very professional. The other team, the neural net (NN) kids, came second, and they spent a lot of time explaining NNs and their technology, and never showed their software doing anything. They (rotating speakers, everybody spoke for a minute or two on their respective slides, then went to the next person) admitted it was still training, "should be done in about an hour." People got up and walked out several times during their presentation.

Bottom line: I did what I went there for. The kids admitted that they had fun, and when asked why they did it that way, they said it was the structure I had offered, but then they bought into it as giving them understanding and control over the process. Other than access to the camera (which they actually went in and altered), they used no libraries at all; it was all their own code. They had working code at the end of the first week, and improved it from there. As I predicted two months ago, the NN approach that I recommended against was a black hole from which nothing emerged, and it was still unfinished at the end of the four weeks. NNs are based on Darwinist thinking, and the real world doesn't work that way. But it's religion; they are all convinced it's the way to go -- even the kids in my group, who know better. They must really get that hammered into their heads in school.
 

2017 August 10 -- End Zone

Picking the WIRED magazine up and setting it down several times -- mostly I read it during gaps in other activities -- I noticed the cover art, "What Lie Lies Ahead." With awesome prescience, the artist totally negated their whole feature. Frex, page 52 (good luck finding page numbers in any issue of WIRED) promises that "Software will protect itself." Don't bet your life savings on it. Two months ago I predicted that the AI efforts this summer would fail, whereas the design effort that I personally am coaching would succeed. I could know the latter because I actually implemented it, so if they got stuck, I could bail them out using known technology. The AI part was conjecture, based on the fact that I utterly failed to get Neural Nets (NNs) to work in my test case, plus the fact that it is cutting-edge technology occupying the attention of numerous PhD researchers and their graduate students, so it seems the height of hubris to suppose high-school students could succeed where the pros have not yet. Now, with one day to go, the kids have done everything they are going to do, and I have a hard time not gloating about the accuracy of my prediction. The kids in my group had working code the first week, and spent the rest of the time fine-tuning the results. From what I hear (the reports are somewhat ambiguous) the AI group next door still has nothing.

NNs are designed from the perspective of Darwinist theology. Natural Selection is an awesome way to fine-tune an existing genome to survive changing environmental conditions, but that is the opposite of creating new information. Relative to the WIRED prediction, they have drunk the Darwinist Kool-Aid which supposes that AI can be made smarter than people. It is false. AI can be made faster than humans doing particular things, but malware is NOT being designed by AI; it is being designed by smart people, who will simply learn where the new weak spots are and program around them. Programmers will never be eliminated by AI; if some jobs disappear, some of them will turn to programming malware that beats the security software, and the rest will go to their competitors for jobs beating the malware that beats the AI code...
 

2017 August 1 -- Going Beyond the Spec

Six months ago I set out to build a Java wrapper around the C API for the camera we got to use for the summer project. My goal was to make it as simple as possible, so the kids could get to running code quickly. They did so, much faster than I expected, so now they are looking at ways to improve the performance. Some of them have dived into the C code I wrote six months ago, to access features of the camera I did not contemplate. C is a horrible language for writing robust code, and they are indeed having a lot of trouble making their improvements work, but it's great fun -- both for them (which is the prime directive) and also for me watching them test their skills on a low-grade industry standard language.
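
For anyone curious what a "Java wrapper around a C API" looks like in general, here is a minimal sketch of the usual mechanism (JNI); the class and method names below (SimpleCamera, grabFrame) are hypothetical stand-ins, not the actual wrapper I wrote, and it obviously will not run without the compiled C glue library behind it.

    // Minimal sketch of the JNI mechanism behind a Java wrapper for a C API.
    // SimpleCamera, grabFrame, etc. are hypothetical names for illustration.
    public class SimpleCamera {
        static {
            System.loadLibrary("simplecamera"); // the compiled C glue code
        }

        // Each native method is implemented in C, calling the vendor's C API.
        public native boolean open();
        public native int[] grabFrame();        // one frame of pixels
        public native void close();

        public static void main(String[] args) {
            SimpleCamera cam = new SimpleCamera();
            if (cam.open()) {
                int[] pixels = cam.grabFrame();
                System.out.println("got " + pixels.length + " pixels");
                cam.close();
            }
        }
    }

The whole point of a wrapper like this is that the kids only ever see the Java side; the C side is where all the ugliness lives.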

One of the kids seems quite knowledgeable in Java, and he likes to use features recently added to the language, but which do not contribute to legibility nor better performance. He keeps running into problems -- duh! -- and then comes to me for advice on solving the difficulties. Me, I avoid those fancy useless features, so what can I tell him? "Don't do that!" He muddles through, then he can feel good about solving a language problem that stumped me. That's also "fun" (for him, the prime directive).

I think the kids are getting a lot of pressure in school to do neural nets (NNs), because NNs keep getting proposed as a solution. I guess they are sexy, but they are the topic of advanced research by graduate students and their professors, which suggests to me that high school kids would probably find them to be a black hole from which nothing emerges. That's certainly the experience of the group trying to do a self-driving car component, now in their third week with nothing to show for it. Not My Problem, except that a small subgroup of my kids are now trying to tune their pedestrian-finding code using a NN. If they fail, they still have the Designed-code solution already working and looking good.

Postscript: they got it working, but their description of what they did sounds less like a NN and more like an ad-hoc linear regression tool. But calling it a NN probably carries more bragging rights.
 

2017 June 20 -- Neural Nets and Darwinism

I got myself signed up to mentor a computer camp next month. I guess I mentioned it a couple times (like last month and last year) because I've never done anything like it, and I expect the main qualification I bring to the job is knowing the technology. Well, the technology is far too vast to know all of it, so I convinced him to do a project I knew how to do, then went and actually did it to prove they could.

One of the kids wasn't interested; he wanted to do the same thing with neural nets (NNs, a technology I knew nothing about). I said so, then proceeded to learn what I could from the net. Google is wonderful: if you know how to spell it (and sometimes even if you don't) you can find tutorials and explanations on the internet. There's lots of stuff on NNs, mostly vague generalities. There's a reason for that. But I found several references to a NN written in 11 lines of Python (a programming language). The code was absolutely unreadable.

But it gave me the idea to look for it in C, and sure enough, some professor in England had done it, a NN in only 30 lines of C. It was readable and well-explained, so I decided to try it. C and Java are almost identical (except for library code, where everybody is different), and I program every day in Java, so I got it working -- sort of. It goes through the motions, but gives wrong answers, even after thousands of "training" runs. The prof said not to start all the weights at zero, and suggested random numbers. I did that. What he didn't say is that those random numbers are essential, it cannot work any other way.

I'll try to explain. Thirty lines of C is an incredibly tiny program. The basic neuron code is five lines, repeated to drive the synaptic information forward, and then to drive the "back-propagation" (learning) backward after you tell it how wrong its guess was. Every neuron in a net of thousands of neurons is exactly identical, the only difference is the different weights applied to the synaptic data feeding forward from the image sensor. If you start with all the same weights, then every one of those neurons will give exactly the same result to the next layer, and the "learning" part will give exactly the same weight adjustments to every neuron. They may bounce around, but they will do it in unison, with absolutely no discrimination based on input.
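
Here is a minimal demonstration of the point (a toy two-neuron hidden layer of my own, not the professor's 30-line program): start the two hidden neurons with identical weights and watch the forward outputs and the back-propagated updates stay identical forever.

    // Minimal demonstration: two hidden neurons that start with identical
    // weights compute identical outputs and receive identical updates, so
    // they never learn to discriminate. Toy code, not the 30-line C program.
    public class SymmetricWeights {
        static double sigmoid(double z) { return 1 / (1 + Math.exp(-z)); }

        public static void main(String[] args) {
            double[] x = {1.0, 0.5};                   // one training input
            double target = 1.0;                       // desired output
            double[][] w1 = {{0.3, 0.3}, {0.3, 0.3}};  // both neurons identical
            double[] w2 = {0.3, 0.3};
            double rate = 0.1;

            for (int step = 0; step < 5; step++) {
                // forward pass
                double[] h = new double[2];
                for (int j = 0; j < 2; j++)
                    h[j] = sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1]);
                double out = sigmoid(w2[0] * h[0] + w2[1] * h[1]);

                // back-propagation
                double dOut = (out - target) * out * (1 - out);
                for (int j = 0; j < 2; j++) {
                    double dH = dOut * w2[j] * h[j] * (1 - h[j]);
                    w2[j] -= rate * dOut * h[j];
                    w1[j][0] -= rate * dH * x[0];
                    w1[j][1] -= rate * dH * x[1];
                }
                System.out.printf("step %d: neuron0 %.4f %.4f  neuron1 %.4f %.4f%n",
                    step, w1[0][0], w1[0][1], w1[1][0], w1[1][1]);
            }
            // every printed line shows neuron 0 and neuron 1 with the same weights
        }
    }

Start those weights at small random values instead, and the two neurons drift apart and begin dividing up the work, which is why the random initialization turns out to be essential.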

So how do God's NNs (the human brain) work? I suspect the neurons are not wired up 100% in parallel; they are pre-programmed to do certain cognitive functions. We know that humans are programmed to recognize faces, and the very few people with a defect in that part of their brain simply cannot do it, although everything else they do is perfectly normal. Sorry, no link: it was in a WIRED article several years ago, but I can't find it again (probably wrong search terms ;-) It's against Darwinist religion to allow for God, and the Christians have abdicated their responsibility to be telling the Truth to the scientists and technologists, so nobody knows what a crock of baloney Darwinism is. Based on their religion (not science, which goes the other direction) the Darwinists all believe that accumulating random variations gives rise to intelligent behavior. The real world doesn't work that way, nor do the NNs.

When you read the comments carefully, they admit it's "more of an art than science." Meaning that intelligent designers are injecting their intelligence into the program, the same as programmers have been doing for seven decades. Except the NN programmers are doing it covertly, under the table. Maybe if you are lucky with your initial weights and back-propagating code, it might work, but probably not. Real intelligence never came about by luck, never will. It is always put there by a Designer (or programmer) who is smarter than what he is designing ever will be.

I taught for a while at Kansas State University, which is an Agriculture school. One of their strengths is entomology (insects), because grasshoppers eat wheat and wheat is a major crop in Kansas. They also have departments in Ag Econ, Plant Pathology, even Bakery Science (how to bake "balloon" bread, from wheat). One of the entomology grad students told me that the biology profs don't tell their students the whole truth about evolution, at least not at the master's level and below, but they have to tell the truth to the PhD students, because they cannot do their dissertations without it. I got the same feeling about neural nets: the promoters don't tell you the truth until you are too invested to pull out of the fraud. That's too bad.

So I cannot help the kids who decide they want to do it in NNs. They will fail. It's called "entropy" and we can do anti-entropic things like refrigerators and programming computers, but only by design and a lot of hard work. Maybe the program director can find somebody who works with NNs and can tell the kids where to inject their intelligent design so people don't notice that's what they are doing, and then they can succeed. That person is not me. Three days ago I thought it might be, but now I know better.

Postscript, 10 days later -- I kept fiddling with my NN code and found some coding errors, then fiddled some more with the weights and formulas on a reduced version of the problem: hand-coded 3x5-pixel images repeated over and over. After some 500 training runs it was able to recognize all but one of them. When I increased that to almost 900 training runs, it failed on two of the ten test data (which are exactly the same as the training data). It reminds me of early descriptions of the Lenski E. coli experiment (before they noticed how bad it made Darwinism look, and stopped reporting anything after 10,000 generations), where somewhere around 15,000 generations the previously rising fitness curve turned south. Although my program now (sort of) works, my opinion of neural nets has not changed.

Careful, anti-entropic Design will beat the socks off any Darwinistic accumulation-of-errors approach to solving a problem, as I previously observed when I butted heads with Darwinist Richard Dawkins, 29 years ago.

See also my (non-NN) implementation of "Self-Driving Test" in 2 Days.
 

(This collection updated 2022 December 3)