My area of specialization (PhD thesis) is compiler optimization, but in grad school they had me do a secondary focus (some schools call it a "minor") and I chose Artificial Intelligence (AI). The definition of AI, we were told, was "Machines doing things that, if a human did them, would be called intelligent." That definition no longer applies to what they call AI today. Forty years ago, AI tried to do the kinds of things that made what we call "thinking" hard, and probably the hardest kind of thought is mathematical theorem proving. That problem is now solved, and we no longer see research on how to prove mathematical theorems, only on which theorems to prove next, and the provers don't do this by anything we would call "intelligent" if a human did it. True artificial intelligence is dead.
Today, pretty much everything that gets called "Artificial Intelligence" consists in modelling what is thought to be the fundamental basis of human intelligence, which is the neuron. We know that there are well over a hundred billion neurons inside the human skull, and we know that neuron activity (as it appears in MRI scans) is associated with thinking, but we can only guess at how that happens. These guesses are at about the same level of understanding as cellular biology in 1859, when scientists knew about cells but imagined them to be undifferentiated blobs of protoplasm. That made it easy for Charles Darwin to suppose that the gradual accumulation of tiny incremental increases in complexity could accommodate all manner of system complexity. Today molecular biology has a far deeper understanding of how cells work, and while many scientists still believe the neo-Darwinian hypothesis applies to biology (and to all of science) in general, none of them can show how their own specialization supports it (see my essay "Biological Evolution: Did It Happen?").
The Darwinian hypothesis has acquired Constitutional Establishment status in the USA, so it is hammered into the minds of impressionable young children in the public (and many private) schools, and every one of them grows up believing it's true, and then shapes important decisions around its erroneous trivialization of the Nature of Things.
In computer technology, this means that Agile software development -- the supposition that the gradual accumulation of small incremental increases in complexity can accommodate all manner of total software complexity -- is now the norm. Real programmers of course know it is hokum, but they can hide their genuine leaps of Design in some of the more vague and dark corners of the Agile methodology (for example the initial specification), or else they can call it "refactoring." The deception is productive -- Design (not evolution) is in fact the only way large software systems acquire their functional complexity -- so nobody has the temerity to expose the lie.
Neural Nets (NNs) are another computer technology built on the same lie. The fundamental (and only) mathematical basis underlying NNs is the supposition that all manner of system complexity can be understood as a multidimensional surface which is continuous and smooth (that is, it has continuous derivatives) and mostly convex around the solution loci. Finding any solution in that space consists in randomly choosing a starting point, then applying hill-climbing strategies to find a local maximum -- that's the gradual accumulation of tiny incremental increases in complexity -- which is defined to be the solution. You know this is happening in the minds of the people doing this, because all the descriptions explain it in terms of "tensor calculus" (the mathematics of multidimensional arrays, a generalization of vectors), which they explicitly use for the hill-climbing incrementalism, and at least one of the popular NN software packages has the word "tensor" as part of its name.
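Spelled out in code, the hill-climbing looks something like the toy sketch below: a one-dimensional bumpy surface, a random starting point, and tiny steps uphill until you land on whatever local peak is nearest. This is my own illustration, not code from any NN package; real NNs do the same thing across thousands or millions of weights, and usually descend an error surface rather than climbing one.

    # A minimal illustration of hill-climbing (gradient ascent) on a smooth
    # one-dimensional "surface".  A toy example, not code from any NN package.
    import math
    import random

    def surface(x):
        # A smooth, bumpy curve: two hills, one taller than the other.
        return math.sin(x) + 0.4 * math.sin(3.0 * x)

    def slope(x, h=1e-5):
        # Numerical derivative; back-propagation computes this analytically.
        return (surface(x + h) - surface(x - h)) / (2.0 * h)

    def hill_climb(start, step=0.01, iterations=5000):
        x = start
        for _ in range(iterations):
            x += step * slope(x)          # take a tiny step uphill
        return x

    random.seed(1)
    for _ in range(3):
        start = random.uniform(0.0, 2.0 * math.pi)
        peak = hill_climb(start)
        print("started at %.2f, stopped at %.2f (height %.3f)"
              % (start, peak, surface(peak)))
    # Different random starts stop on different local peaks; nothing in the
    # procedure knows whether a taller hill exists somewhere else.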
The problem is, the NN people are lazy. Because they accept on faith the Darwinian hypothesis that all manner of complexity came about by random tiny incremental increases, they suppose it must be a Law of Nature: design their NNs to work that way and the NNs must necessarily be able to solve all manner of problems, no matter how complex, and so they stop thinking about it. The Real World is not continuous and smooth; it is lumpy and full of holes and discontinuities. I watched a college student try to implement a NN to play a simple game. He had his math wrong, so I helped him work the problem out manually: basically his game avatar needed to turn left or right based on whether the tangent of a certain angle was positive or negative. The problem is that the tangent function is discontinuous (it goes to infinity at +/- 90 degrees), so even after he understood the math, his NN still didn't work. He later told me he got it working, but I did not examine his code to see whether he had restricted his search space to a continuous region of the surface.
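To see why the discontinuity matters, here is a toy sketch (my own reconstruction of the issue, not the student's game code). The left/right decision is the sign of the tangent, but the tangent itself blows up at +/- 90 degrees; the product sin(theta)*cos(theta) has the same sign everywhere the tangent is defined and stays smooth and bounded, so it gives a hill-climber no cliff to fall off.

    # Toy illustration: sign(tan(theta)) vs. the smooth sin*cos equivalent.
    import math

    def steer_by_tangent(theta_deg):
        t = math.tan(math.radians(theta_deg))
        return "left" if t < 0 else "right"

    def steer_smooth(theta_deg):
        r = math.radians(theta_deg)
        s = math.sin(r) * math.cos(r)     # = 0.5*sin(2*theta), never infinite
        return "left" if s < 0 else "right"

    for theta in (10.0, 80.0, 89.999, 90.001, 100.0):
        t = math.tan(math.radians(theta))
        print("theta=%8.3f  tan=%14.2f  tangent says %-5s  smooth says %s"
              % (theta, t, steer_by_tangent(theta), steer_smooth(theta)))
    # Near 90 degrees the tangent swings from a huge positive number to a
    # huge negative one; a hill-climber trained on that value sees a cliff,
    # but the sign test itself can be computed without ever going there.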
The Real World -- and like it, a lot of the software I write -- is in many instances Irreducibly Complex (IC). That is, there are a lot of complex systems that are completely non-functional if you remove any single critical component. There is no way for such a system to grow incrementally from a less-complex working system by small improvements. The biologists have been arguing against IC for a couple of decades now, but mostly it is merely inconvenient for their "science" to deal with such problems. Nothing that ever came out of biology with any real-world function -- think: medicine, or genetic modification of crops to produce better yields or drought and pest resistance (which alleviates poverty and famine in many parts of the world) -- depends on the question; where the biological rubber hits the road, pontificating about origins is irrelevant and sometimes counter-productive. Similarly, the reality that Agile software teams live in does not match their theory, but theory takes second place to delivering product, so they also survive. AI is more tightly bound up with the contradiction, and the lie is therefore more blatant.
There are problem spaces that are smooth and convex, and the Japanese have gotten very good at optimizing those problems, that is, at finding the top of the hill. But mostly they don't create truly new products the way Americans are famous for doing, because that takes a different way of thinking. Incrementalism works when you start near the solution and are looking for optimum parameters, but it fails utterly if you are nowhere near the solution when you start, that is, if you are trying to invent something unlike anything that has been done before. Some of the smarter researchers are beginning to notice; see the postscript below.
This last summer I mentored some high school students in an "advanced programming workshop." I offered them a problem in computer vision to solve: finding pedestrians in a live video feed from a windshield-mounted camera. The general case is an incredibly difficult problem, but a few simplifications (bright "neon" solid colors, pedestrians walking) brought it very much within their skill level. Some of them decided they wanted to solve the same problem using NNs. Even absent the Darwinian philosophical problems, it would have been a challenge far beyond their wildest hope of success: huge companies like Intel and Google and Tesla are spending millions of dollars trying to solve the same problem (without the simplifications ;-) and PhD students and university faculty all over the country are spending what some might argue are the world's best minds on it. Are high school students going to do in four weeks what those others have been working on for years and have not yet achieved?
So we proposed to the NN kids a simpler problem: based on the same camera feed (a driver's view of the road), decide how to steer the vehicle. It's not a hard problem to design a solution for (see "How to Steer a Car" below). I did a simple proof of concept: not having race-track video nor access to the track, I downloaded the track map for a local go-kart track off the internet and wrote a program to calculate and display what the driver should see at any point on the map, then worked from that. I had it running in a day, and it took another couple of days to turn the display into a video (watch it here). A neural net solution to the same problem needs to do the same thing, but that's not how these people think. They worked on it for four weeks and failed. By comparison, a slightly larger group of kids working on the original problem using a Design methodology had it running in a week, and it worked as well as what had taken me a couple of months to do on my own. These are smart kids. The NN team did not fail by being stupid.
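For the curious, the sketch below shows one way such a re-creation can work: take the car's position and heading on a flat 2-D track map and project the track-edge points into a driver's-eye view with simple pinhole perspective. This is my illustration of the idea only, not the program described above; the focal length, camera height, and screen size are placeholder numbers.

    # Projecting a flat track map into a driver's-eye view (illustration only).
    import math

    def to_camera(point, car_xy, heading_rad):
        # Translate so the car is at the origin, then rotate so "forward"
        # is along the car's heading and "right" is to the driver's right.
        dx = point[0] - car_xy[0]
        dy = point[1] - car_xy[1]
        forward = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)
        right = dx * math.sin(heading_rad) - dy * math.cos(heading_rad)
        return forward, right

    def project(point, car_xy, heading_rad,
                focal=300.0, cam_height=1.2, horizon_row=120, center_col=320):
        forward, right = to_camera(point, car_xy, heading_rad)
        if forward < 0.5:
            return None                        # behind (or at) the camera
        col = int(center_col + focal * right / forward)
        row = int(horizon_row + focal * cam_height / forward)
        return col, row                        # pixel position on a 640x480 screen

    # Example: points along the right-hand edge of a straight track,
    # two meters to the driver's right, receding into the distance.
    edge = [(x, -2.0) for x in range(5, 65, 10)]
    car, heading = (0.0, 0.0), 0.0             # car at the origin, facing +x
    for p in edge:
        print(p, "->", project(p, car, heading))
    # Nearby edge points land low and far to the right on the screen; distant
    # ones crowd toward the horizon and the center, tracing the familiar wedge
    # of road the display program needs to paint.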
Why did the NN team fail? You can look at their presentation at the end of the four weeks to see what they were thinking. Since their AI software was not producing usable results, they did a "song-and-dance" based on what they expected it to do, mostly images and ideas scraped off the internet. The basic NN, they said (consistent with what I found on the internet), has several layers. The first layer is the pixels off the camera. The final layer is supposed to recognize (so the presenter said) a dog, but the intermediate layers might be looking for intermediate shapes like "triangles" (her word). Now I ask you: how is finding triangles in a raw pixel image going to help you recognize a dog? Or, for that matter, whether the road is too far to the left or right? If a NN is going to recognize dogs or cats or people or even a simple road, it needs to be seeing the component parts of whatever it's looking for, which are almost certainly not triangles. Maybe the perspective on the road ahead looks like a triangle, but being a triangle (or not) does not tell you which way to steer the car.
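For concreteness, here is the skeleton those presentations describe: a stack of layers, each one just a weighted sum of the previous layer's outputs pushed through a squashing function. This is my own toy sketch with random, untrained weights and a made-up 4x4 "image"; the point is the shape of the thing, not the answer it gives. Nothing in the structure says what the middle layers will end up "looking for" -- that falls out of whatever the training happens to converge on.

    # Skeleton of a layered NN: weighted sums plus a squashing function.
    import math
    import random

    def layer(inputs, weights, biases):
        # One layer: each output neuron is a weighted sum of all the inputs,
        # pushed through a sigmoid.
        outputs = []
        for w_row, b in zip(weights, biases):
            total = b + sum(w * x for w, x in zip(w_row, inputs))
            outputs.append(1.0 / (1.0 + math.exp(-total)))
        return outputs

    def random_layer(n_in, n_out):
        weights = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_out)]
        biases = [random.uniform(-1, 1) for _ in range(n_out)]
        return weights, biases

    random.seed(0)
    pixels = [random.random() for _ in range(16)]   # stand-in for a tiny 4x4 camera image
    w1, b1 = random_layer(16, 8)                    # first hidden layer
    w2, b2 = random_layer(8, 4)                     # second hidden layer
    w3, b3 = random_layer(4, 2)                     # output layer ("dog" / "not dog", say)

    hidden1 = layer(pixels, w1, b1)
    hidden2 = layer(hidden1, w2, b2)
    scores = layer(hidden2, w3, b3)
    print("output scores:", scores)
    # Training adjusts every one of those weights by hill-climbing on the
    # error; the structure itself dictates nothing about triangles or dogs.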
That's worth some additional comment. On one of the internet sites I looked at, the guy was explaining how his NN was able to match a squiggly "S" curve separating the left and right halves of a scene of red and blue dots, red to the left of the line, blue to the right. Or something like that. After suitable training, his NN did that. He showed the intermediate results: the image neurons were getting weighted into the different middle-layer neurons according to their position and color. He didn't say what happens if he gave it a different shape separating the two colors. So his NN probably could find a particular triangle of a particular size and shape in a particular orientation, but what happens if you rotate the triangle? If you rotate it upside down, the NN might be unable to tell it apart from a Star of David hexagram (which is two triangles superimposed, one inverted). How many different triangles do you need to train it on before it recognizes "triangleness"? Or will it ever? The government (NIST) has a database of 60,000 handwritten numerals, all normalized into 28-pixel squares and properly tagged for training NNs. The original scans were black on white, but NNs require smooth data, so these images have all been blurred into grayscale. With normalized data like that, I might could Design a program to look for regions of light and dark in particular parts of each square, and get pretty good results. I imagine that's basically what the NNs written to use this database do, like that guy's NN that found red and blue dots on either side of an "S" curve.
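To make the "regions of light and dark" idea concrete, here is the kind of Designed test I have in mind, as a hypothetical sketch. I have not run this against the actual NIST images; the zone boundaries and thresholds are invented for illustration, and the test image is synthetic. The point is that each test is something a person chose and can explain, not a weight that fell out of training.

    # Hypothetical "regions of light and dark" digit tests (illustration only).
    # Each image is a 28x28 grid of grayscale values, 0.0 = white, 1.0 = ink.

    def zone_ink(image, row0, row1, col0, col1):
        # Fraction of the maximum possible ink inside one rectangular zone.
        total = sum(image[r][c] for r in range(row0, row1)
                                for c in range(col0, col1))
        return total / ((row1 - row0) * (col1 - col0))

    def looks_like_one(image):
        # A "1" puts most of its ink in a narrow vertical band near the middle
        # and leaves the left and right thirds mostly white.
        middle = zone_ink(image, 4, 24, 11, 17)
        left = zone_ink(image, 4, 24, 0, 9)
        right = zone_ink(image, 4, 24, 19, 28)
        return middle > 0.3 and left < 0.1 and right < 0.1

    def looks_like_zero(image):
        # A "0" has ink around the rim of a central oval and a light hole
        # in the middle.
        hole = zone_ink(image, 10, 18, 10, 18)
        rim = zone_ink(image, 4, 24, 6, 22) - hole
        return hole < 0.1 and rim > 0.15

    # Crude synthetic test image of a "1": a dark vertical stroke.
    one = [[0.0] * 28 for _ in range(28)]
    for r in range(4, 24):
        for c in range(12, 15):
            one[r][c] = 1.0
    print("looks_like_one:", looks_like_one(one),
          " looks_like_zero:", looks_like_zero(one))
    # Prints True and False: the stroke fills the middle band and misses
    # the conditions for a zero.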
The author(s) of one older paper on training a NN to steer a van down the road took the time to analyze what the intermediate layer was actually seeing after multiple training runs. They showed grainy pictures that gradually came to resemble the road they were training it on. Does anybody try to understand what is going on anymore today? I see no evidence of it. They have blind faith in the power of Darwinian incrementalism. I don't have that much faith; I want to see why. Maybe I won't understand everything that way, but I sure want to understand the technology that I build my life's career on. I did that. The kids in my Design group are well on their way to doing the same with their careers. The six kids in the NN group are in deep doo-doo. I wouldn't want to hire them to build anything I care about. Perhaps they will learn to be more circumspect about choosing solutions for their computational problems -- how? The educators aren't telling them! -- or (more likely) they will get hired to help program the autonomous car you are sharing the road with, or the telephone robot you are trying to get past before you can report that your house is on fire or that your mother fell down the stairs. Hello?
They tell me that all the autonomous vehicle projects are using NNs for just about everything. Let's suppose that the software development teams have actually figured out how to set the training parameters so that hill-climbing works and the NNs do in fact learn how to steer the car and keep it within its lane. It's a hard problem, rather harder than the Design solution (see "How to Steer a Car" below), as was amply demonstrated this last summer, but not impossible. Let's further suppose that they did the same thing with stop signs and oncoming vehicles.
They tried to do it with bicyclists, but as one of them admitted to a friend of mine, "when the cyclist stops, he disappears." You see, the NN is simply taking an average of all the bicycles it was trained on, and they are all moving. That same company now has self-driving cars on the road, but I have not heard of any bicycles being hit; perhaps they improved their software (or maybe they don't test their cars in Portland). The kids this summer, trying to get their NN to steer the (simulated) vehicle, found it did great on straightaways but couldn't do curves, because they mostly trained it on straightaways. Do you see the problem? We do not teach human drivers the way these teams are training their NNs: "Here's the keys, kid. Don't worry if you mash it up or kill a pedestrian, we'll get you another car and another pedestrian; it will probably take a thousand or more before you get the hang of it." I was in the car when one of these kids was being taught to drive by his father, and he didn't do it that way at all. "See that car there, you can come in closer, because he will ..." The young driver is being told to do theorem proving on the driving situation, drawing implications from what the other drivers are (presumably) thinking. NNs cannot do that. Maybe the autonomous car projects are working that into their systems, but they aren't saying so -- perhaps to keep their competitors in the dark, but more likely because they aren't doing it.
They will tell us, sooner or later. Before any responsible state government will permit a driverless vehicle on the road, they will want full disclosure on how it decides how to deal with unforeseen situations. Right now, the owner of the car is responsible for staying alert and ready to take over. They won't be, and people will get killed, and one of the victims will get a high-powered lawyer who will take down a large automobile company the way they took down the Pinto and the "unsafe at any speed" Corvair. When people see how foolish the training materials are, and how mechanical and UNintelligent the trained NNs driving those autonomous vehicles are, every one of those NN-controlled autonomous models will be forced off the road. The big-corporation lawyers already see this coming and are trying to do damage control (see my blog post "Robot Cars & Law" from last year). The big corporations have more money than you and I do, and their lawyers will win the first two or three cases. Then a celebrity -- or her child -- will get killed, and the engineers who programmed and trained the autonomous vehicle that killed the child will go to jail. And then there won't be any engineers willing to work on autonomous vehicles. Everybody will say "No way, I'M not going to jail for this."
There are entropic considerations -- you know, the physics that tells us Darwinian evolution cannot happen -- which suggest that we probably will never get a robot car as smart as a thoughtful human driver. But we almost certainly can get one smarter than most drivers -- you know, the ones who are stoned or drunk or texting or fighting with the kids in the back seat or talking on the phone, anything but paying attention to the road. Neural Nets will not get us there, though. If it happens, it will be because the AI industry has gone back to inferential logic.
NNs are where the money and the buzz are -- today. If you are in technology, do you want to be stuck in a backwater that nobody understands when everything hits the fan? Or do you want to be already leading the way out of the Slough of Despond? Think about what your NN is doing inside, how it makes its decisions: are they good inferences? It's not magic: either the NN will make good logical choices, or it will make bad decisions. Do you want to be the one they put in jail because the decisions were faulty? At least with inferential logic, you can defend the validity of the decision process in court. Think about it.
Tom Pittman
Revised 2017 October 6, 2018 August 22, 2019 November 28
Postscript. The second year the kids had a robot car following a track (in simulation) at the end of the first week. By the end of the four weeks they had a real car driving around a real track -- and even stopping for stop signs. All designed code, all written by them.
Some time after I posted this essay, I happened to be reading an issue of the IEEE house organ Spectrum, which featured a follow-on to their "Robot Cars & Law" article from last year. Titled "The Self-Driving Car's People Problem," author Rodney Brooks makes, in non-technical terms (no mention of neural nets), essentially the same point I made here. In another issue of Spectrum, in an article titled "Making Medical AI Trustworthy" (the title tells it all), we find:
The model found that people with heart disease were less likely to die of pneumonia and confidently asserted that these patients were low risk... "The correlation the model found is true," Caruana said, "but if we used it to guide health care interventions, we'd actually be injuring -- and possibly killing -- some patients." -- p.9, Aug.2018
[no link, their website is encrypted and not open to the public, but Google can find it from the title, if you are so inclined]
A later issue of Spectrum had an item titled "A Man-Machine Mind Meld for Quantum Computing," in which the authors candidly admit
Although algorithms are wonderfully efficient at crawling to the top of a given mountain, finding good ways at searching through the broader landscape poses quite a challenge...
and then go on to describe creating video games to get "crowd-sourced" people to do the heavy thinking on problems that their AI engines weren't doing very well with.
More recently I ran across a relevant post in my Weblog, "The End of Code," that I had forgotten about.
According to WIRED (no link, it's now both encrypted and behind a "paywall"), Amazon funds what they call the Alexa Challenge (or something like that), a million-dollar prize for the first AI robot that can engage a human in small-talk "chat" for 20 minutes. They repeat it every year, because nobody has won. Of the top three contenders in the most recent iteration, the NN-only entry finished third, behind "handcoded" (design) and a combination of NNs+design.
More recently, an autonomous vehicle killed a bicyclist. There was a designated driver in the driver's seat, but the video camera trained on the driver's face showed the driver wasn't looking -- it's a self-driving car, right? No need to look! Except when the computer screws up, and then it's too late. That driver will take the fall for this one fatality. Is anybody willing to take that seat? Not if they're thinking clearly. One fatality is "operator error" (the vendor gets off free), but ten fatalities is a manufacturing defect. It will happen, unless you prohibit pedestrians and bicycles.
See also "Programming 'Deep' Neural Nets," which explains why NNs cannot be, nor become, "intelligent."
How to Steer a Car

The paved track is shown gray, and everything off-track is shown in green. The tan line down the middle is calculated by the software as the midpoint between the two edges. I wrapped a piece of "white tape" around the top middle of the steering wheel to make it easier to see when it is being steered (you can watch the video here). PatsAcres has white lines along the edges of the pavement, which are only hinted at in this re-creation by the sandy-pink dots.
Steering is easy: you take the average horizontal position of that tan line, and if it is to the left of the car center you steer to the left, or if to the right, steer right. If you badly over-steer or under-steer, the car will oscillate or run off the track, but otherwise it self-corrects and successfully drives around the track. You can see a little oscillation in the video due to the digital steering -- also I didn't try very hard to fine-tune the ratio.
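In code, the whole rule is a few lines. Here is a sketch of the idea (not my actual program, which also draws the display and runs the simulation); the gain and the clamp limit are placeholder numbers you would tune to the vehicle.

    # The steering rule from the text, as a minimal sketch.
    def steer(centerline_columns, image_width=640, gain=0.5, limit=30.0):
        """Return a steering angle in degrees; negative = left, positive = right."""
        if not centerline_columns:
            return 0.0                            # no line found: hold the wheel
        average = sum(centerline_columns) / len(centerline_columns)
        offset = average - image_width / 2.0      # pixels right of image center
        angle = gain * offset
        return max(-limit, min(limit, angle))     # clamp to avoid gross over-steer

    # Example: the centerline pixels sit mostly left of center, so steer left.
    print(steer([200, 220, 250, 280, 300]))       # -> -30.0: hard left (clamped)
    print(steer([330, 325, 318, 312]))            # -> 0.625: a gentle right correction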
Training a neural net to steer the vehicle is not much harder than training a NN to find a curved line separating two colored regions of the image. In fact, you could use this video as the training data by cropping off the steering wheel and feeding the position of the white tape back into the NN to back-propagate its error calculation. It's a convex, hill-climbable surface: the turn-left crest measured by how far that average is to the left, and the turn-right crest similarly measured by how far the average is to the right. It's more effort than just coding up a Design to do it, but doable -- perhaps even less jerky than my (untuned) solution. But it is easier to add driving instructions ("Turn left at the next opportunity") to the Design solution.
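For comparison, here is what the NN version of that boils down to, in a toy sketch of my own (not the students' code): a single neuron trained by back-propagating the squared error between its output and the steering angle a Designed rule would give. With one well-chosen input (the centerline offset, already extracted from the image) the error surface really is convex and the hill-climbing works; getting from raw camera pixels to anything that well-behaved is the hard part.

    # One neuron learning the steering rule by gradient descent (toy sketch).
    import random

    def target_angle(offset):
        # The "teacher": a Designed rule, full lock (30 degrees) at the edge.
        return 30.0 * offset

    random.seed(2)
    w, b = random.uniform(-1, 1), random.uniform(-1, 1)
    learning_rate = 0.1

    for step in range(2000):
        offset = random.uniform(-1.0, 1.0)    # centerline offset, -1 = far left
        predicted = w * offset + b            # the neuron's steering output
        error = predicted - target_angle(offset)
        w -= learning_rate * error * offset   # gradient of 0.5*error^2 wrt w
        b -= learning_rate * error            # ... and wrt b

    print("learned w=%.2f b=%.2f (the Designed rule is w=30, b=0)" % (w, b))
    # After a couple thousand samples the neuron lands very close to the
    # Designed rule it was trained against.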
Finding pedestrians is far harder, as pedestrians come in all different sizes and shapes and colors. They can be walking or standing, or change their mind and go the other way halfway across the street. They can show up out of nowhere between parked cars. If you see a ball (too small to be a pedestrian) roll across the street, there might be a pedestrian (a child) following very quickly. Animals run out onto the street all the time, and the less domesticated ones then stop in the middle. You hit a deer and bad things happen -- I once lost a student that way -- at the very least you get a huge car repair bill. Many of the pedestrians in the test video I gave the students were almost the same color as the background on the foggy day that video was shot. At night, you might see only a pair of reflector shoes bouncing up and down, or as little as a dark smudge occasionally blocking the oncoming headlights. Human drivers learn to make correct inferences from those kinds of anomalies, but how do you train a neural net to do that? My summer kids didn't even try. But they did a pretty good job finding moving pedestrians wearing solid colors in daylight. Maybe neural nets could have found them too, but where do you get that kind of training data? You need thousands of video clips of pedestrians in solid colors walking past the camera in street scenes. That data doesn't exist. There are one or two databases of all kinds of pedestrians, and not even the experts' NNs can properly recognize half of them. It's a very hard problem. Do you want to be a pedestrian in a crosswalk when a NN-driven car is coming up the street? I don't. Especially not if it drives like a Texas driver. Oregon drivers tend to be more careful, but last Sunday a car didn't stop for me. Fortunately, I was looking, and my vision is still pretty good (neither Oregon nor Texas is a good state to grow old in).
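For what it is worth, the simplified problem comes down to something like the sketch below: keep pixels that are both brightly colored and changed since the previous frame, then report where they cluster. This is my illustration of that kind of approach, not the students' code; the color and motion thresholds are invented, and the "frames" here are tiny synthetic grids rather than real video.

    # Sketch of a "bright solid color plus motion" detector (illustration only).
    # Frames are small lists-of-rows of (r, g, b) tuples.

    def is_neon(pixel, threshold=180):
        # "Bright solid color": one channel strongly saturated and dominant.
        r, g, b = pixel
        return max(r, g, b) >= threshold and (max(r, g, b) - min(r, g, b)) >= 80

    def moving_neon_box(prev_frame, frame, motion_threshold=40):
        hits = []
        for row in range(len(frame)):
            for col in range(len(frame[row])):
                p, q = prev_frame[row][col], frame[row][col]
                moved = sum(abs(a - b) for a, b in zip(p, q)) >= motion_threshold
                if moved and is_neon(q):
                    hits.append((row, col))
        if not hits:
            return None
        rows = [r for r, _ in hits]
        cols = [c for _, c in hits]
        return (min(rows), min(cols), max(rows), max(cols))   # bounding box

    # Tiny synthetic test: a gray background with a neon-green blob that
    # shifts two columns to the right between frames.
    gray, green = (90, 90, 90), (40, 255, 60)
    def frame_with_blob(col0):
        f = [[gray for _ in range(12)] for _ in range(8)]
        for r in range(3, 6):
            for c in range(col0, col0 + 2):
                f[r][c] = green
        return f

    print(moving_neon_box(frame_with_blob(4), frame_with_blob(6)))
    # -> (3, 6, 5, 7): a box around the newly arrived neon pixels,
    # rows 3..5, columns 6..7, where the "pedestrian" moved to.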
Tom Pittman
2017 September 13