Case in point, two of them, both came to light in last month's issue of Computing Edge, one of two rags I get for free from the IEEE, of which I have been a member some forty years now. Usually the pages are filled with selected items reprinted from the various journals published by the IEEE, mostly meaningless froth put together by academics who are required to "publish or perish" but have nothing significant to say. This issue has seven of those too, beginning with "Scientific Computing on Turing Machines," which appears to be an April Fool's joke discussing how to do significant computation on the overly simplified abstract computer Alan Turing invented to analyze the mathematical properties of computers. Pretty much every computer capable of being programmed is a TM, and I almost wrote him to say so, before I realized his whole piece is a joke. On the next page a couple of DoD (military) academics define "computational engineering," which seems to be little more than using computer simulation to study the physical properties of what engineers used to do with slide rules and calculators. "Cyberthreats under the Bed" looks at the hazards of internet-connected toys. Duh. Anything connected to the internet has security problems; it's the nature of the internet. Don't put your credit card numbers into the kids' toys. Really, it shouldn't even be possible, but many Americans have more dollars than sense. And so on.
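For anybody who has forgotten what Turing's gadget actually is, it's nothing but a state table driving a read/write head along a tape. Here is a toy sketch in Python -- my own few lines, not anything from the article, and the bit-inverting rule table is made up purely for illustration:

```python
# A minimal Turing machine: rules map (state, symbol) -> (write, move, next state)
def run_tm(rules, tape, state="start", blank="_", steps=1000):
    cells = dict(enumerate(tape))   # sparse tape, extends in either direction
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A made-up machine: invert every bit, halt at the first blank
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(invert, "1011"))  # -> 0100
```

Anything your laptop can compute, some (tediously large) rule table like this can compute too; that equivalence is the whole point of the model, and why doing "scientific computing" directly on one is a joke.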
The last two, which are the subject of this posting, are more of the
same, but they expose a particular kind of goofiness that seems to be on
the rise in the computing industry as people come into it without even
second-generation exposure to what really is true. The religion, as I
pointed out last summer, is Darwinism, the nonsensical notion that
millions of years -- or in the case of the computational version of it,
thousands or even hundreds of random test data -- will overcome what is
provably contrary to nature. Neural Nets (NNs) are still a well-funded
research project in most academic circles, but people are beginning to
see the cracks around the edges. Just not these two instances.
You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally computers excel at speed, scale, and scope...

This is all true. He is the expert. But then he forgets it in the next paragraph. He seems to believe, against all the evidence he is professionally very familiar with, that computers will suddenly become human. It's fun sci-fi, but lousy science. He says:
Humans, conversely, excel at thinking and reasoning... They can find new sorts of vulnerabilities in systems...
Computers -- so far, at least -- are bad at what humans do well. They're not creative or adaptive. They don't understand context...
Humans are slow, and get bored at repetitive tasks. They're terrible at big data analysis...
...Here are possible AI capabilities:

That's what he said humans do. Computers do repetitive things, like finding the same vulnerabilities. How is AI going to find new things unless it has been trained to find those new things -- but then they wouldn't be new, would they?
* Discovering new vulnerabilities -- and more importantly, new types of vulnerabilities...
* Reacting and adapting to an adversary's actions... This includes reasoning about those actions and what they mean in the context of the attack and the environment.

That may have been true in the old-school AI thinking that was prevalent when Schneier and I were in school, but nobody does AI that way any more; it's all Skinnerian conditioned response (see "The End of Code" and "The Problem with 21st Century AI"). Either Bruce Schneier is getting too old to do his job and is fixing to retire (so he doesn't care if he's wrong, like the French king just before the Revolution, "Apres moi le deluge" = it all happens after I'm gone), or else he's in for a big surprise. The Bad Guys are humans; they will be discovering new types of vulnerabilities that the NNs have not been trained for, and the security people coming to Schneier for advice will be screwed. Their problem, not mine (I'm with the French king).
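The Skinnerian point is easy to make concrete. A conditioned-response scanner -- whether its "knowledge" is literal signature strings, as in this toy of mine, or synapse weights conditioned by training data -- flags only what resembles what it was trained on. The signature strings and payloads below are made up for illustration:

```python
# A "trained" detector is just its training, nothing more
signatures = ["' OR 1=1", "<script>", "../../etc/passwd"]  # patterns it was taught

def flags(payload):
    # conditioned response: match against remembered patterns, no reasoning
    return any(sig in payload for sig in signatures)

print(flags("name=' OR 1=1 --"))      # True: resembles its training
print(flags("union select password")) # False: a "new type," never trained for
```

A human attacker invents the second payload precisely because he knows the machine was never conditioned to respond to it.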
In the first subtitled section, "Why Explainable Deep Learning?" the authors show that they understand the difficulty selling their religion to the unwashed masses: "...the end-to-end learning paradigm hides the entire decision process behind the complicated inner-workings of deep-learning models." It's not that the models are complicated -- they are not -- but that the decision process is utterly illogical, depending purely and only on the luck of similar features being detected in the learning process, and those similarities being rewarded by the training data. Because it's luck and not algorithmic (logical), the practitioners themselves cannot explain it, so they assume an intelligent complexity which they personally have not yet penetrated. It is human nature to find order and logic where none exists.
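The luck is easy to demonstrate. Take two copies of the simplest possible learner, a perceptron, give them identical training data but different starting weights, and both fit the training data perfectly -- yet they disagree on a new input. Which "decision" you get was settled by the initial weights, not by any logic. All the numbers here are made up for illustration:

```python
# Classic perceptron training: nudge the weights only on a wrong answer
def train(w, data, epochs=10):
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0]*x[0] + w[1]*x[1] > 0 else -1
            if pred != y:
                w = [w[0] + y*x[0], w[1] + y*x[1]]
    return w

data = [((1, 0), 1), ((0, 1), -1)]   # two training points, trivially separable
wa = train([1.0, -1.0], data)        # starting weights A
wb = train([0.1, 0.1], data)         # starting weights B

# Both classify the training data perfectly, yet on a novel input...
probe = (0.2, 0.15)
print(wa[0]*probe[0] + wa[1]*probe[1] > 0)  # True
print(wb[0]*probe[0] + wb[1]*probe[1] > 0)  # False
```

Scale that up to millions of weights and the "decision process" the authors want to explain is exactly this kind of accident, repeated everywhere.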
In their first explanatory section, "Educational Use and Intuitive Understanding with Interactive Visualization," they offer a three-dimensional visualization, where the third dimension on a flat display is color, and then conclude:
These tools and systems provide effective interactive visualization for interpreting deep learning models, but most of them are limited to simple models and basic applications, and thus their applicability remains far from real-world problems.
Did you catch that? These people really believe that, like the assumed success of the failed Darwinian hypothesis which is their theoretical foundation, vast numbers of random events in NNs will overcome the failures demonstrated by small numbers of them. They know they cannot humanly understand the vast numbers, and the visual analytics they offer cannot make it any better. But somehow they still believe. It's religion. It's faith. The example in their image shows their data for the handwritten numeral recognition example that convinced me of the utter failure of the NN methodology last year -- including the failure points clearly visible in the image: heavy overlap between the "2"s (red) and the "7"s (pale blue), and between the "3"s (lavender), "5"s, and "6"s (two shades of light blue), with a dozen more "2"s and "8"s (green) and others scattered among the other numeral clusters. Real people have no trouble telling these numerals apart, and with enough tweaking of the initial synapse weights and/or the training data, NNs can be trained to tell them apart too, but apparently not very well. Just look at those scattered wrong numbers. Yes, the colors do show the problem; that much they got right. And they admit that the coloring will not help for big complicated models, but they still have faith.
In the next section, "Model Debugging Through Visualization Toolkits," they conclude:
Although these visualization toolkits offer an intuitive presentation of the low-level information directly provided by deep learning models, it remains difficult for humans to understand the behaviors of these models at a semantically meaningful level.

That says it. They do not understand what's going on, let alone explain it. There's more. In the next section, "Computational Methods for Interpretation and Explanation," they conclude:
In fact, the integration of these advanced computational methods with an interactive visualization ... remains a major challenge in real-world applications.

And then in "Visual Analytics for In-Depth Understanding and Model Refinement":
...research issues on how to effectively loop human into the analysis process and how to increase applicability of explainable deep learning techniques have not been fully investigated.

That means they need more money -- lots more money, because they will not succeed.
... However, little effort has been made in tightly integrating state-of-the-art deep learning models/methods with interactive visualizations to maximize the value of both. Based on this gap and our understanding of current practices, we identify the following research opportunities.
Opportunity 1: Injecting external human knowledge

The title says it all. Basically, all successful artificial intelligence consists in human intelligence being injected into the machines. I have said this for a number of years; it follows from the Entropy law. They repeat that idea in a later "Opportunity":
Opportunity 4: Improving the robustness of deep learning for secure artificial intelligence

As Schneier pointed out in the previous article, humans can do that, and the machines have no defense against it. Humans might be able to defend their machines, but to do so they need to understand what the machines are doing. These authors see the problem, but they are looking in the wrong place for solutions, as they conclude:
Deep learning models are generally vulnerable to adversarial perturbations, where adversarial examples are maliciously generated to mislead the model to output wrong predictions. An adversarial example is modified very slightly, and thus in many cases these modifications can be so subtle that a human observer cannot even notice the modification at all, yet the model still makes a mistake. These adversarial examples are often used to attack a deep learning model.
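What that quoted paragraph describes can be sketched in a few lines. For a linear scorer the worst-case perturbation is simply a tiny step against the sign of each weight (the general idea is usually called the fast gradient sign method); with thousands of input features the per-feature nudge can be minuscule and still flip the decision. The model and its weights here are entirely made up for illustration:

```python
# Toy linear scorer: positive score = class A, negative = class B
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    # nudge every feature a tiny step *against* the weight that reads it
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

D = 784  # e.g. a 28x28 image, flattened
w = [0.1 * (1 if i % 2 == 0 else -1) for i in range(D)]  # made-up weights
x = [0.51 if i % 2 == 0 else 0.49 for i in range(D)]     # "clean" input

clean = score(w, x)                    # comes out positive
adv = score(w, fgsm(w, x, eps=0.02))   # every pixel moved by only 0.02
print(clean > 0, adv > 0)              # True False
```

Each feature changed by 0.02 on a 0-to-1 scale -- invisible to a person looking at the picture -- but 784 tiny nudges, each aimed exactly where a weight is listening, add up and flip the classification. No understanding is being fooled here, because there was none to fool.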
... Accordingly, one research opportunity concerning explainable deep learning is to incorporate human knowledge to improve the robustness of deep learning models.
We hope that these proposed directions will inspire new research that can improve the current state of the art in deep learning toward accurate, interpretable, efficient, and secure artificial intelligence.
They know and see the evidence, but they believe otherwise. It's religion.