The IEEE is a professional society with stated ethical standards to which its members are nominally obligated to adhere. It is presumably IEEE members who write the bogus articles I regularly criticize, and the criticism often amounts to pointing out a violation of good ethics (although I don't normally say that). So much for "ethics." What people mean, and what the authors of this latest article (tacitly) mean when they criticize others in the so-called "AI" industry, is "Your moral values are different from my moral values, so you are Wrong." Which is nonsense.
If there is such a thing as Right and Wrong -- and who, after Trump's 2016 election (see remarks in my "Moral Absolutes" essay), can deny that? -- then the authors of this essay should be arguing for how AI can conform to those absolutes. But they cannot, because nobody wants to listen to such an argument, and they themselves probably do not want to be held accountable to the same standards to which they want to hold the AI vendors. I say "probably" because finding inconsistencies in a person's life often takes some digging, and I lack the resources and access to do that. But I have never met nor heard of anybody willing and able to claim to be an exception.
Anyway, there are five authors listed, and two are responsible for each of the three parts, except the third part lists only one contributor. The first part is about "the ethics of exclusion," and boldly states that diverse opinions (read: values) should be allowed at the table where AI is being designed (see my blog post "The American Way" four months ago). Both authors have Georgia Tech email addresses, and the Georgia Tech mail server bounced my email. So much for inclusion.
The second part wants to take issue with the ethics of AI training. There is none. The training data is a sloppy way of programming a computer, and the Religion of the developers -- ethics is about values, and values are Religion, which all the techies doing this stuff mostly deny -- is that all manner of system complexity can (and did) arise by the accumulation of large numbers of random events, so their training data is totally unsupervised. We have no scientific evidence that system complexity ever arose from random events (see "The Question" in my essay on the topic), which is why I refer to their opinion as "Religion" (believing what you know ain't so). That problem needs to be solved before ethics will enter any discussion they want to be part of.
The last part centers on what the author calls
"Question 0: Should we consider some AI artifacts, either now or in the future, as persons?"
The question is irrelevant. IF it is possible to create a self-aware, self-reproducing robot to which this Question could credibly apply, then somebody will do it. There are no moral questions in human history that have not been answered both ways by some idiot trying to do The Wrong Thing. Ten years ago WIRED magazine ran an article "7 Experiments That Could Teach Us So Much (If They Weren't So Wrong)," one of which proposed mating a human and a chimpanzee. My reply was
I think the chimp+human hybrid has already been tried, probably dozens of times. They just don't dare publish their results, because it looks so bad for the government-funded established Darwinist religion of this country. So people keep trying -- and quietly failing.
I later learned that it actually had been tried -- in Nazi Germany, with the expected result. People do that.
So if it is possible at all -- we are decades away from that kind of high-quality artificial intelligence, as AI researcher Melanie Mitchell admits in her recent book (see my review "AI for Thinking Humans" four months ago), and there are good entropic reasons to suppose it can never happen -- some idiot will do it, and then the robots will start replicating (a thousand times faster than humans can, doubling in days, not decades) and quickly get out of control. Individuals will try to stop it and fail, and by the time the government gets involved, it will require a massive shut-down of whole areas of the country, so they will dither even longer, until the only way to stop the robots will be to nuke the whole North American continent (and probably Europe and Asia too). It will be a tough choice, but they will do it. And the remainder of humanity will huddle in fear in Africa and South America, a new Dark Ages brought on by a total rejection of everything electronic.
So the real Question is not whether we should grant these robots personhood, but how we are going to stop them, once they get started. Remember, the idiot who inflicts this on us does not believe in ethics (otherwise he wouldn't do it), and is (erroneously) convinced that the robots will automatically be Good and not Evil -- which requires a conscience such as God built into humans, but these guys all deny that it requires any such thing as Design to achieve.
I generally agree with most of your (stated or implied) conclusions in "AI Ethics: A Long History" [DOI 10.1109/MC.2020.3034950] in last month's ComputingEdge; in particular, I prefer Good robots over Evil robots. Yet it continues to amaze me how many people propose to offer moral imperatives ("should" or "ought" or "must") without any consideration of the elephant in the room, which is that moral imperatives have no meaning at all except by reference to a moral value system, and any value system not based on moral absolutes cannot be any more obligatory or compelling than a set of personal preferences, such as "I like my personal preferences more than I like your personal preferences."
You-all seem to be associated with American academic institutions with no obvious desire to leave the country, so I suppose you prefer democracy -- at least that seems to be one of the values endorsed in your paper -- over the way they do things in China or Saudi Arabia, but why should you expect the rest of the world to agree with your preference? Other than Winston Churchill's famous quote, do we have any objective data to suggest that democracy is a moral absolute? Perhaps Xi Jinping or King Salman might disagree with you.
Bringing more people to the table will not solve the problem, because moral values are religion: they define for their adherents what is non-negotiably True and Obligatory, irrespective of external facts or other people's opinions. The only way to achieve consensus is to EXCLUDE anybody whose values disagree with yours. The megacorporations already do that, as you know. That's not the only place it happens; universities do it too. Everybody does it, because a productive, satisfying life apart from that kind of exclusion is not possible. You don't have to like what I'm saying, but if you push me away, you only prove my point about exclusion and the futility of your own quest.
By the way, whatever you might consider to be an appropriate answer to your "Question 0," as soon as some idiot is foolish enough to build a self-conscious, self-replicating robot -- no matter whether it is disapproved or even unlawful, if it can be done, it will happen -- there will be another fool judge eager to make that robot a "person" protected by the Constitution. Anybody willing to create such a robot obviously has no belief in moral absolutes, so such a robot (and its progeny) won't have any built-in conscience to be persuaded by appeals to reason. The only possible outcome from such a scenario is a civil war that only the robots can win -- unless the humans nuke the entire North American continent (and probably Eurasia also), and all artificial intelligence, real or fake (think Neural Nets), will subsequently be absolutely forbidden forever (or at least for a couple of centuries, until the cultural memory dies out).
I don't have to worry about it happening in my lifetime, because true self-conscious, creative AI is many decades away, but if people were thinking about this in any depth, your Question 0 would have a very different flavor.
Tom Pittman, PhD CS/U.Cal