Sunday, March 16, 2008

On The Concerns of Joseph Weizenbaum by Anthony McCarthy

People who have endured much of my writing know that I’m interested in the moral distinction between living beings and objects. Ok, yeah. So I’m obsessed with the questions about it. Most of the time the questions about the difference are put either in quasi-mystical terms or, on the other side of the coin, in would-be rational-materialist terms. I generally don’t use either. My interests don’t fall into those two habits of thinking; for me the public discourse on the subject is, most usefully, a matter of political ethics. The lesson I gather from reading the news and history is that the first step towards creating hell on earth is to either ignore or deny that living beings are in possession of inherent rights and worth. Sentient beings are in a different realm of existence from inert matter; they should never be items of mere commerce. Non-sentient life is also not wisely treated as if it were merely inert matter. I won’t go into that here, though.

In humans, who are able to reason for themselves, those rights include the personal exercise of that vitally important faculty. The ability to think independently is as much a right as access to adequate nutrition, clean water and the other things necessary to sustain life. In just about every case, obtaining nutrition and the other physical requirements of life depends on the right, individually and collectively, to practice reasoning.

I have no doubt, at all, that seeing living beings as things without these inherent rights will be taken as permission to act badly, as badly as people figure they can get away with for their own selfish reasons. It is only the full acceptance that other people and living beings possess rights that prevents bad behavior. Trying to prevent harm by analyzing ethics in terms of transactions among selfish entities does nothing to solve the problem; it just makes the basic struggle a slightly more complex race to hell.

During a session at one of the more articulate and rational of the materialist blogs, where I sometimes go to test the weaknesses in arguments, someone asked my opinion of Alan Turing, presumably as the inventor of the famous Turing test. My answer was that just because a machine appeared to be thinking in the same way that human beings do didn’t mean that it really was, because we don’t have a real understanding of what thinking is. We don’t even know whether what we call “thinking” is one kind of event or many, or whether we might mistake one of these kinds for another, or vice-versa.
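A toy keyword-matching script, in the manner of the chatbots descended from Weizenbaum’s Eliza, shows how easily surface behavior can be mistaken for thought. This is a minimal sketch for illustration only; the patterns and canned replies are my own invention, not anything from an actual historical program:

```python
import re

# Illustrative keyword rules (my own toy examples, not Eliza's actual script).
# Each rule pairs a pattern with a template that "reflects" the match back.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned 'reflection' by shallow pattern matching.

    No model of meaning is consulted anywhere; the appearance of a
    listening mind is produced entirely by string substitution.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I am unhappy about my job"))  # reflects the matched phrase
    print(respond("The weather is fine"))        # no keyword: canned fallback
```

A few lines of string substitution can keep up a passable imitation of attentive conversation, which is exactly why judging “thinking” from appearances settles nothing.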

For someone in the middle of the 20th century to assert that we could make that distinction on the basis of appearances was only slightly more unrealistic than for someone to assert it in 2008. We don’t have, and I suspect probably never will have, sufficient understanding of what thought is to consider that judgment to be something within science. Pretending that the conclusions drawn from that kind of “test” are reliable is one of the failings of much of what gets called science these days.

Whether it is as bad an idea to believe that a machine can think as it is to leave unstated that people do not have the same moral status as inert objects is not a question that can be answered with science, but it is one that we are going to have to answer, due to impending exigencies.

So I was a bit sad to read the obituary of Joseph Weizenbaum, who was both an early pioneer in artificial intelligence and one of its early critics. There isn’t time to go into much of what he wrote on the subject, but his thinking should be taken seriously by anyone interested in these issues. From what I’ve read, one of the early things that alarmed Weizenbaum about his field was that people mistook the psychoanalytic game he invented, Eliza, for a thinking entity. It was to his credit that he was wise enough to recognize the dangers in that kind of mistake. It’s a very rare academic who can exercise that kind of objective critical wisdom about their own work. This is a succinct statement of the scope of the problem:

"The relevant issues are neither technological nor even mathematical; they are ethical," he told the Globe in 1981. "Since we do not now have ways of making computers wise, we ought not now give computers tasks that demand wisdom." Mr. Weizenbaum advised outlawing "all projects that propose to substitute a computer system for a human function that involves interpersonal respect, understanding, and love."

By contrast, in the obituary, one of his colleagues at MIT, Patrick Winston, said, "Viewed from the distance of time, much of what he worried about seems quaint today, especially his concerns about whether experience-lacking computers would make bad decisions on behalf of us experience-grounded humans."

There is every reason to believe that as the national security apparatus buys products from politically connected, profit-making I.T. firms which purport that those products can think in just these ways, Weizenbaum’s concerns will quickly become undeniably less quaint. Even more ominous is the prospect of profit-making businesses using the same kind of technology in their efforts to wring the last cent out of the labor of humans, to dispose of those they suspect of being insufficiently profit-generating, and to pillage the living environment. The legal fashion these days is to allow anyone with money and power considerably more leeway in such matters than is safe for a decent society.

The evidence available from the real-life use of psychological “science” could provide a useful model of what that could come to be like. As an example, anyone who has been subjected to the use of “psychological science”* by business or the courts might have a good idea of the problems that will come from having computers make decisions about you. People have lost jobs, their children, their freedom and, infamously in such places as Texas, their lives on the basis of the application of what was officially, but not really, called psychological science. There is every reason to suspect that as I.T. becomes an established industry, the financial, and so legal, pressures for it to become officially “valid science”, and to be retained as such, will be even more difficult to resist.

It is unwise to give the law and business the power to automate decisions about the freedom and rights of real humans, because experience shows they do not have the wisdom to make that choice. When the word “science”, with its prestige and unthinking social respectability, is injected into the discussion by those who can profit financially from the adoption of a technology, judges can go all gooey in the head. And wave the prospect of a few dollars in their faces, and businessmen have been known to do anything. Wisdom would be to keep important decisions impinging on real people and other living beings as far from being automated as possible. My guess is that your chances are better with a mediocre person forced to make a decision without recourse to simulated thought than with a machine programmed by anyone. I have a feeling that once the habit of relying on computers to simulate thought for us is entrenched, their conclusions will be even harder to overturn than a judicial ruling.

* In some ways this kind of psychological testing might be a good model of what allowing automated “thought” to make decisions about people’s lives would look like. Despite what another of my opponents at another materialist blog asserted, what might be the most absurd of them all, the Rorschach Test, is still in wide use. Commercially produced and administered psychological tests can be ordered by courts, and often are, in all kinds of cases. Businesses and educational institutions often use them with full legal authority in hiring and retention. Often there is little to no scientific evidence that a test reveals anything real at all; some of the most widely used tests have either no validation or quite ambiguous validation. The still-used Rorschach Test began as a parlor game in Vienna, for Pete’s sake. Apparently, though they would have passed into the public domain decades ago, there is still some attempt to suppress access to the images themselves.

Note: There are other citations I'd like to make but, as they are contained in PDF files and those are making my computer crash just about every time I try to open one these days, I will not be using them here.