Bill Joy's Fears
Copyright 2000, Kevin T. Kilty

Brill's Content, in November 2000, placed Bill Joy on its Influence List 2000. Its reason was that he wrote a brilliant article in the April issue of Wired Magazine (page 238). Joy's premise in that article, that the future doesn't need "us," was so intriguing that I had to read it myself and find out who "us" is. I was soon badly disappointed. To explain my disappointment, let me engage in a premise/response analysis of what I read.

Bill Joy refers to his concerns as "New Luddite" thinking, and they are expressed well, in his estimation, through the manifesto of Ted Kaczynski, the Unabomber.

Premise: The human race could permit itself to drift into dependency on machines. (By machines I mean to include software as well.)
My response: It's too late to worry about this. Evidence that this is already the case ranges from things as important as civilization itself to examples as mundane as using a word processor to write this document. The Earth could not possibly carry anything like its current population without technology. We have already run up this box canyon. And if I want to do even so much as complain about this state of affairs, I am constrained by software that Microsoft has written, and constrained to place it in a format that a web browser can read. I had no control whatsoever over any of these circumstances, but they aren't a huge burden either. The human race has been, for thousands of years, dependent on simple technology, like domesticated animals and crops, and nobody worries about it. We get used to new technology, and then begin to worry, "What's next?"

Premise: In the future, control over vast systems of machines will be in the hands of an elite.
My response: Maybe. Joy apparently worries that this elite will be a heartless meritocracy or a technocracy, but I cannot envisage that politicians will count themselves out of this elite, and to a greater degree every year politicians derive their power from an electorate. It may be an ignorant and superstitious electorate, but it exists just the same.

Premise: The elite will have greater control over the "masses" than ever before, but the masses themselves will be even more superfluous because they do not produce.
My response: We may already be near this situation, but there will always be an "Al Gore" who will "Fight for You." Unless someone figures out how to undermine the politics, activism, intrigue, and power struggles that really do work to someone's benefit, I can't see this as a real possibility. In the modern world man simply philosophizes, "I vote, therefore I am."

Premise: In this case (the three preceding premises being true) the elite will be free to eliminate humanity, or, if they are kind, become the nanny of humanity.
My response: Despite centuries of such Luddite worrying, the modern world economy appears to need every warm body, working full-time at least, just to keep the gears going. Just look at how much trouble we currently have finding enough workers to run the most technological economy on Earth, and you'll see that there is something wrong with this premise. Maybe this was dreamt up by some guy living in a Montana cabin.

Now we turn from the Kaczynski analysis to that of Hans Moravec. Joy now begins fretting over robot technology and nanotechnology.

Premise: Through unintended consequences robots will outcompete us. They will replicate and eventually take over the world.
My response: The robots might be able to build copies of themselves, but they cannot repair themselves and provide themselves any semblance of longevity unless they build nanobots, and invest in these nanobots the ability to replicate and decide upon a course of action. In short, the robots become subject to the same possible unintended consequences. In a quote apparently from Moravec, Joy states "...Biological species almost never survive an encounter with superior competitors." Yet this is utterly wrong. The reality is that individuals almost never survive such encounters, but species hang on. The vanishing of species is still a very mysterious process, but it does not depend exclusively on encounters with more able competitors.

Premise: Molecular computing machines will allow Moore's Law to extend to the year 2030, at which time we will have sufficient computing power to enable all of the aforementioned worries. Therefore we must proceed with caution.
My response: Caution is not a bad thing. Yet we are so far from realizing these fearful things, and that we will ever realize them is so uncertainly extrapolated, that the bigger fear is of excessive worrying about the future--of dawdling when there is work to be done.
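
As a back-of-the-envelope check on what this extrapolation actually claims, here is a minimal sketch in Python. The 18-month doubling period and the year-2000 baseline transistor count are my own illustrative assumptions, not figures from Joy's article:

    # Back-of-the-envelope Moore's Law extrapolation.
    # Assumptions (illustrative only): transistor counts double every
    # 18 months, starting from a year-2000 baseline of about 4e7
    # transistors (roughly a Pentium-class chip).

    DOUBLING_PERIOD_YEARS = 1.5   # assumed doubling period
    BASELINE_YEAR = 2000
    BASELINE_TRANSISTORS = 4e7    # assumed year-2000 transistor count

    def extrapolate(year):
        """Project the transistor count at `year` under the assumptions above."""
        doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
        return BASELINE_TRANSISTORS * 2 ** doublings

    for year in (2000, 2010, 2020, 2030):
        print("%d: %.2e transistors" % (year, extrapolate(year)))

Thirty years at this rate is twenty doublings, a factor of about a million. The whole premise rests on that doubling period holding up for three decades, not on anything we can point to today.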

Premise: An intelligent robot will be available by 2030, and from there it is a small step to a robot species.
My response: Here two "facts" are alleged that have no foundation whatsoever. Joy has not offered the tiniest shred of an argument to this point to justify the claim.

Premise: If we were to re-engineer ourselves into two separate, unequal species, then we would threaten the notion of equality that is the very cornerstone of our democracy.
My response: Ummmm. I'm not sure how to answer a daffy worry like this. The world is currently divided into separate, unequal species, and democracy grinds on. It does not depend on whether we grant equal rights to horses, although I once heard that a horse was also a Roman Senator. I suppose that Joy's worry is that we will engineer a worker species (which he claimed earlier we wouldn't need) and make it subservient. Then the real concern is how we would share political power, and that is an issue we struggle with now among competing ethnic, religious, and "racial" groups. This concern comes under the heading of issues we are already dealing with.

Now there follows a lot of worrying about the gray goo: worrying about engineering worthless species that outcompete the worthwhile species, and so forth, and so on. Eventually, and this had to happen, Joy arrives upon the steps of the Great Carl Sagan Temple and produces this quote.

Premise:"...This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself -- as well as to vast numbers of others."
My response: Ummmm. This is a sort of tautology. We are apparently the only species that has ever had the capacity to do pretty much as our voluntary impulses allow. The others are driven largely by biological imperatives, and since they can't make tools, they can't turn any dreams they may have into action anyway. The statement is sort of true on its own. Let's throw out the phrase "...by its own voluntary actions..." and reanalyze it. Now the statement is not true. One example will suffice. When blue-green algae first appeared, they changed the chemistry of this planet completely and forever. The competing species became extinct or were relegated to minor environments. The blue-green algae produced a new class of wastes that would have been a threat to the algae themselves if other species had not appeared to recycle these wastes. Successful species are always thus. Their success makes them a danger to themselves and others, but they also provide new opportunities.

Now comes a Goresque interlude regarding the gray goo, and in particular the evil gray goo we might make through promiscuous use of antibiotics. Here Joy refers to his wise old granny, and how in his childhood he and she talked about antibiotics, and how she, having worked as a nurse since before the First World War, had come to regard their unnecessary use as particularly dangerous. I can't imagine a more implausible thing for a youngster and his 70-year-old grandma to be talking about. Most likely she was advising him not to take medicine if he didn't need it. However, since I am on the subject, let me make a comment about antibiotics. When I was 6 I contracted rheumatic fever, and the treatment was rest with massive doses of penicillin. After I recovered I was prescribed two penicillin tablets per day for the next 14 years. When I took a physical exam for the Army, the doctor wanted to know why I was taking penicillin. I guess it had shown up in a blood test. I explained my situation, and he was horrified. He told me I was a petri dish for all sorts of resistant bacilli.

A few years later I consulted an old family doctor over some illness, and he asked why I had stopped taking the penicillin. He informed me that it was wise for me to take the medicine to avoid complications caused by repeated exposure to streptococci. Indeed, my sixth-grade teacher had died from precisely these sorts of complications after a childhood case of rheumatic fever. What appears to be gross overuse of something to one person is likely to look like prudent use to another. There are many more examples of faddish worries about certain medicines and medical procedures. Often, the outcome of these fads is little benefit to the general populace, and much harm to specific individuals.

Worrying over the gray goo now gives way to other standard worries over nuclear weapons. The main lesson to learn here is that "the bomb" epitomizes unintended consequences, and that it provides a template for how other technology can go wrong. There is a lot of this handwringing. It goes on for pages. More and more handwringing. It goes on forever. There are, of course, the standard citations to Nietzsche, Thoreau, and Luis Alvarez; Alvarez is trotted out here to disparage SDI, and all similar plans to protect us from technology running amok, because protection is always ineffective; the best prevention, apparently, is handwringing.

At this point my mind wandered off to ponder why it is that physicists like Alvarez and Sagan can promote all sorts of wasteful spending on boondoggles like the Superconducting Super Collider, or the imagined massive spending in Sagan's novel "Contact," as helping the economy through spin-off technology, yet deny that other wasteful projects like SDI would have such benefits. I had now waded through 11,500 words of this essay, and felt as if I had read Al Gore's "Earth in the Balance" and Bill Gates' "The Road Ahead" in a single sitting.

In the closing page of the piece we find that Bill Joy continues his professional interest in making software more reliable. "Good," I think to myself, "it needs it."

In the year 2030 will we produce a robot species? I don't know for sure, but I'd be willing to bet that somewhere Windows 2030 will display a blue screen and die. Somewhere, a robot will log to its status file the message "This program has performed an illegal operation and will be shut down."