In the Spirit of Collegial Inquiry...

updated: 3 Aug 99

Artificial Intelligence and Human Ethics

Part One

JI:   I'm a graduate of the University of Southern California and West Coast University (Los Angeles), with bachelor's degrees in Cinema and Computer Science and a master's in computer science, and am currently pursuing a bachelor's in physics and a doctorate in computer science. I often wonder if these degrees are a waste of time at my age (I am 43), but nonetheless, I am very unsatisfied intellectually, constantly looking for new things, new puzzles. I have a special interest in artificial intelligence (or stupidity, if that is what is required to make a machine "think" like a human being). Later in this [discussion] I would like to propose a question, if anyone is interested in this topic.

The current stream of thought seems to be on "crime and punishment". Many popular TV shows, books, and movies dwell on this topic. Here is my perspective. True criminals are a very, very small percentage of the human population; I would surmise the truly hardcore types are less than 5% of the population of America. Am I off here? Can somebody supply better statistics? The majority of us (American citizens) are decent, honest within reason, and rational. The indecent actions of criminals "stand out", or intrude, on the relative quiet of normal society. Therefore, the media tend to focus on what stands out (information theory: maximum surprise equals more valuable information).
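The information-theory aside can be made concrete. A minimal sketch (the probabilities below are invented purely for illustration): Shannon's self-information, -log2(p), quantifies why rare events "stand out" and feel more newsworthy than common ones:

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: the rarer the event, the more bits of 'surprise'."""
    return -math.log2(p)

# Hypothetical probabilities, for illustration only.
p_rare_event   = 0.05  # an uncommon, "surprising" event
p_common_event = 0.95  # the ordinary, unremarkable case

print(self_information(p_rare_event))    # ~4.32 bits
print(self_information(p_common_event))  # ~0.07 bits
```

On this rough measure, the rare event carries roughly sixty times the surprise of the common one, which is one way to read the claim that the media gravitate toward what stands out.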

We are obsessed with the actions of 5% of the population and ignore the more constructive, rational actions of the remaining 95%. The actions of 5% also result in laws that penalize the remaining 95% of the decent population. Witness the recent Columbine disaster -- because of the actions of two trench-coated psycho-kids, high schools all across the country will now be converted to armed camps. What about the 99.9999% of the kids who did not go on a shooting rampage?

Therefore, why do we dwell on these 5 percenters? Why not "accentuate the positive, eliminate the negative?" (a song from the 1940s). There will always be criminals. Let us isolate them when possible, but leave the door open for rehabilitation. Let us concentrate on enhancing the lives of the remaining 95%.

LDL:   Actually, I think, removing victimless crimes from the books, 5% is about right. But that is a lot of people causing major problems for the rest of us and it needs to be managed one way or another. We shouldn't just throw them in a cell and forget them.

With that said, 99.9% of my life is dedicated to enjoying it. It is not an obsession to spend a fraction of one's time in debate with others on such issues. I think, all in all, we tend not to spend a lot of time on this subject. It just happens to be the one we are on when you came in. We will quickly be onto something new ...

JI:   Now for a digression. A chapter in Arthur C. Clarke's book, Profiles of the Future, written prophetically decades ago, discusses the evolution of machine intelligence, and the possibility that machine intelligence may one day surpass that of humanity. As Nietzsche once said, man is a rope stretched between animal and Superman. Clarke argues that there is nothing to fear from our successors, because the higher the degree of intelligence, the higher the degree of cooperation.

But is this necessarily true? I witness the lack of cooperation in some of the high IQ societies and I truly wonder. Does our definition of intelligence have to be modified, or is it simply not true that cooperation and intelligence are directly proportional?

LDL:   To develop a computer that "thinks" like a human is a pretty unusual concept. To begin with, humans mix emotions, agendas, wants, needs, likes, and dislikes into their conceptual process and to emulate it would require a computer that is seeking meaning to its existence.

To be more precise, "We think, we think, and therefore, we think, we think we are!" The multi-facets of the human brain, consciousness, awareness, enlightenment, spirituality, and whatever must be simulated on a computer before the computer can simulate our thinking process. And I ask, "What for? If we are trying to make a computer a self-aware entity, isn't that a bit of a waste?"

Computers serve us better as tools and can do complex and lengthy calculations faster and better ... do we really want to give them emotions and mess them up?

ALP:   I find Clarke's futurist writings intriguing, though I'm not certain how intelligence became enmeshed and confused with emotions, social skills and/or altruism. I've read similar sentiments elsewhere, but am not at all convinced. Since causation doesn't necessarily march hand in hand with correlation, I find it difficult to generalize from even the noblest examples of humanity who happen to be, or have been, both highly intelligent and altruistic (e.g. Albert Schweitzer, Bertrand Russell, etc.) to all intellectually gifted individuals.

I can think of many who were surely the 'bright lights' of their generations even as they treated their spouses, children or friends very poorly. Einstein and Freud are two of the more famous and recent examples within the latter category. It is, after all, easy to cooperate with those in a position to advance one's career or reputation. IMHO, cooperative behaviour in private tells far more about a person's nature; public behaviour may be more of a combination of their nature and social aspirations, making it difficult to determine how much is altruistically motivated, and how much ego-driven. My 2 cents. :)

Intelligence is (for now) mysterious and complex, though I've little doubt that at some point in our neuroscientific quest, scientists will be able to decisively identify the neurological processes -- though perhaps not the thought content -- which contribute to the differences between 'average' and 'gifted' individuals. (My best guess is that the differences are minuscule.) If one instead takes the rudimentary conceptual route, intelligence can be viewed as 'a motor' which fuels all processes, perhaps even imbuing them with higher degrees of intensity.

Here, not only speed but fluidity and design contribute to the motor's higher level of functioning. Again, IMHO, this may or may not intersect with -- or specifically affect -- a person's preexisting cooperative tendencies, which I think tend to be born of evolutionary survival trends (i.e. where the tendency toward cooperation is selected for as it ensures the viability of all concerned, reinforced or de-emphasized by a person's upbringing, peer relationships and education).

Also, let's not forget that the majority of actively 'cooperative' individuals, such as those who comprise volunteer organizations, are likely not intellectually gifted -- not even where the demarcation point for 'giftedness' is a relatively low 120-125 I.Q. Given the higher number of so-called 'average' and 'bright' persons, I think that it is they who represent the best hope for humankind in the future. Even assuming that our collective intellect rises dramatically over the next 10,000 years, it's likely that there will always be a majority group of this nature.

Anecdotally, I've met quite a few gifted people who are intensely affected by the atrocities recounted in the news (I am, unfortunately, among these), or who harbour strong feelings about social issues ranging from homelessness and 'urban hunger' to the environment and animal rights. However, I can only think of one or two who have actually crossed the boundary between caring (which is in itself commendable and likely all too rare) and acting on their cooperative emotions. Okay, my 4 cents. {laugh}

LDL:   You've said "a mouthful" ... my answer to your two and four cent clichés! {GriN}

ALP:   Yeah -- I get carried away at times. You're not the first person to tell me this. :) It's feast or famine with me -- depends on my work schedule. This week has been sufficiently quiet to allow me to finally participate a little before I again return to work-induced hibernation. And so, another "mouthful" follows ... {smile}

On the topic of 2 and 4 cent clichés ... any more expensive than this and I would have avoided them like the plague. I quote the following from my much beloved The Dictionary of Clichés by James Rogers: "Among people who do pay attention to their phrasing, clichés can serve as the lubricant of language: summing up a point or a situation, easing a transition in thought, adding a seasoning of humor to a discourse." Well, usually ...

LDL:   The difference between the gifted and the not gifted being minuscule ... right on! Is it possible that the social skills and interrelationships of the less gifted take time and space in the brain? Is it possible that academics and giftedness are related to isolation and the subsequent time to spend on them? I find that many of us "super genius" types had pretty non-existent social lives, although there are exceptions to the rule.

ALP:   I'm definitely not an exception, though admittedly, this has been by choice. I've always preferred reading to socializing, though I've been known to engage in witty or at least tolerably coherent {grin} conversation with a small group of friends on rare occasions. In person, even. ;) Afterward, I always need some time to myself to recharge. I too wonder whether or not isolation in itself fostered whatever 'giftedness' I may possess.

OTOH, several members of my family, on both sides, are gifted, three profoundly (which, IMHO, is not necessarily such a great thing -- I'm open to disagreement on this), so I tend to fall back on the old nature/nurture in balance theory. Still, I cannot help but wonder whether more of a social life in my 'youth' would have led to my being an average-range, bright person. Could be. It certainly would have helped stave off the social lack-of-fit which eventually led me to seek out high IQ societies ... not that I don't enjoy this group. {smile}

I know that my social skills take up very little time and space, though I can't speak for anyone else. {laugh} If it holds that the gifted are generally better able to connect diverse subject areas one to the other (I hope I've correctly understood what you mean by 'interrelationships'.), perhaps this is directly related to neuronal firing patterns and speed. I simply don't know. Firing (and arrival) *time* is self-explanatory, but I'm most interested in what the concept of space implies in the brain, given that its storage capacity is so very large.

For example, when someone either comes up with -- or laughs at -- a complex linguistic pun with only a second or two to process, it's probably clear to most that they have some general intellectual facility. Somehow, this individual must all at once retrieve information from several areas of the cerebral cortex, throw it all together to create an electro-chemical cognitive soup, and finally assemble it into either a pleasing verbal form (in the case of the punster) or knowing laughter. The task calls for a broad knowledge base (lots of stored material) and blazingly fast connections, not to mention access time. Of course, it makes me question the innate 'wisdom' of evolution when I ponder the fact that a feat of such complexity often inspires groans. {grin}

LDL:   Has anyone explored the complexity it requires to "play the game" that most people seem to play? I know highly intelligent people have been said to have more sexual dysfunction, less ability to relate socially, emotionally, and sexually ... is that a given? How much brain matter does it take to learn academics versus that required to learn how to get laid?

ALP:   You know the Cole Porter song "Birds do it, bees do it, even educated fleas do it..." :) FWIW, their cortexes aren't all that advanced, and they seem to be pretty busy this Spring. {smirk} I don't know about the plight of educated humans.

LDL:   I promise one thing in these discussions ... as older members know ... a different perspective. I am supposedly as brilliant as any here, but I doubt it. I see people who have read more, studied more, have more academic knowledge than I could retain even if I studied ... but I am iconoclastic, I am me, all authority for what I say and conceive rests with me. I don't do other people's opinions, no matter how recognized ... {blush} I have had quantum physicists in my home and we have had amazing discussions [even though] I know very little about the field! {GriN!}

ALP:   Most of the gifted people I know lead well thought-out, eclectic lives working in non-academic fields. FWIW, a good friend of mine (a woman in her early 80s) completed high school, and subsequently combined clerical work with raising her family. She has never studied science, nor has she ever been a 'reader.' I doubt that she would score high on an I.Q. test, nor do I care. Yet, she embraces knowledge, thinks clearly and is by far the wisest person I know, sometimes cutting through my own 'educated babble' with one good comment. :) Not bad for an 'uneducated' person.

LDL:   I deal in logic, as pure as I can make it, and I don't always succeed. But it seems that clear thinking allows an ignorant person (by the standards of the educated people on this list) such as myself to contend in the same arena and ... once in a while I even win one. {GriN!}

JI:   Thank you for taking the time to comment on my last post. I feel that you expanded the definition of "cooperation" to include altruism as well. Which is just as well, perhaps. The standard science-fiction scenario (Karel Capek's R.U.R., Colossus: The Forbin Project, Terminator, etc.) is that when our artificial creations reach the level of human intelligence, they begin to fight us -- to wipe us out. In other words, they inherit the worst imaginable territorial instincts of mankind. What Dr. Clarke was arguing was that our creations, who would someday surpass us in intelligence, would be more benevolent because they would be more intelligent. Cooperation may not necessarily imply altruism.

For example, the nations of NATO cooperate -- they don't try to nuke one another -- not necessarily through an altruistic motivation, but because it is in their mutual self-interest to cooperate rather than compete in the military realm. Working together, the European nations have a combined population greater than the U.S. or former Soviet Union. The nations of East Asia are, militarily speaking, about where the Europeans were during the 1930s, with China playing the role that Russia once played vis-a-vis Western Europe. Cooperation, rather than competition, could avert an expensive arms-race and possibly mutually destructive nuclear conflagration.

In the economic sphere, companies may cooperate when it is simply too expensive to develop a new technology alone. They can form a special group, a legal but temporary coalition, to share resources in order to develop the expensive technology. The manned space station is one example; the coalition of large companies to develop the next generation of integrated circuits is another. These legal, non-monopolistic coalitions are examples of cooperation based on self-interest. When the chips are developed, the companies then go their own way, competing (ideally) in a fair and civilized marketplace.

Arthur C. Clarke, the more I think about it, is probably right. The fear that people have of intelligent machines surpassing us (they might wipe us out!) is based on fear of the unknown, fear of what a new species would do to us based on primordial, ancestral memory of territorial battles. If I were an ultra-intelligent machine, I would have no need to wipe out humanity, my creators. Why would I want to kill my own parents?

I would let them live out their own lives in happiness. I, as an ultra-intelligent machine far surpassing humanity's own intelligence, would do everything I could to ensure that humanity lives out its years achieving its dreams. While Man Version 2.0 goes out exploring the stars and discovering new principles of the universe, Man Version 1.0 will retire in paradise, living out his or her every fantasy -- albeit a life of limitations imposed by an inferior intelligence. There is no need to wipe out humanity, only to cooperate with it.

JCC:   I always found those intriguing, particularly stories along the lines of Colossus. Certainly contemporary AI seems anything but threatening; I see great benefits in possibly creating analogue models of the human brain for study of pathologies as well as optimization of human processing of the sea of experience surrounding us. On the question of whether a more "evolved" intelligence will be more compassionate and non-threatening, the results may be a while in coming. It has been determined that Neanderthal and Cro-Magnon co-existed for a span of time along the eastern Mediterranean coast. Did the newer systematically hunt down the older species, or gain the upper hand simply by outdistancing and effectively marginalizing the other in subtle competition?

CW:   Susceptibility to disease might be another reason.

JI:   Thank you for your response, Julia. Your analogy of Neanderthal vs. Cro-Magnon is something I never considered before. We know that somehow the Neanderthals didn't make it. I recall reading somewhere that, indeed, the Neanderthal and Cro-Magnon coexisted for a time. My guess is that there was subtle competition, just like the subtle economic competition between nation-states or cultures today; there was a winner and a "loser", but this probably occurred unnoticed over a span of hundreds of generations, over perhaps several millennia. It's not something easily recorded by pre-civilized human beings, let alone anthropologists after the fact.

Now speculating on the future, I believe that the advent of "intelligent" machines will take place slowly, gradually and subtly. It may even go unnoticed until it is a fait accompli. Such machines will be so fully ingrained in our culture and economy that we may take for granted that they can pass a "Turing Test". At first we may sense that intelligent machines embody a radically different sort of intelligence from our own. They will not necessarily be territorial, nor will they necessarily be "aggressive" in the human sense. They will desire to survive, a fundamental requirement of any living entity.

They will be curious, since information processing is part and parcel of intelligence. Beyond that, their true motivation will not be animalistic in nature. They may seek to reproduce, but in a more abstract way than an animal; e.g., a computer virus seeks to copy and attach itself; a work of literature seeks to be read and duplicated by the publisher, etc. We will initially create these intelligences to serve us in some way -- as reproducing, von Neumann space probes, as domestic servants and factory workers, as knowledge workers in industry, and so on. At some point, either as a networked entity or a series of linked but independent entities, the level of complexity will be reached where these artificial creations will evolve into some conscious entity.

JCC:   I agree, and also don't feel this is a threat of the R.U.R. genre. Something more of the scenarios of Greg Bear's "binary millennium" or A.C. Clarke's Childhood's End. The real triumph is the triumph of all that is best in us, and not simply of the human form as we now know it. I recall also a series of novels beginning with The Cybernetic Samurai, if I correctly recall the title ... very thoughtfully written.

No doubt, the nascent forms of AI are already extending the reach of individual human beings as an important partner. The wealth of experience of the veteran aircraft engineer, thirty-plus years of technical problem-solving skills, decision trees, etc ... can be captured for the benefit of novices in the field, and continually built upon and improved. In the broader picture there is Culture, the whole body of knowledge, images, concepts ... transmitted with considerable effort and time across the generations.

Our written records constitute the seed-concept of AI, to facilitate the immortality and growth of mind down the ages, even though the individual vessels perish after a few decades at best. Computer networks are simply the exponentiation of the same trend. Quite impressive, too! As a child I dreamed about Mars flights but didn't imagine the scope and ubiquity of computers at the close of the century.

CW:   Wanting to survive is not necessary for an intelligent machine. You will want to turn the thing off sometimes. It also isn't necessarily a quality of living things. Are you worrying about surviving when you are asleep and not dreaming (or even when you are dreaming)?

Proceed to Part Two
