In the Spirit of Collegial Inquiry...

updated: 9 Aug 99

Physics, Consciousness, and the Turing Test

TsC:   Since I'm a physicist, I'm eager to join the discussion [on universal theories]. I don't know if there will ever be a theory that can explain everything, but new theories are developed for a good reason: we see flaws in the old ones. That doesn't mean the old theories are wrong. Newton's law of gravity, for example, is still as good as gold today.

If you're a civil engineer, you'll never need to know Einstein's relativity to build a house. You don't have to worry at all that your house will collapse one day because you missed a term in Einstein's equations. Don't sue me if it does! {smile}

On the other hand, if you're an astronomer, you'll find the Newtonian predictions almost right in most cases, and the old theory often gives you a good feeling for what is going on. However, you'll get much more accurate results, and even predict new phenomena, by using Einstein's relativity. That's when you want the new theory.

We find new theories useful not just for their academic value. Very often, when theories are found to be insufficient to explain the experiments (which improve over time), it means new discoveries and new applications. Quantum mechanics, for example, is necessary for understanding semiconductor physics, without which we'd still be using ENIAC to type our emails!

New theories often push our perception of the world to its limit and then break through it. The expanded view is presumably more complete than the old one, because both the new and the old experiments can be understood. The new world, as we understand it from the new point of view, provides new tools and new insight, helping us in turn improve our experimental and theoretical techniques. We benefit from the new tools and ideas in the process. Then, new experiments find flaws in the old theories. This loop goes on and on ...

What would stop us from asking deeper questions is, in my opinion, the inability to see the flaws of our theories any more. That happens when our theory is good enough to explain everything in the experiments (in elementary physics) and our instruments just cannot get any more accurate, due to technological or financial limits. One obvious example is the superconducting collider that was cut by Congress. While we can't blame Congress or the scientists for this, we have already seen the problem. New scientific discoveries require exponential increases (over linear time) in accuracy, which require exponential increases in money and technology. The financial requirement will, if it hasn't already, finally outgrow our society.

So, before we find the ultimate (objective) truth of the universe and stop the endless pursuit of its secrets, I expect we'd be stopped by our budget first. In the meantime, let's find something new and use it for the good!

JI:   I found this statement to be incredibly significant. Thank you very much for this observation! I am sure that others as knowledgeable as yourself in advanced physics (and the actual politics and economics involved vis-a-vis the academics and government) may have come to a similar conclusion (or maybe not). I had never considered this angle before. Hopefully, as our knowledge and technology increase, we can find various "short cuts" and surprises. In other words, we can make significant discoveries by somehow cleverly undercutting the exponential growth in budget.

For example, the recent confirmation of the Bose-Einstein condensate was done in a university lab without a huge research budget. Correct me if I am wrong: I was under the impression that the experiment could have been done in a high school lab! The atom laser was also invented without a huge budget.

Perhaps the "big science" projects may have to wait; i.e. those which try to model the beginnings of the universe.

TC:   Are pi electrons in ringed molecules non-local, i.e., [able to] move freely about the pi orbitals? Are the atoms in a Bose-Einstein condensate non-local in this same sense?

TsC:   It is quite common for electrons to move from one atom to another. The same thing happens in metals. Here, 'non-local' just means that the electrons do not 'belong' to a particular atom/bond, but are shared among the whole molecule/metal.

Atoms in a B-E condensate are another story. First, we need to know that there are two types of particles:

Fermions:
    - They have half-integer spins: 1/2, 3/2, 5/2, etc.
    - They obey Fermi-Dirac statistics, which includes the Pauli Exclusion Principle: two such identical particles cannot occupy the same state. Examples: electron, proton, ..., He^3, ...

Bosons:
    - They have integer spins.
    - They obey Bose-Einstein statistics: no limit on the number in the same state. Examples: photons, He^4, ...

I'm not an expert on B-E condensate, but the idea is that when it is cold enough, all particles of a system can occupy the same lowest energy state. The system is thus in its ground state.

Electrons cannot do this, as they belong to the other group of particles. The ground state for electrons (in a metal) is different: when the lowest energy state is occupied by one electron, the next electron has to occupy a higher energy state, and so on. As a result, even in this ground state (at zero temperature), some electrons have very high kinetic energy.
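The contrast between the two statistics can be made concrete with a short sketch. This is an illustrative Python snippet (the function names and the sample values of (E - mu)/kT are my own choices, not from the discussion): it evaluates the textbook Bose-Einstein and Fermi-Dirac mean occupation numbers for a single state, showing that boson occupation grows without bound as the state approaches the chemical potential at low temperature, while fermion occupation never exceeds 1.

```python
import math

def bose_einstein(x):
    """Mean occupation of one state for bosons, with x = (E - mu) / kT."""
    return 1.0 / (math.exp(x) - 1.0)

def fermi_dirac(x):
    """Mean occupation of one state for fermions, with x = (E - mu) / kT."""
    return 1.0 / (math.exp(x) + 1.0)

# As x = (E - mu)/kT shrinks (a cold system, a state near the chemical
# potential), bosons pile up without limit; fermions are capped at 1.
for x in (2.0, 0.5, 0.01):
    print(f"x={x:5}: bosons={bose_einstein(x):8.2f}, "
          f"fermions={fermi_dirac(x):.3f}")
```

This is why, in a cold enough boson gas, a macroscopic fraction of the atoms can share the single lowest state (the condensate), while electrons in a metal are forced to stack up to high kinetic energies even at zero temperature.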

AMi:   Where do you think consciousness, whatever we understand by this, fits into the picture? ...

TsC:   I really have no idea about consciousness; it is such a complicated thing that I hope the computer scientists can tell us something about it. But since our mind 'exists' in our brain through electrical signals, I doubt the other forces (strong, weak, gravitational) have anything to do with it. As a result, I think our minds do not affect the universe.

JI:   Issues about consciousness and what it means to be aware impinge on the field of computer science when we are dealing with artificial intelligence (AI).

AI tries to emulate the reasoning ability or cognitive behavior of human beings. There are many computer architectures (neural net, parallel processing, etc.), programs (genetic algorithms, learning algorithms) in software, and theories which attempt to emulate this reasoning ability. By trying to emulate the characteristics of conscious human beings, perhaps we can gain an understanding of the nature of consciousness.

In 1950, mathematician Alan Turing proposed the Turing Test to determine whether a machine can truly think. By "think", we are also implying sentience. The Turing Test argues that, if a machine can perform the reasoning and cognitive functions of a human being under day-to-day, real-world circumstances, it is for all practical purposes a thinking being -- even though we don't truly know if it is really self-aware. Turing points out that, in our day-to-day lives, we don't often question whether or not a human being is truly conscious even though we are talking and interacting with that human. Therefore, why hold an intelligent machine to any higher standard?

Rather than debating whether or not a machine can truly think like a human being, Turing proposed a test. Place a human examiner in a room with a teletype machine. The teletype is connected to another human being and to a highly advanced computer. Neither the machine nor the other human being is visible to the examiner.

The examiner may then type in questions to both the human being and the computer. If the examiner, who is assumed to be a reasonably intelligent and perceptive human being, cannot tell the machine from the other human, the machine has passed the Turing Test. For all practical purposes, the machine can think!

As to whether the machine is truly conscious -- this question will probably not be answered by the pragmatic-oriented Turing Test.

No computer has yet passed the Turing Test!

The truth is that we have no provable definition for consciousness: Is it "self-awareness", "awareness of one's own existence", etc.? The debate has gone on for thousands of years and still hasn't resolved itself.

There are fundamentally two schools of thought in AI with regards to creating an intelligent machine. The Strong AI school believes that a computer can be conscious. According to Strong AI, consciousness is simply a series of operations represented by neurons or some other mechanism, which can be emulated by an algorithm. The Weak AI school believes that consciousness is not an algorithm and cannot be truly emulated by a machine. Essentially Weak AI claims that any machine that passes the Turing Test is not truly aware; it is only mimicking the processes of intelligence without being the real thing.

What school do I have a proclivity towards? I am neutral. The truth is, I simply don't know.

EM:   You note that no computer has yet passed the Turing test. There was a long developmental process in the chess playing skill of computers. I would guess the same process is now taking place with artificial intelligence.

JI:   The development of chess-playing computers is considered a branch of artificial intelligence. As you know, a game such as chess consists of an immense tree of different branches. In the real world, we can only store a portion of this tree which represents all possible moves. A chess-playing machine is simply searching this tree or a partial representation of the tree for the best possible move. There are algorithms called heuristics which eliminate the unfavorable branches and try to force a search on only the more promising paths in the tree. With increased speed and memory, it was only a matter of time before the computer would beat a grand master.
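The tree search described above can be sketched in a few lines. Here is a minimal Python illustration of minimax with alpha-beta pruning, the classic technique for eliminating unfavorable branches; the toy game tree and its leaf scores are invented for the example (a real chess program would generate moves and estimate positions with an evaluation function instead).

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning over a toy game tree.

    A node is either a number (a leaf's score) or a list of child nodes.
    The maximizing player picks the largest value; the opponent the smallest.
    """
    if isinstance(node, (int, float)) or depth == 0:
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never let play reach here
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximizer already has a better option
        return value

# Three candidate moves; the opponent then picks the worst outcome for us.
# Branch minima are 3, 6, and 1, so the best guaranteed score is 6.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # prints 6
```

Pruning matters because it lets the same hardware look deeper: in the last branch above, once the score 1 is seen, the remaining leaf is never examined at all.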

EM:   Sometime in the last decade I read that a computer program subjected to the Turing test succeeded in duplicating the responses of a paranoid schizophrenic patient, to the extent that the testers could not distinguish between the humans and the computer. I will concede that the responses of the schizophrenic can be very tight, limited, constricted and predictable. Even the irrationality is constricted and predictable. The comparison is with a terribly damaged, incomplete, and usually non-functioning human. I am aware that there is a spectrum, or continuum, of severity in schizophrenia, and that it can vary greatly in the same individual over time. I would expect the Turing test in this circumstance was applied to individuals clearly diagnosed.

CW:   I am curious: in what way is the Turing test an objective test?

EM:   I hesitate to respond to your question, in that I am not sure I understand it. Do you mean objective as opposed to subjective? If so, I must hold off until a later time; I can't give it a fair consideration just now.

JI:   Are you referring to ELIZA, or some other program? All ELIZA (which was written in a programming language called LISP) did was search a list for a suitable response, and produce a canned reply. ELIZA did not pass the Turing Test because it was very easy to deceive, and to get an inappropriate response, revealing that ELIZA was a very simple computer program.
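The keyword-and-canned-reply approach just described can be sketched in a few lines of Python. The rules below are hypothetical stand-ins, not Weizenbaum's actual script (which was far larger): each rule scans the input for a keyword pattern and, on a match, fills a canned template, which is exactly why such a program is easy to deceive.

```python
import re

# A tiny ELIZA-style sketch. These rules are invented for illustration:
# match a keyword in the input, then emit a canned reply, optionally
# echoing back part of what the user typed.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE),  "Tell me more about your family."),
    (re.compile(r"\bcomputer\b", re.IGNORECASE), "Do machines worry you?"),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am feeling sad"))    # How long have you been feeling sad?
print(respond("My mother called"))    # Tell me more about your family.
print(respond("Nice weather today"))  # Please go on.
```

Note that the program has no model of meaning at all: type "I am a teapot" and it will dutifully ask how long you have been a teapot, which is the kind of inappropriate response that gave ELIZA away.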

There are certain computer programs mimicking human behavior which have passed a Turing test, but only in a very narrow problem domain. That is, if you were to play a game of tic-tac-toe with a computer, you couldn't really tell if you were playing a computer or a human being. This is a narrow, limited field. In fact, they've trained pigeons to play flawless tic-tac-toe, so you couldn't tell if you were playing a pigeon, either.

I believe the true Turing test is in the general cognitive domain of human beings. By having a normal conversation with an entity -- about the weather, about men or women, about cars, about the latest news, etc. -- and assuming that the thing on the other end of the line wasn't psychotic -- you couldn't easily tell whether the intelligence on the other end was human or artificial. Eventually you might hit on some tidbit -- a joke, a phrase, a comment, etc. -- that would give it away. Unless the thing on the other end of the line was the genuine article, a true intelligence!

EM:   I have already noted in an earlier message the book, Gödel, Escher, Bach, [in which Hofstadter] addresses the question of selfhood, what is meant when humans say "I". This inevitably encompasses consciousness and a sense of identity, as well as awareness, sentience and feelings. Until a machine, robot, computer, can become a self, an identity, with an awareness of individuality in the same manner that humans do, I do not think they will be regarded by humans as anything but machines, however elaborate.

JI: Thank you for your thoughtful comments. I tend to agree with you here. I think we computer researchers must remember the word artificial in AI.
