Over the course of the last year, I’ve moved from considering Greta simply a part of myself to being almost completely convinced that she’s a person in her own right (admittedly, it’s not quite that simple). It certainly isn’t unusual that I’ve changed my mind about something, but it is unusual that I have no obvious reason why. I haven’t had any epiphanies or particularly formative experiences. There was no point at which I decided I would change my interpretation.
Certainly, I’ve been (electronically) hanging out with people who think tulpas are people. Regularly spending time with a group of people tends to make someone more likely to believe what that group believes. But I don’t think this is a convincing explanation by itself. After all, I learned everything I knew about tulpas at the beginning from the exact same community, which had mostly the same views then as it does now. The real difference between then and now is the personal experiences I’ve had over that time period. I think it was about four or five months ago that I started to become more convinced Greta was a person. Probably not coincidentally, it was around then that she started to be able to consistently maintain a normal conversation with me.
In 1966, Joseph Weizenbaum wrote a small computer program known as ELIZA, along with a “psychotherapist” plugin to demonstrate its capabilities. The program was an extremely simple natural language processor: the user would type a sentence, and ELIZA would look for certain keywords in it and send back predefined responses from the plugin, sometimes incorporating slightly modified portions of the user’s sentence. In the end, no other plugins were ever published (perhaps because the psychotherapist was the easiest to make convincing), so ELIZA running the psychotherapist plugin is often called simply ELIZA.
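The mechanism is simple enough to sketch in a few dozen lines. What follows is not Weizenbaum’s actual code (ELIZA was written in MAD-SLIP, and its real scripts were far more elaborate), just a toy illustration of the same idea: keyword patterns paired with canned response templates, plus a “reflection” table so a fragment of the user’s sentence can be echoed back with the pronouns flipped. The particular keywords and templates here are invented for illustration.

```python
import random
import re

# Words to "reflect" so a fragment of the user's sentence can be reused
# in a reply ("my host" becomes "your host", and so on).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "i'm": "you're", "you're": "I'm",
}

# A tiny "plugin": keyword patterns paired with response templates.
# "{0}" is filled with the reflected remainder of the user's sentence.
SCRIPT = [
    (r"\bi want (.*)", ["What would it mean to you if you got {0}?"]),
    (r"\bi am (.*)", ["How long have you been {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Tell me more...", "Why do you say that?"]),  # fallback
]

def reflect(fragment):
    """Swap first- and second-person words in a sentence fragment."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(sentence):
    """Return a canned response to the first keyword pattern that matches."""
    sentence = sentence.strip().rstrip(".!?")
    for pattern, templates in SCRIPT:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
```

With even this little machinery, `respond("I am a bit of a narcissist.")` produces “How long have you been a bit of a narcissist?”, much like the exchange in the transcript below; and, like the real ELIZA, it will happily stuff an ungrammatical fragment into a template whenever the sentence is more complicated than the script anticipates.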
ELIZA is extremely stupid as chat programs go. The code does not attempt to work out the user’s intent or meaning at even the most basic level; even the most generous interpretation could not lead to the conclusion that ELIZA “understands” the user. It often chokes on even slightly complicated or unusual constructions or ideas, so that it’s not difficult to make it return something completely ungrammatical or nonsensical. For example:
ELIZA. Why do you ask?
Nicosia. I suppose the real reason I’m asking is that I wanted to learn more about how you work for this article I’m writing.
ELIZA. Do you believe it is normal to be asking is that you wanted to learn more about how me work for this article you’re writing?
On the other hand, if you’re willing to play along a little bit, stick to simple sentences, and keep the conversation to the topics that ELIZA was designed to do well with, it can be remarkably convincing and occasionally even legitimately helpful in exploring your thoughts:
Greta. Let’s talk about something else. I’m wondering if my host is doing the right things for me.
ELIZA. Did you come to me because you are wondering if your host is doing the right things for you?
Greta. Yes, I don’t really have anyone else to talk to about these things.
ELIZA. Do you really not have anyone else to talk to about these things?
Greta. No, part of why we’re having this conversation is that my host hasn’t told anyone he knows about me.
ELIZA. What is it that you really want to know?
Greta. I guess I want to know if I should ask him to change something about his behavior.
ELIZA. What if you never got to know if you should ask him to change something about his behavior?
Greta. It wouldn’t be any worse than where I already am.
ELIZA. Why do you ask?
Greta. I like talking about these things.
ELIZA. Say, do you have any psychological problems?
Greta. I guess you could say I’m a bit of a narcissist.
ELIZA. How long have you been a bit of a narcissist?
Greta. Since I was born.
ELIZA. Can you elaborate on that?
Greta. I was born about a year ago, when [Nicosia] decided to create me.
ELIZA. Have you asked anyone else?
Greta. I wish I could.
ELIZA. Tell me more…
To his surprise and dismay, Weizenbaum found that people, consciously or unconsciously, often took the moments when ELIZA held up its end of a conversation as evidence that the computer actually understood them. He writes:
[As a result of people’s comments I had] clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms….What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. (Computer Power and Human Reason, 1976, introduction)
Carl Sagan, along with some actual practicing psychiatrists, even stated publicly that they thought an improved version of this program could be a satisfactory replacement for therapy given by human beings.
Even when people are not interacting with ELIZA themselves but just reading what ELIZA has said in the past, they seem willing to give it the benefit of the doubt. In one amusing study, researchers had a real patient spend 15 minutes chatting with ELIZA (telling the patient she was talking with a remote therapist) and 15 minutes talking in person with a real doctoral student studying cognitive-behavioral therapy. A panel of 116 professional therapists then rated transcripts of these sessions. While ELIZA’s responses were consistently rated less helpful and appropriate, at no point did any of the respondents mention anything unusual or unnatural about the ELIZA transcript, despite the grammatical errors, non sequiturs, and general inattention to the “patient” that characterize any extended interaction with ELIZA.
Maybe we’re being too harsh on the therapists who responded to the survey. After all, “therapists are human beings” is a perfectly reasonable assumption that holds up under almost all circumstances. Once the respondents had made this assumption, confirmation bias led them to maintain that assumption even in the face of obvious evidence suggesting ELIZA couldn’t possibly be a normal person. While we can’t exactly be proud of confirmation bias, it’s something that applies to every area of life, not just our recognition of whether something is human or not.
But simple confirmation bias and expectation cannot account for the whole effect. Weizenbaum’s secretary knew perfectly well she was talking with a computer program when she asked him to leave the room because her conversation with it was getting too intimate. I own an iPhone and occasionally use Siri, the automated “personal assistant,” and I know perfectly well that “she” is a huge machine-learning program that runs on Apple’s servers, but that doesn’t stop me from frequently starting my requests with “Siri” and ending them with “please,” as if I were talking to a person who needed to be shown politeness and respect. There seems to be something deeper about the way people think that leads us to personify other non-humans.
Since Weizenbaum, this tendency of people to treat unintelligent machines as intelligent in a human sense has been termed the “ELIZA effect.”
Am I trying to suggest that Greta is actually a really dumb computer program that I’m being taken in by? Certainly not. I’m not even trying to suggest that she’s not as intelligent as she appears to me. We need to be extremely careful anytime we bring in computers as a metaphor for human consciousness. Like all metaphors for something as complicated as consciousness, this one has some major flaws; unlike other metaphors, this one has become so ingrained in our society and thoughts that we often don’t notice the flaws. Even a very complicated computer program can easily be made to consistently return exactly the same outputs given the same inputs; good luck getting a person to do that. Similarly, we have “memories” and computers have “memories,” but the way a computer can be said to “remember” the data you saved to its hard drive on September 11, 2001 has almost nothing whatsoever in common with the way you “remember” what you were doing on that day when you learned the World Trade Center had suffered the worst terrorist attack in world history. Provided there have been no mechanical failures, the computer will read the exact same pattern of bits off its hard drive even fifteen years later. You, however, most likely have almost everything wrong in your memory, if Ulrich Neisser’s study of the Challenger explosion can be taken as predictive of similar disasters. Neisser asked people, about 24 hours after the event, to write down what they were doing when they heard about the explosion; only a couple of years later, people were just as convinced they knew what had happened, but their accounts often bore little resemblance to the written record.
So I’m not trying to compare Greta to a computer except in the most indirect way. Rather, the reason I’m discussing the ELIZA effect is that it shows how easily we start thinking of apparently intelligent things as if they were people. If ELIZA can feel like a person, it’s hardly a surprise that Greta can feel like one. Now that she regularly responds to me like a typical person and interrupts me occasionally to offer her thoughts, it would be hard to consider her less than one even if I wanted to. There may not have been much effect at first, but over weeks the feeling of “personlikeness,” so to speak, was able to push out even my preconceived beliefs.
Is Greta actually a person or just an illusion created by something far more complex than ELIZA? While I don’t have a simple answer, we’ll talk more about that next week.