Remembering Eliza, one of the first chatbots: Lessons, warnings it holds for AI today

In 1966, at a lab at the Massachusetts Institute of Technology (MIT), computer scientist Joseph Weizenbaum unveiled one of the first chatbots in history: Eliza.

It ran on the IBM 7090, a computer that was among the most advanced at MIT at the time, and could be accessed through a typewriter-like terminal.

Eliza had different “scripts”, or ways of interacting, and could mimic a math teacher, a poetry teacher or a quiz master, among other things. But its most famous script was called DOCTOR, which emulated a therapist.

Weizenbaum would later write about the anthropomorphisation of ELIZA, which, in his own words, led him to “attach new importance to questions of the relationship between the individual and the computer”. Eventually, the myth-making around it reached such an extent that the tendency to attribute human qualities to computers came to be known as the ELIZA effect.

Weizenbaum himself later cautioned against excessive reliance on computers, arguing that no matter how impressive the machines seemed, what they pulled off could not amount to real understanding.

These concerns, and the debates that followed, still matter today as we navigate a world with rapidly developing Artificial Intelligence (AI) tools.

Weizenbaum was Jewish and fled Nazi Germany with his parents, arriving in the United States in the mid-1930s.

In 1955, Weizenbaum was part of a team at American conglomerate General Electric that automated some key banking operations for the first time.

He also developed a programming language called SLIP, or “Symmetric List Processor”. This was part of an approach that worked with sentences, rather than with the numbers that computing had largely dealt with until then.

Weizenbaum was invited to join MIT’s Project MAC, a Computer Science lab. Among other things, it was among the first to build an interactive time-sharing system, where multiple users could work on a single computer simultaneously.

Weizenbaum built ELIZA at the New England university between 1964 and 1966.

What Eliza did

In the introduction to Computer Power and Human Reason: From Judgment to Calculation (1976), Weizenbaum breaks down how Eliza works in extremely simple terms.

“I composed a computer program with which one could converse in English. The human conversationalist partner would type his portion of the conversation on a typewriter connected to a computer, and the computer, under control of my program, would analyse the message that had been transmitted to it, compose a response to it in English, and cause the response to be typed on the computer’s typewriter,” Weizenbaum wrote.

He explains that the program consisted of two parts: a language analyser and a script.

A script, he describes, is a set of “rules rather like those that might be given to an actor who is to use them to improvise around a certain theme. Thus, Eliza could be given a script to enable it to maintain a conversation — play a specific conversational role.”

Each time the user typed something, ELIZA would examine it, looking for keywords that had entries in the currently active script. The highest-ranked keyword then selected a transformation rule, and parts of the user’s own sentence were substituted into a canned response.
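The original program was written in MAD-SLIP for the IBM 7090 and its code is not reproduced here, but the mechanism described above (spot a keyword, rank it, flip the pronouns in the user’s own words and slot them into a canned template) can be sketched in a few lines of modern Python. The keywords, ranks and reply templates below are invented for illustration and are far simpler than the real DOCTOR script.

```python
import random
import re

# Illustrative ELIZA-style script: invented keywords, ranks and templates,
# far simpler than Weizenbaum's actual DOCTOR script. Higher rank wins when
# several keywords appear; "%s" is filled with the part of the user's
# sentence that followed the keyword.
SCRIPT = {
    "boyfriend": (3, ["YOUR BOYFRIEND %s"]),
    "depressed": (2, ["I AM SORRY TO HEAR YOU ARE DEPRESSED"]),
    "always":    (1, ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
    "alike":     (1, ["IN WHAT WAY"]),
}

# First-person words are flipped to second person before being echoed back,
# so "my boyfriend made me come here" returns as "YOUR BOYFRIEND MADE YOU COME HERE".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

FALLBACK = ["PLEASE GO ON", "TELL ME MORE ABOUT THAT"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a fragment of the user's input."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split()).upper()


def respond(sentence: str) -> str:
    """Pick the highest-ranked keyword present and build a canned response."""
    cleaned = re.sub(r"[^\w\s]", "", sentence.lower())
    words = cleaned.split()
    best = None  # (rank, keyword, templates) of the best match so far
    for keyword, (rank, templates) in SCRIPT.items():
        if keyword in words and (best is None or rank > best[0]):
            best = (rank, keyword, templates)
    if best is None:
        return random.choice(FALLBACK)
    _, keyword, templates = best
    # Everything after the keyword is reflected and substituted into the template.
    tail = cleaned.split(keyword, 1)[1].strip()
    template = random.choice(templates)
    return template % reflect(tail) if "%s" in template else template


if __name__ == "__main__":
    for line in ["Men are all alike.",
                 "Well, my boyfriend made me come here.",
                 "He says I am depressed much of the time."]:
        print("> " + line)
        print(respond(line))
```

Fed the user’s lines from the exchange quoted below, this sketch produces IN WHAT WAY, YOUR BOYFRIEND MADE YOU COME HERE and I AM SORRY TO HEAR YOU ARE DEPRESSED by doing nothing more than reshuffling the user’s own words; there is no understanding anywhere in the loop.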
One interaction of the DOCTOR script, which was that of a psychotherapist, was reproduced by Weizenbaum in a paper on ELIZA. It goes as follows, with the text in all caps being ELIZA’s responses:

“Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I am depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
…”

DOCTOR ended up becoming ELIZA in popular perception.

A number of practising psychiatrists seriously believed the DOCTOR script could grow into a nearly completely automatic form of psychotherapy, Weizenbaum wrote.

“If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists,” one therapist wrote at the time.

Weizenbaum also documented an incident in which his secretary started conversing with ELIZA in the DOCTOR script.

“After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people’s most intimate thoughts; clear evidence that people were conversing with the computer as if it were a person…”

Weizenbaum wrote that he had not realised that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people”.

Eliza reimagined

With Computer Science evolving rapidly, the code that constituted the original ELIZA was never published or reproduced.

The original code was only discovered in 2021, among a stack of Weizenbaum’s papers. It had to be copied by hand by Stanford professor Jeff Shrager, who now works on a digital archival project on ELIZA along with a team of multi-disciplinary academics across the world.

What it means today

ELIZA is critical in Computer Science history, as it was among the first programs to put the Turing test (a measure of how human-like a machine’s responses are) to a practical demonstration by replicating human language.
Of course, it also set off the obsession with getting computers to talk and interact with us, leading to this moment in history when we can generate personalised videos, images and text at the drop of a hat.

Digital Humanities professor David Berry of the University of Sussex, who is part of the digital archiving project along with Shrager, tells The Indian Express that “ELIZA is a 420-line program written in an obscure programming language which is radically different from the LLMs (large language models) like ChatGPT, a gigantic system with billions of parameters”.

“Eliza can run on any computer today and consume hardly any electricity, whereas ChatGPT consumes vast quantities of power,” Berry said.

Contemporary LLMs, which are powered by huge data centres, require about 0.14 kilowatt-hours (kWh) of electricity to generate a single 100-word email, equal to powering 14 LED light bulbs for an hour, as per calculations by The Washington Post.

Berry also talks about how ELIZA “offered a crucial early warning about human susceptibility to computational deception”.

He adds that “examining ELIZA’s source code helped to demonstrate that convincing human-computer interaction does not require genuine comprehension; rather, it can emerge from clever pattern matching and careful interface design that exploits human cognitive biases”.

“Even modern large language models, despite their impressive capabilities, fundamentally operate through statistical pattern recognition rather than genuine understanding,” Berry says.