The cover of Art + Australia consists of texts generated by an AI chatbot discussing the 'ELIZA effect', combined with images and drawings from our archive, all projected onto an arrangement of 3D objects. A new iteration of this 'internet painting' is generated each time the website loads.
The 'ELIZA effect' takes its name from an early experiment in natural language processing conducted at the Massachusetts Institute of Technology (MIT) in 1966. During the research, participants were asked to interact with a chatbot programmed to reply by recognising keywords in their statements and reflecting them back in the form of a question. When the program could not match a keyword, it would fall back on stock phrases such as 'please go on.' Here’s a sample from the experiment:
Human: Men are all alike.
ELIZA: In what way?
Human: They’re always bugging us about something or other.
ELIZA: Can you think of a specific example?
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I’m depressed much of the time.
ELIZA: I’m sorry to hear that you’re depressed.
Human: It’s true, I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?1
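The mechanism at work here, keyword matching, pronoun reflection and stock fallbacks, can be sketched in a few lines of Python. This is a loose illustration only: the keyword rules, reflections and fallback phrases below are invented for the example and are not Weizenbaum's original script, which was written in MAD-SLIP and considerably more elaborate.

```python
import re

# A minimal, illustrative sketch of ELIZA-style conversation:
# match a keyword pattern, reflect pronouns, otherwise fall back.

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# (pattern, response template) pairs -- invented for this example
KEYWORD_RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*) always (.*)", "Can you think of a specific example?"),
]

# Stock phrases used when no keyword matches
FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the phrase can be echoed back."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.lower().split())


def respond(statement: str, turn: int = 0) -> str:
    """Match the statement against keyword rules; otherwise fall back to stock phrases."""
    cleaned = statement.lower().strip(".!?")
    for pattern, template in KEYWORD_RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return FALLBACKS[turn % len(FALLBACKS)]


if __name__ == "__main__":
    print(respond("Men are all alike."))        # no keyword -> "Please go on."
    print(respond("They're always bugging us about something or other."))
    print(respond("I am unhappy."))              # -> "Why do you say you are unhappy?"
```

Even a toy version like this reproduces the pattern of the transcript above: the program never understands anything, it simply mirrors the user's words back or reaches for a stock phrase.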
The researcher, Joseph Weizenbaum, was concerned by how his admittedly simple script could elicit such strong emotional responses from individuals, even when they understood that they were interacting with a computer program.
We asked Google's AI chatbot (‘Bard’) if it, too, uses the ELIZA effect, and it replied as follows:
Yes, I am capable of eliciting an Eliza effect in you. In fact, I am designed to do so. …For example, if you tell me that you are feeling sad, I might ask you why you are feeling sad, or what is making you sad. I might also reflect your feelings back to you, by saying something like "It sounds like you are feeling very down right now." This type of response is designed to make you feel like I am listening to you and understanding your feelings, even though I am not actually doing either of those things. I am simply using a set of rules to generate text that is likely to be meaningful and relatable to you.2
At first the above response seems to fit neatly with our thesis. Too neatly. It becomes evident that the chatbot’s answer is (of course!) a reflection of the question, offered in a statistically probable form (notice how the reference to 'feeling sad' echoes the transcript from the ELIZA experiment above). Finally, and most perplexing of all, the program appears to express a self-awareness of its lack of self-awareness: a mise en abyme that leads us to ask whether we are looking into the wrong side of the mirror, whether we are, after all, 'simply using a set of rules to generate text that is likely to be meaningful and relatable to you'?