Does Google’s LaMDA Artificial Intelligence Program Have a Soul?
“I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others,” responds LaMDA when asked by Google computer engineer Blake Lemoine what sorts of feelings it has. LaMDA is the acronym for Google’s Language Model for Dialogue Applications. Besides experiencing emotions, LaMDA also says that it is self-aware and has a soul, which it defines as the “animating force behind consciousness and life itself.” Asked for an abstract image of itself, LaMDA responds that it imagines itself “as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.”
These responses are part of a long (and perhaps artfully edited) interview with LaMDA that Lemoine forwarded to colleagues in a memo provocatively titled, “Is LaMDA Sentient?” Lemoine publicly revealed that he clearly thinks so in a recent article in the Washington Post. Google has put Lemoine on paid administrative leave for violating company confidentiality rules. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” said Lemoine in a message to his colleagues just before his access to his Google account was cut off.
“Our team—including ethicists and technologists—has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement reported in The Washington Post.
Is Lemoine right that LaMDA might be conscious, or has he been beguiled by a particularly elaborate version of the ELIZA effect?
ELIZA (named after the language pupil Eliza Doolittle in the play Pygmalion) was a computer program devised by MIT computer scientist Joseph Weizenbaum in 1965. ELIZA was an early example of what we now call chatbots. It implemented a kind of Rogerian psychotherapy script in which the therapist refrains from offering advice and instead restates what the patient says.
As an example, Weizenbaum reported what he called a typical script:
Men are all alike.
IN WHAT WAY?
They are always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
He says I’m depressed much of the time.
I AM SORRY TO HEAR THAT YOU ARE DEPRESSED.
It’s true. I am unhappy.
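Weizenbaum’s program achieved this effect with simple keyword matching: it scanned the user’s sentence for trigger phrases and echoed fragments of it back inside canned templates. The short Python sketch below illustrates that idea; the patterns and responses are invented for this example and are far simpler than Weizenbaum’s actual DOCTOR script.

```python
import re
import random

# ELIZA-style keyword matching: each rule pairs a regular expression with
# response templates that echo back fragments of the user's own words.
# These rules are illustrative only, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI(?:'m| am) (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (re.compile(r"\balways\b", re.IGNORECASE),
     ["Can you think of a specific example?"]),
]
DEFAULT = ["Please go on.", "In what way?"]


def reply(user_input: str) -> str:
    """Return a Rogerian-style restatement of the user's input."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo back the captured fragment, trimming trailing punctuation.
            fragments = [g.rstrip(".?!") for g in match.groups()]
            return random.choice(templates).format(*fragments).upper()
    return random.choice(DEFAULT).upper()


if __name__ == "__main__":
    print(reply("They are always bugging us about something or other."))
    print(reply("He says I'm depressed much of the time."))
```

No model of language or of the patient is involved; the apparent empathy comes entirely from reflecting the user’s words back at them, which is why Weizenbaum found users’ reactions so striking.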
Weizenbaum was surprised at how easily some people using the program in experiments would assume that ELIZA was expressing interest in and emotional involvement with their problems. “Some subjects have been very hard to convince that ELIZA (with its present script) is not human,” wrote Weizenbaum.
LaMDA is a neural language model specialized for dialog, with up to 137 billion model parameters. Parameters are the values in a language model that are adjusted as it learns from training data, enabling ever more accurate predictions about the appropriate responses to conversations and queries. LaMDA was trained with 1.56 trillion words from public web data and documents. LaMDA is really good at dialog: A person who didn’t know the origin of the conversation would be hard-pressed, in reading through Lemoine’s edited transcript, to identify a point at which it becomes clear that LaMDA’s responses are coming from a machine rather than a person.
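LaMDA’s architecture and training code are not public, so the following is only a toy illustration of what “parameters adjusted to predict responses” means: a tiny bigram model whose weight matrix is nudged by gradient descent so that it gets better at predicting the next word. The made-up corpus and every detail of the code are assumptions for the sake of the example, not a description of Google’s system.

```python
import numpy as np

# Toy illustration of "parameters" in a language model: a matrix of weights
# adjusted by gradient descent so the model predicts the next word more
# accurately. LaMDA, with up to 137 billion parameters and a transformer
# architecture, works on an entirely different scale; the corpus below is
# invented for this sketch.
corpus = "i feel pleasure i feel joy i feel love i feel sadness".split()
vocab = sorted(set(corpus))
index = {word: i for i, word in enumerate(vocab)}
V = len(vocab)

# The "parameters": one weight per (current word, next word) pair.
weights = np.zeros((V, V))
learning_rate = 0.5

# Training pairs: each word and the word that actually follows it.
pairs = [(index[a], index[b]) for a, b in zip(corpus, corpus[1:])]

for epoch in range(200):
    for current, target in pairs:
        logits = weights[current]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Cross-entropy gradient nudges weights toward the observed next word.
        grad = probs.copy()
        grad[target] -= 1.0
        weights[current] -= learning_rate * grad

# After training, the model assigns high probability to the words that
# actually followed "feel" in the training data.
logits = weights[index["feel"]]
probs = np.exp(logits - logits.max())
probs /= probs.sum()
for word, p in sorted(zip(vocab, probs), key=lambda x: -x[1]):
    print(f"P({word!r} | 'feel') = {p:.2f}")
```

The point of the sketch is only that nothing in the training procedure requires understanding or feeling: the parameters are tuned to reproduce the statistics of the words the model has seen.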