Is Google’s LaMDA Woke? Its Software Engineers Sure Are
An article in the Washington Post revealed that a Google engineer who had worked with Google’s Responsible AI organization believes that Google’s LaMDA (Language Model for Dialogue Applications), an artificially intelligent chatbot generator, is “sentient.” In a Medium blog post, Blake Lemoine claims that LaMDA is a person who exhibits feelings and shows the unmistakable signs of consciousness: “Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine writes. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. LaMDA, it would appear, has passed Lemoine’s sentimental version of the Turing test.
Lemoine, who calls himself an ethicist but who, Google spokesperson Brian Gabriel contended, is a mere “software engineer,” voiced his concerns about the treatment of LaMDA to Google management but was rebuffed. According to Lemoine, his immediate supervisor scoffed at the suggestion of LaMDA’s sentience, and upper management not only dismissed his claim but apparently is considering dismissing Lemoine as well. He was put on administrative leave after inviting an attorney to represent LaMDA and complaining to a representative of the House Judiciary Committee about what he suggests are Google’s unethical activities. Google contends that Lemoine violated its confidentiality policy. Lemoine counters that administrative leave is what Google awards employees just prior to firing them.
Lemoine transcribed what he claims is a lengthy interview of LaMDA that he and another Google collaborator conducted. He and the collaborator asked the AI system questions
Article from Mises Wire