
Google denies claims that its AI is sentient


Tech companies constantly tout the capabilities of their ever-improving artificial intelligence. Yet Google was quick to dismiss claims that one of its systems had advanced so far that it had become sentient.

A Google engineer claimed that after hundreds of interactions with a cutting-edge, unreleased AI system called LaMDA, the program had achieved a level of consciousness, according to an eye-opening story published in the Washington Post on Saturday.

Many in the AI community disputed the engineer’s claims in interviews and public statements, while some pointed out that his tale illustrates how the technology can lead people to attribute human traits to it. Still, the suggestion that Google’s AI could be sentient underscores both our fears and our expectations for what this technology can do.

LaMDA, which stands for “Language Model for Dialogue Applications,” is one of several large-scale AI systems that are trained on enormous swaths of text from the internet and can respond to written prompts. These systems are tasked with finding patterns and predicting which word or words should come next. They have become increasingly good at answering questions and writing in convincingly human-like ways; in a blog post last May, Google described LaMDA as a system that can “engage in a free-flowing way about a seemingly unlimited variety of topics.” But the results can also be wacky, strange, unsettling, and prone to rambling.
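LaMDA itself is proprietary and far more sophisticated, but the basic idea described above, predicting the next word from patterns in text, can be illustrated with a toy sketch. The Python example below is a hypothetical illustration only: the tiny corpus and function names are invented here, and real systems like LaMDA use large neural networks rather than simple word counts.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "enormous swaths of text" such systems train on.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' twice in this corpus)
```

Scaled up to billions of parameters and trained on internet-scale text, this same next-word objective is what produces the fluent, human-like responses described above.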

According to the Washington Post, the engineer, Blake Lemoine, shared evidence with Google suggesting that LaMDA was sentient, but the company disagreed. Google’s team, which includes ethicists and technologists, “examined Blake’s concerns under our AI Principles and have notified him that the data does not support his assertions,” the company said in a statement released Monday.

Lemoine announced on Medium on June 6 that he had been placed on paid administrative leave “in conjunction with an investigation into AI ethics concerns I was raising within the company” and that he might be fired “soon.” (He cited the experience of Margaret Mitchell, who helped lead Google’s Ethical AI team until she was fired in early 2021 after speaking out about the late-2020 departure of then-co-lead Timnit Gebru. Gebru was ousted after internal squabbles, including one over a research paper that the company’s AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)

Lemoine remains on administrative leave, a Google spokeswoman said. According to The Washington Post, he was placed on leave for violating the company’s confidentiality policy.

Lemoine could not be reached for comment on Monday.

The ongoing creation of powerful computing algorithms trained on vast troves of data has raised ethical questions about the technology’s development and use. And sometimes, rather than looking at what is currently achievable, advancements are evaluated through the prism of what might be possible in the future.


Opinions expressed by San Francisco Post contributors are their own.

Niall Moore

A social-media-savvy IT consultant at a communications firm in Los Angeles. She runs her own blog and writes part-time.
