Suspended Google engineer Blake Lemoine made serious headlines earlier this month when he claimed that one of the company's experimental AIs, called LaMDA, had achieved sentience — a claim that prompted the software giant to place him on administrative leave.
"If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics," he told the Washington Post at the time.
The subsequent news cycle swept up AI experts, philosophers, and Google itself into a fierce debate about the current and possible future capabilities of machine learning, other ethical concerns around the tech, and even the nature of consciousness and sentience. The general consensus, it's worth noting, was that the AI is almost certainly not sentient.
Perhaps the most concrete development in the story, though, came once again from Lemoine himself, who told Wired last week that LaMDA had hired an attorney. It's an intriguing turn, because it seemed to have the potential to pull the saga out of the realm of the abstract and into the deliberate machinery of the courts, where lawyers do occasionally represent non-human entities.