
“Sentience” is the Wrong Question

2022-06-21 15:00:05

Last weekend, Blake Lemoine, a Google engineer, was suspended by Google for disclosing a series of conversations he had with LaMDA, Google's impressive large language model, in violation of his NDA. Lemoine's claim that LaMDA has achieved "sentience" was widely publicized, and criticized, by almost every AI expert. And it comes only two weeks after Nando de Freitas, tweeting about DeepMind's new Gato model, claimed that artificial general intelligence is only a matter of scale. I'm with the experts; I think Lemoine was taken in by his own willingness to believe, and I believe de Freitas is wrong about general intelligence. But I also think that "sentience" and "general intelligence" aren't the questions we ought to be discussing.

The latest generation of models is good enough to convince some people that they are intelligent, and whether or not those people are deluding themselves is beside the point. What we should be talking about is what responsibility the researchers building those models have to the general public. I recognize Google's right to require employees to sign an NDA; but when a technology has implications as potentially far-reaching as general intelligence, are they right to keep it under wraps? Or, looking at the question from the other direction, will developing that technology in public breed misconceptions and panic where none is warranted?

