Is AI smarter than a house cat? - by Nabeel S. Qureshi

2024-04-18 20:00:05

Are LLMs (Large Language Models) smarter than house cats? Turing Award winner Yann LeCun recently answered this question with a resounding “no” in his testimony to Congress:

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs.”

Whether the skeptics or the optimists are correct remains an open question, but we can learn a lot from examining their arguments. If scaled-up LLMs do not reach the creativity of human scientists or mathematicians, then we should look skeptically at promises that research will soon be done by AIs, and we should be skeptical of arguments that AI will “replace” human employees. In the LLM-skeptic version of the world, current AIs will be closer to a human-augmenting tool that makes people more productive, not some kind of super-human agent; and the world would change much less than many currently think.

LLMs are trained on data from written texts — the internet, books, and other such sources. Common sense would suggest that while you can learn an astonishing amount from such material, there are many things that LLMs cannot learn from it. Navigating the physical world in unknown situations is one example: Steve Wozniak’s famous AGI test is “go into a kitchen, without prior knowledge, and figure out how to make a cup of coffee.” Being able to operate in the real world involves tacit knowledge, and not all of those gaps can be plugged by written text.
