Talking About Large Language Models

Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as “knows”, “believes”, and “thinks”, when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.

The advent of large language models (LLMs) such as BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) was a game-changer for artificial intelligence. Based on transformer architectures (Vaswani et al., 2017), comprising hundreds of billions of parameters, and trained on hundreds of terabytes of textual data, their contemporary successors such as GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), and PaLM (Chowdhery et al., 2022) have given new meaning to the phrase “unreasonable effectiveness of data” (Halevy et al., 2009).
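
To keep the underlying mechanics in view, it is worth recalling that, at inference time, a model of this kind simply maps a sequence of tokens to a probability distribution over the next token. The sketch below illustrates this with the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint; neither is referenced in the paper itself, and the checkpoint merely stands in for the far larger models discussed above.

# A minimal sketch of next-token prediction with a small language model.
# Assumes the Hugging Face "transformers" library and PyTorch are installed;
# "gpt2" is a stand-in for the much larger models cited in the text.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's output for the final position is a distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for p, i in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([i])!r}: {p:.3f}")

However we choose to describe the behaviour of such systems in everyday language, this mapping from context to next-token distribution is what the model computes.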
