
The LLM In The Room


Over two years ago, the then not-for-profit research organisation OpenAI released a new version of its Large Language Model, GPT-3.5, under the friendlier brand name of ChatGPT, and started a media and market frenzy.

This was arguably the first time a chat interface could genuinely fool users into believing it was a person, and there was much talk about the age of “artificial general intelligence” and even “super-intelligence” now being upon us. Many pundits predicted the end of knowledge workers like lawyers, doctors, and – of course – software developers within a few years.

Naturally, this was a claim I had to check out for myself, so when GPT-4 was released a few months later, I signed up for the paid “Plus” version of ChatGPT to get (limited) access to it and started to experiment in various problem domains, including programming and software development.

Like millions of people, I was initially very impressed with GPT-4 (not so much with 3.5, I have to say). But as I started to try to actually do things – specific things – with it, its limitations became more and more apparent. While it is indeed remarkable that what is essentially a predictive texting engine can write Python or Java or C# that actually compiles – let’s not take that away from OpenAI – the code itself was less impressive.
