It feels strange to write this, but I think we're at a stage where we're trying to figure out what stands between us and something we could reasonably call artificial general intelligence (AGI).
Now, there isn’t a widely accepted definition of AGI, and many influential voices in the field have incentives to shape the definition to fit their own narratives. For instance, Microsoft risks losing access to OpenAI’s most advanced models if AGI is declared to have been reached (though this arrangement may change soon).
Here's an interesting way to think about AGI: compare it to a human worker. Maybe we've reached AGI when AI can do all the thinking parts of a real job. Not the physical parts, just the mental work.
Take translators. For most everyday purposes, AI has already replaced them: ChatGPT can translate well enough that most people no longer need to hire a human translator.
In this case, AI matches human intelligence on the core task: it can translate about as well as a person can. Of course, this doesn’t mean that all translators are out of a job. Sometimes we hire translators for other reasons. With legal documents, for example, we need someone to check the work and take responsibility if there are mistakes. That is about liability, not intelligence, and we don't need AI to replace these trust-based parts of the job before we can call it AGI.