In the last couple of weeks, the word “superintelligence” has been everywhere. Much of the credit goes to Mark Zuckerberg, who poached some of the top AI minds from OpenAI, Anthropic, Google, Apple, and others to build his own Superintelligence lab at Meta.
The term was popularized by Nick Bostrom in his 1997 paper “How Long Before Superintelligence?”. The paper discusses what superintelligence is, how it might be implemented, and what hardware would be required to achieve it.
With each hardware generation delivering cheaper petaflops, and frontier models leaping benchmarks every quarter, the path to human‑level intelligence and beyond no longer feels speculative. Investors and researchers are acting accordingly.
For instance, Ilya Sutskever, Daniel Gross (recently poached by Mark Zuckerberg), and Daniel Levy founded their own company, Safe Superintelligence. Established AI labs such as OpenAI, Anthropic, xAI, and Google DeepMind have likewise shared their thoughts on superintelligence and taken steps to build it.
In this article, I explain the origin of AI superintelligence, why it is an engineering problem, and how our products are being transformed as we march toward it.