A few months ago Sam Altman wrote a blog post called "Moore's Law for Everything". In it, he described what the world could look like as AI becomes more advanced. It's worth reading that before carrying on here.
An application programming interface (API) is a connection that allows computers or computer programmes to communicate with one another. It is a type of software interface that provides a service to other programmes. An API simplifies programming by abstracting the underlying functionality and exposing only the objects or actions the developer actually needs. Just as a graphical email client provides a single button that performs all of the steps for fetching and highlighting new emails, a file input/output API can give the programmer a single function that copies a file from one location to another, without requiring the developer to understand the file system operations taking place behind the scenes.
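To make the file-copy example concrete, here is a minimal sketch using Python's standard library, where `shutil.copy` plays the role of the API: one call hides all the underlying file-system work.

```python
import shutil
import tempfile
from pathlib import Path

# shutil.copy is a file-I/O API: a single function call hides the
# underlying operations (opening, reading, writing, closing files).
src_dir = Path(tempfile.mkdtemp())
src = src_dir / "draft.txt"
src.write_text("hello, world")

dst = src_dir / "backup.txt"
shutil.copy(src, dst)      # the whole copy, behind one function call

print(dst.read_text())     # hello, world
```

The caller never touches file descriptors or buffers; that abstraction is exactly what the paragraph above describes.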
Okay, so what is GPT-3, or Generative Pre-trained Transformer 3? According to OpenAI, it is a deep learning-based autoregressive language model that generates human-like text. It is the third-generation language prediction model in OpenAI's GPT-n series. The full version of GPT-3 has 175 billion machine learning parameters. GPT-3 is built on natural language processing (NLP) techniques that use pre-trained language representations. Prior to the introduction of GPT-3, the biggest language model was Microsoft's Turing NLG, launched in February 2020 with 17 billion parameters, less than a tenth of GPT-3's capacity. The text created by GPT-3 is of such good quality that it can be hard to distinguish from text written by a person, which has both advantages and disadvantages. Microsoft, which had invested $1 billion in OpenAI, announced on September 22, 2020 that it had licensed "exclusive" usage of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3's underlying technology.
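Since anyone can use the public API to receive output, here is a minimal sketch of what a GPT-3 completion request looks like, using only the standard library. The endpoint, header, and payload fields reflect the GPT-3-era completions API; the model name "davinci" and the prompt are illustrative, and the request is built but not sent, since sending requires a real API key.

```python
import json
import os
import urllib.request

# GPT-3-era completions endpoint (assumption: unchanged from the
# public API documented at launch).
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="davinci", max_tokens=64):
    """Build (but do not send) an HTTP request for a GPT-3 completion."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    headers = {
        "Content-Type": "application/json",
        # The key is read from the environment; empty if unset.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

req = build_completion_request("Write a haiku about Moore's Law.")
print(req.full_url)                       # https://api.openai.com/v1/completions
print(json.loads(req.data)["max_tokens"]) # 64
```

Actually sending the request with `urllib.request.urlopen(req)` would return a JSON response containing the generated text, which is all that API users ever see of the model.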