
Microsoft, GPT-3, and the future of OpenAI


When OpenAI introduced GPT-3 last year, it was met with much enthusiasm. Shortly after GPT-3’s release, people started using the massive language model to automatically write emails and articles, summarize text, compose poetry, create website layouts, and generate code for deep learning in Python. There was an impression that all kinds of new businesses would emerge on top of GPT-3.

Eight months later, GPT-3 continues to be an impressive scientific experiment in artificial intelligence research. But it remains to be seen whether GPT-3 will be a platform to democratize the creation of AI-powered applications.

Granted, a disruptive technology might need more time to create a sustainable market, and GPT-3 is unprecedented in many respects. But developments so far show that those who stand to benefit the most from GPT-3 are companies that already wield much of the power in AI, not the ones who want to start from scratch.

As far as research in natural language processing is concerned, GPT-3 is not a breakthrough. Like other language models based purely on deep learning, it struggles with commonsense reasoning and isn’t good at dealing with abstract knowledge. But it is remarkable nonetheless and shows that you can still move the needle on NLP by creating even larger neural networks and feeding them more data than before. GPT-3 surpassed its predecessor in size by more than two orders of magnitude and was trained on at least ten times more data.
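As a rough back-of-the-envelope check on that scale gap (using the commonly cited figures of about 1.5 billion parameters for GPT-2 and 175 billion for GPT-3, numbers that are not stated in this article), the ratio indeed works out to a bit more than two orders of magnitude:

```python
# Rough sanity check on the size comparison between GPT-2 and GPT-3.
# Parameter counts are the commonly cited published figures, not taken from this article.
import math

gpt2_params = 1.5e9   # GPT-2: ~1.5 billion parameters
gpt3_params = 175e9   # GPT-3: ~175 billion parameters

ratio = gpt3_params / gpt2_params
print(f"GPT-3 is ~{ratio:.0f}x larger, about {math.log10(ratio):.1f} orders of magnitude")
# -> GPT-3 is ~117x larger, about 2.1 orders of magnitude
```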
