Technology, Environment, and Art

submitted by
Style Pass
2024-12-23 23:30:03

Recently, OpenAI's new o3 model scored fairly highly on a complex mathematics benchmark. Some mathematicians are quite excited about this and want to push AI-assisted mathematics to its limits. But as a mathematician myself, I really don't understand the motivation many mathematicians have for getting AI to do math.

To me, one of the best aspects of mathematics is that it is a human endeavor that requires struggle. What would happen to the field in 50 years if AI got so good that it could do essentially all research mathematics extremely well, and we just had to type in questions such as “classify finite simple groups”? What is the end value for us?

Even if AI still needs human guidance, the idea of using it to do research by filling in large stretches of routine theorems, with humans needed only to spot the most beautiful connections, is contemptible to me. Moreover, once AI has mastered the entirety of a basic graduate curriculum, I wonder whether students will still feel the same magic in learning the material when an AI can do it so much better. So what is the end value?

Is it that we can now just browse through the most beautiful results and talk about them? Is it just for the sake of producing more research that still requires human help? I mean, what is the end goal of mathematics, after all? It makes no sense to transform mathematics into a product to be made on an assembly line.
