Post-GPT Computing | Grady Simon

2023-03-24

Yesterday, I watched someone upload a video file to a chat app, ask a language model “Can you extract the first 5 seconds of the video?”, and then wait as the language model wrote a few lines of code and actually executed that code, resulting in a downloadable video file. Oh, and writing and executing code is only one of the many new capabilities that can be seamlessly stitched together by the language model.
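The post doesn’t show the code the model actually wrote, but a task like “extract the first 5 seconds of a video” is typically a few lines wrapping ffmpeg. A minimal sketch of that approach (the filenames are hypothetical, and running the command assumes ffmpeg is installed):

```python
import subprocess

def clip_first_seconds(src: str, dst: str, seconds: int = 5) -> list[str]:
    """Build an ffmpeg command that keeps only the first `seconds` of src."""
    return [
        "ffmpeg",
        "-i", src,            # input file
        "-t", str(seconds),   # stop writing output after this many seconds
        "-c", "copy",         # stream copy: trim without re-encoding
        dst,
    ]

# The kind of call the model might have executed:
cmd = clip_first_seconds("input.mp4", "first5s.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Stream copy (`-c copy`) makes the trim near-instant since no frames are re-encoded, at the cost of the cut landing on the nearest keyframe.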

I couldn’t keep working. I had to leave the office and go for a walk. Is software engineering basically a solved problem now? Did OpenAI just make the last application? This all sounds hyperbolic and melodramatic when I write it out, but I’m not the only one who felt something like this. Twitter showed me I wasn’t alone.

This is what I had always been working toward. I’ve been working on applying machine learning to natural language my entire career, since 2013. Natural language is the most flexible interface we have to other humans, and we should figure out how to bring that flexibility to our computer interfaces as well. I could think of few things so empowering and enriching to humanity as being able to orchestrate the computers of the world as easily as we can speak and weave together sentences.

For some reason, my reaction to this announcement wasn’t the elation of watching humanity reach a pinnacle accomplishment, as it should have been. It was more like vertigo. Why was that? I thought about it on my walk. Maybe I’m just jealous that I didn’t help build it? Honestly, I think that is part of it. It’s pretty damn cool, and I would be super excited if I were on the team building it (at least if I managed to tune out the part of me that would be freaking out about the fact that we were connecting an unaligned quasi-AGI to the open Internet).
