
Complexity is overrated

submitted by
Style Pass
2024-11-01 15:30:02

A big realization I have had with AI coding assistants over the past few weeks is that using LLM-powered chat to solve non-trivial engineering problems involves a slight “trust fall.” I have seen this in my own usage and in conversations with Cody customers.

My definition of a non-trivial problem: something you avoid solving because of its complexity, your lack of knowledge, or the tedium of the process. All of these add up in your head and prevent progress.

This problem for me (for too long) has been writing UI code. I don't like doing it. It feels overwhelming. When pushed, I would copy/paste some scaffolding from Google and then do the bare minimum tinkering to get it working.

For a professional developer or a team, this non-trivial problem could be something else—modernizing a legacy codebase that no one understands or wants to touch, migrating to a new language, framework, or architecture, or adding new functionality to a complex codebase.

AI-enabled coding assistants promise to help with this. But the reality is that asking long, complex questions in an AI chat box does NOT feel natural. Initially, most people almost don't believe it will work. They come up with any number of reasons why it is not possible for a software system to make sense of all this complex input and do anything meaningful with it. Even Neo didn’t believe it at first. But then he did. What changed?
