My take was: unless you’re looking to build a foundation model, it’s more productive to think of AI as a smart, cheap, always-available human behind an API that you can command to do things.
Of course, when you actually try to implement what you want, you’ll run into limitations. That’s the time to obsess over how to bridge the gap between what you want and what you have: better prompting, RAG, fine-tuning, porting to a cheaper open-source LLM, and so on. You’ll likely find a starting point that is still useful, even if it falls short of what you imagined.
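To make the “human behind an API” framing concrete, here’s a minimal sketch in Python using the OpenAI SDK. The model name, prompt, and helper function are placeholders of my choosing, not a prescribed setup; the point is that the model is one isolated variable you can later swap for a cheaper or open-source alternative without touching the rest of your product:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Isolate the model choice so it's trivial to swap later
# (e.g. for a cheaper or open-source model behind the same interface).
MODEL = "gpt-4o-mini"  # placeholder model name

def command(task: str, context: str = "") -> str:
    """Send a plain-language instruction to the model, as you would to a contractor."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Follow the instruction exactly."},
            {"role": "user", "content": f"{task}\n\n{context}".strip()},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: command it like you'd brief a person.
print(command("Summarize this bug report in one sentence.",
              "The export button does nothing on Safari 17, no console errors."))
```

Everything interesting, the prompting, the retrieval, the eventual model swap, happens around this call once you know what you actually want it to do.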
Optimizing toward a clear goal is far more efficient than just “learning” about LLMs without knowing what you want to do with them.
Too many people riding the AI hype train know every technique for optimizing LLMs but have no good ideas for what to make LLMs do. Meanwhile, some of the most interesting AI products were started by people who were oblivious to the limitations of LLMs when they got started. Their ignorance allowed them to dream bigger and be more ambitious.