With the explosion of computer-use agents like Claude Computer Use and the search wars between Perplexity, SearchGPT, and Gemini — it seems inevitable that AI will change the way we access information. So far, we’ve been obsessed with building agents that are better at understanding the world. But at this point, it’s important to realize that the world adapts itself to LMs, too. Codebases, websites, and documents were made for humans, but they start to look different once LMs become “users” as well.
In this post I’ll explore some emerging signs of what this future might look like – how is the world adapting already, and what might the Internet look like at the end of it all? As a researcher or developer, this poses some interesting questions. If we realize that the digital world is fundamentally malleable, the line between “agent” and “environment” starts to blur. Instead of building better models, what end-to-end systems should we be building instead?
When I code side-by-side with GitHub Copilot, I often notice how my behavior changes in subtle ways to adapt to the tool. Code autocomplete naturally lends itself to “docstring-first programming” — you’re much more likely to get a helpful code snippet if the model has some description of what you’re trying to accomplish first. So naturally, you might type in a comment or a descriptive name first, and then pause to see if you get the right snippet back:
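A sketch of what this looks like in practice (the function name and body here are my own illustrative example, standing in for the kind of completion Copilot tends to produce):

```python
# Docstring-first programming: write the signature and docstring,
# then pause and let the model propose a body.

def count_word_frequencies(text: str) -> dict[str, int]:
    """Return a mapping from each lowercased word in `text` to its count."""
    # --- everything below is what a Copilot-style completion might fill in ---
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_word_frequencies("the cat and the hat"))
```

Without the docstring, the model has to guess your intent from the name alone; with it, the completion is far more likely to match what you wanted on the first try.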