Some technical writers in my network are genuinely worried about their professional future in the AI age. “Will large language models take my job?” they wonder. “Are we going to be replaced by GPT?” they ask at meetups and in community forums. My short answer is “No”. My longer answer is “No, unless you reject the benefits of LLMs”. For my complete answer, keep reading this post.
By now, you should have some notion of what large language models are and how they work. LLMs are particularly powerful blenders of information: they don’t think, they lack motivation, and they can’t act on their own. A fun way of picturing an LLM is imagining C-3PO, the golden droid from Star Wars, sitting next to you, trying to be helpful when you ask it something. When it works, you want to pat it on the shoulder; when it doesn’t, it’s irritating. Either way, calling it AI is poetic license.
Given the current state of LLM technology, replacing human writers entirely with LLMs, or docs with chatbots, is not what I’d call a wise decision. It may be tempting as a cost-cutting measure, especially if one falls into the trap of thinking that LLMs are sentient (they’re not). A more promising direction is AI augmentation: enriching human-made docs with AI-generated content where it makes sense. That’s a C/Fe society, in Asimov’s words: