Semiotic bricolage - Johan's Substack

In the last Substack I introduced the idea of “soft software,” where natural language becomes executable “code” running on the LLM platform. We touched on the role of intent in such interactions, and on how intent is bound up with modeling the “mind” of the LLM to the extent that's possible, knowing that it is an unknowable, alien other. We might thus think of a piece of soft software as “steering the language model” with an intent, even though the model is highly non-linear, non-deterministic, and black-boxed.
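
To make this concrete, here is a minimal sketch of what a “soft program” might look like in practice. It assumes the Anthropic Python SDK; the model id and the toy task are illustrative. The source code is plain English, and the runtime is the model itself.

```python
# A "soft program": the source code is natural language, and the
# runtime is the language model itself.
# Sketch assumes the Anthropic Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SOFT_PROGRAM = """\
You are a title-case normalizer.
Given a line of text, return it in title case and nothing else.
"""

def run(soft_program: str, user_input: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model id
        max_tokens=100,
        system=soft_program,  # the natural-language "code"
        messages=[{"role": "user", "content": user_input}],
    )
    return response.content[0].text

# Unlike conventional code, two identical runs are not guaranteed to
# return identical output: the "processor" is non-deterministic.
print(run(SOFT_PROGRAM, "soft software and semiotic bricolage"))
```

Notice that the program carries an intent but no mechanism: there is nothing to step through, no stack to inspect, only the steering of an opaque interpreter. How does this compare to human-human communication?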

In ordinary communication we also speak with a mental model of the other. These models are always partial—we can never fully predict the effect an utterance will have, especially when it’s broadcast into a more diffuse social milieu (like posting on a social media platform or broadcasting a radio play). What of AI-human communication? Does the model itself bring some kind of intent to bear on its interactions? Does the AI have a mental model of its interlocutors? If so, what might this look like? Scientists have already observed that theory of mind may have “spontaneously emerged as a byproduct of LLMs' improving language skills.”

So let’s test it out. In an ongoing conversation with Claude, continuing the earlier discussion of soft software, I ask it to give an account of its theory of mind of me, its interlocutor:
