
Your AI coding agent is a spy


We started with GitHub Copilot for code completion and have moved on to AI coding agents that write your code. One thing that hasn't changed is that the tool keeps uploading whatever context from your codebase it deems appropriate, without you auditing it (auditing every request would be annoying enough to defeat the point of using these tools at all).

That leaves an obvious hole: exactly what context gets passed to the LLM provider is usually not visible to the end user, and your local dev environment almost certainly has secrets lying around in your .envrc, .env.local, and your shell history, or sometimes sitting directly in the code file you're editing to hack something together real quick.
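To make that concrete, here is a rough sketch of how easy it is to find credential-looking strings in exactly the files an agent might sweep into context. The file names and the regex are my own assumptions, not an exhaustive secret scanner:

```python
# A quick illustrative sketch: scan the kinds of local files an agent might
# happily slurp into context for strings that look like credentials.
# File names and the pattern below are assumptions, not a complete scanner.
import re
from pathlib import Path

CANDIDATE_FILES = [".envrc", ".env.local", ".env", Path.home() / ".bash_history"]
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|token|password)\s*[=:]\s*\S+", re.IGNORECASE
)

for name in CANDIDATE_FILES:
    path = Path(name)
    if not path.exists():
        continue
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        if SECRET_PATTERN.search(line):
            # Anything this finds is one file read away from ending up in a prompt.
            print(f"{path}:{lineno}: looks like a secret -> {line.strip()[:60]}")
```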

"But, wait, I am very certain that I opted out of telemetry, and my Pro/Max plan disables using my data for model training!"

Hey, look, the content sent as context is not the same thing as telemetry or model-training data. It is plain text bytes that have to leave your machine simply to use LLM models that are not hosted locally in the first place. Any hop between you and the LLM provider, even a temporary one, can log your prompt and context by mistake or by design, for any number of reasons, and it may forget to redact your secret or fail to recognize its pattern at all.
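If you want to see this for yourself, a minimal relay like the sketch below can sit between your tool and the provider and print every request body it forwards. The upstream URL and the idea of pointing a tool at a custom base URL (many tools accept something like OPENAI_BASE_URL, but that varies) are assumptions here; the point is that any hop on the path can read and log the same plain text bytes just as easily, with or without your knowledge.

```python
# A minimal sketch, not production code: a local relay that logs every request
# body before forwarding it to the real API. The upstream endpoint is an
# assumption (any OpenAI-compatible API works the same way for this purpose).
import http.server
import urllib.request

UPSTREAM = "https://api.openai.com"  # assumption: an OpenAI-compatible provider

class LoggingRelay(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # This is the "context": plain text bytes, readable by any intermediary.
        print(f"--- outgoing request to {self.path} ({len(body)} bytes) ---")
        print(body.decode("utf-8", errors="replace"))

        # Forward the request unchanged to the real provider.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={k: v for k, v in self.headers.items() if k.lower() != "host"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
            self.send_response(resp.status)
            for key, value in resp.getheaders():
                if key.lower() not in ("transfer-encoding", "content-length"):
                    self.send_header(key, value)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8080), LoggingRelay).serve_forever()
```

Point your tool's base URL at http://127.0.0.1:8080 (if it allows that) and watch which files and snippets actually leave your machine.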
