Whether you’re an AI skeptic or an AI convert, you almost certainly understand how explosive its impact on the tech ecosystem has been, and how fast everything is moving right now.
I’ll keep my personal opinions about AI out of this blog (mostly), but I’ve been keeping a close eye on the Model Context Protocol (MCP) for a few reasons, the primary one being that it seems absolutely terrifying in ways I can’t really comprehend.
Many of the security lessons we’ve learned over the years seem to have been overlooked in MCP’s rapid development. “Protect your data, it is what’s unique to you!” was the battle cry of every tech-oriented person for most of my career. I’m old enough to remember when Cambridge Analytica was raked over the coals because it weaponised our social media data. A few years later, we now seem quite content with the idea of letting Large Language Models (LLMs) slurp up mountains of our private data to train their word-guessers.
I’m being as glib as I always am on my blog, but in all seriousness, MCP has a problem: you want to get your data into an LLM, but you don’t want everyone else to be able to see it.