The web is at a turning point. Over the past year, headline-making clashes, such as The New York Times' ongoing lawsuit against OpenAI over scraping, publisher concerns about Claude's data use (e.g., Reddit's lawsuit against Anthropic), and Perplexity's recent exchange with Cloudflare over agent mislabeling, have revealed a foundational challenge: as AI agents become first-class users of the web, our existing standards are no longer enough.
This isn’t about one company or one dispute. It affects publishers, AI builders, infrastructure providers, and users alike. Without change, we risk a “two-tiered internet” where only the giants thrive while smaller players, researchers, developers, and everyday users are left behind.
- Attribution & Trust: Legitimate AI traffic is often misidentified as scraping, which harms publishers and AI providers alike and erodes trust on both sides.
- Access to Knowledge: Blanket blocks and paywalls, often aimed at bots, can shut out critical tools, create barriers to research and news, and deepen digital inequality.