In the not-so-distant future, an important convergence will occur: the smartest AI agent, navigating the web at the behest of some programmer, will achieve the appearance and capabilities of the simplest human on the internet.
This AI agent’s first, and then repeated, brushes with human-level intelligence will present a fundamental challenge to anti-bot technologies, which are critical to maintaining the security of the web. This convergence will force engineers to rethink the purpose of CAPTCHA-like user experiences, lest AI agents run rampant and humans find themselves locked out of their favorite sites.
Anti-botting is an important layer of the internet, and a massive industry exists to help web developers answer a single question: Is the person trying to access my site a human or a bot? Here, a bot is a computer program that mimics the appearance and capabilities of a human: look there, click here, “don’t mind me; I’m human!”
Bots can be used maliciously. A website may experience a period of rapid, inflated traffic, most commonly via a Denial of Service (DoS) attack, that overwhelms the resources others wish to consume; in some cases, a DoS attack can bring down the site altogether, preventing anyone from using the service. This could be accomplished by a well-coordinated community of humans, but why subject yourself to that headache when a simple computer program, in the form of bots, can act as thousands of humans? If bots can cheaply and scalably diminish the performance of a website, provisions for detecting and rejecting them are clearly needed.
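One of the simplest such provisions is per-client rate limiting: a human clicks at human speed, while a bot hammering a site betrays itself by volume alone. The sketch below is a minimal, illustrative sliding-window rate limiter (the class name and thresholds are assumptions for illustration; real anti-bot systems combine many more signals than request rate).

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Naive sliding-window rate limiter: reject any client that makes
    more than `max_requests` requests within `window_seconds`.
    Illustrative only -- production systems use far richer signals."""

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.history[client_id]
        # Drop timestamps that have fallen out of the sliding window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # too many requests, too fast: treat as a bot
        timestamps.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
# A bot firing every 100 ms blows through the limit...
print([limiter.allow("bot", now=0.1 * i) for i in range(5)])
# → [True, True, True, False, False]
# ...while a human-paced client, one request per second, sails through.
print([limiter.allow("human", now=float(i)) for i in range(5)])
# → [True, True, True, True, True]
```

Of course, a rate limit only catches the crudest bots; the harder problem, and the subject of this article, is telling apart a bot and a human who generate identical-looking traffic.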