# The futility of human-only web requirements

June 1, 2025

There have long been only three abstraction layers to the Web:

  • A visual user interface for the average user
  • DOM markup (ARIA tags) for accessibility software
  • APIs for programmatic access

Into this mix comes a fourth. AI Agents are coming for research, they're coming for job hunting, they're even coming for shopping.

We're still locked into a mental model that all bots on the web are bad. They DDoS sites, they scrape data, they scalp limited inventory and drive prices higher. Most webmasters tolerate search engine bots only because of the traffic that comes from being listed in search results. Without that benefit, I'm inclined to say they would hate all bots categorically.

They sure try to ban them already.

  • TicketMaster's response to ticket scalping? Try to ban the bots.
  • Reddit's response to authenticity? Try to ban the bots.
  • Yelp's response to synthetic reviews? Try to ban the bots.

The list goes on. In response to anti-bot sentiment, a huge secondary industry has emerged to protect websites: reCAPTCHA, Cloudflare, FingerprintJS. These anti-bot defenses all rely on detecting the specific markers that make a bot non-human: the speed of navigation, running in a headless context, navigating only via key commands. If you have a big enough dataset, it's actually pretty easy to build a model that separates humans from bots. Their interaction patterns are usually linearly separable.2
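To make "linearly separable" concrete, here's a toy sketch. The feature ranges are all invented for illustration: a plain perceptron, which only converges when the classes are linearly separable, cleanly splits synthetic "human" and "bot" sessions built from the kinds of markers mentioned above.

```python
import random

random.seed(0)

# Hypothetical interaction features per session (ranges invented):
# [seconds on page, mouse-move events, ran-headless flag].
def human_session():
    return [random.uniform(5, 60), random.uniform(20, 200), 0.0]

def bot_session():
    return [random.uniform(0.05, 1.0), random.uniform(0, 3), 1.0]

X = [human_session() for _ in range(200)] + [bot_session() for _ in range(200)]
y = [1] * 200 + [-1] * 200  # +1 = human, -1 = bot

# A plain perceptron converges only when the classes are linearly
# separable; on these well-separated ranges it stops with zero errors.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
            w = [wj + yi * xj for wj, xj in zip(w, xi)]
            b += yi
            errors += 1
    if errors == 0:
        break

accuracy = sum(
    (1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1) == yi
    for xi, yi in zip(X, y)
) / len(y)
```

This is of course a caricature — real detectors use far richer features — but it's the shape of the argument: lazy bots sit on the wrong side of an easy hyperplane.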

As more AI Agents come online, I remain convinced that we're thinking about this problem wrong. The main thing product designers should care about is how their end users engage with what they're selling. Ads distorted this model by letting companies "sell" eyeballs to advertisers - so unless people are actually engaging with their site themselves, they're losing out.

But - dare I say - most companies have value that their direct customers would actually be willing to pay for. They should be asking themselves how real people1 can actually benefit from the use of their site and products, whether or not there's an AI Agent being used as an intermediary.

The business model is the key question. One step removed is how to execute that business model in code.

Anti-bot protections can't fix your bad business model

Take Ticketmaster. Are bots waiting in a queue the problem? Not really. The problem is that they'll let anyone with a credit card sign up and buy a (typically) unlimited number of tickets. So if you're faster to the initial page, or can spam the queue with your own delegates, you're necessarily at an advantage. The only way to really combat that? Make people do KYC validation of their driver's license before they can get into the queue in the first place. One ID is one entry into the ticket lottery. That's it.
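The "one ID, one entry" rule is trivial to enforce in code once identity is verified upstream. A minimal sketch (the class and method names are mine, and KYC itself is assumed to have already happened): entries key off the verified ID, so a scalper running fifty bot accounts still gets exactly one ticket in the draw.

```python
import random

class TicketLottery:
    """One verified government ID gets exactly one lottery entry.
    Bots spamming extra accounts gain nothing, because entries are
    keyed on the ID, not on accounts or connection speed."""

    def __init__(self):
        self.entries = set()  # verified ID numbers (post-KYC)

    def enter(self, verified_id: str) -> bool:
        if verified_id in self.entries:
            return False  # same person, second account/bot: rejected
        self.entries.add(verified_id)
        return True

    def draw(self, n_tickets: int, seed: int = 0) -> list:
        # Winners are a random sample of unique IDs; being first in
        # line (or fastest) confers no advantage whatsoever.
        pool = sorted(self.entries)
        random.Random(seed).shuffle(pool)
        return pool[:n_tickets]
```

Notice what's absent: no CAPTCHA, no fingerprinting, no queue-position race. The abuse vector is closed by the business rule, not by bot detection.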

Take Reddit. Are bots engaging or posting content the issue? It definitely impacts some implicit notion of "authenticity" - but if a bot posts a fire meme versus an anon, on some level, who cares? What people really care about are lies about personal experience. If a restaurant racks up strong recommendations from people who have never actually gone, the value of the community as a vetted space evaporates. Require blue-checkmark Reddit profiles and 95% of the problem is solved. And then if I use a delegated bot to farm engagement for my account, I have to personally stand behind everything it says.3

We're trying so hard to combat bots. But at what cost? And at what value?

Caught in the collateral

Here's the thing about false positives: they're not edge cases anymore. As bots get better at evading detection, anti-bot detectors have to raise their sensitivity, and they necessarily catch more people in the crosshairs.

The irony is thick. In trying to preserve the "human" web, we've made it hostile to actual humans who don't browse in the exact prescribed manner. It's like a restaurant installing a door so narrow that only average-sized people can enter, then claiming they're protecting the dining experience.

And for what? The serious bad actors - the scalpers, the data thieves, the DDoS networks - they've already moved past your defenses. They're running full browser instances on residential proxies with ML-generated mouse movements that are more "human" than most humans. Your CAPTCHA isn't stopping them. It's stopping my grandmother from buying concert tickets.

The arms race nobody wins

Every new anti-bot measure follows a predictable lifecycle:

  1. Company deploys new detection method (browser fingerprinting!)
  2. Bot developers reverse engineer it (spoofed fingerprints!)
  3. Detection gets more aggressive (track mouse movements!)
  4. Bots get more sophisticated (AI-generated movements!)
  5. Real users suffer (why is this site so slow?)
  6. Repeat until bankruptcy or acquisition

We've created an entire shadow economy of bot developers, anti-bot vendors, and consultants all feeding off this dysfunction. Cloudflare makes bank. Scraping-as-a-service startups proliferate. Meanwhile, the actual problem - preventing abuse while serving customers - remains unsolved.

The truth nobody wants to admit? The bot developers are usually smarter and more motivated than the defenders. They have clear economic incentives. They iterate faster. And they only need to win once, while defenders need to win every time.

If you're not good at a game, the obvious advice is train harder and become better. Train until you win. Maybe here we should just pick up our ball and go home.

When eyeballs stop mattering

The real crisis isn't bots. It's that the entire attention economy is built on a lie. We've been monetizing engagement as if a pageview from a bot, a misclick from a human, and genuine interest are equivalent. They're not, and AI agents are about to make this painfully obvious.

When every user has an AI assistant pre-screening content, what happens to your carefully crafted engagement loops? When agents can skip your ads, ignore your dark patterns, and extract exactly what the user wants? The jig is up. You've just got to build valuable shit and get it to people (and machines) in a way that people are willing to pay for.

The web is about to change dramatically. It's about to kill some businesses and reinvent others. At minimum, it's going to force them back to the brainstorming table:

  • What actual value do we provide?
  • Would someone pay for this if we couldn't force them to watch ads?
  • How do we serve the human at the end of the agent chain?

The answer isn't to fight the agents. It's to build something worth paying for, whether the payment comes from a human or a digital representative.

Or we could just legislate this...

Or maybe the advertisers get their way. Maybe they insist that human eyeballs are here to stay. If so, I'm guessing they're going to turn to regulation with some teeth.

The TCPA didn't try to detect whether a call was automated - it just made unsolicited automated calls illegal. Charge up to $1,500 per violation and now there's some money on the table. Guess what? The spam dropped dramatically. Not because it became technically impossible, but because most actors didn't want to risk massive fines.

We could do the same for AI-generated content:

  • Require disclosure when content is AI-generated
  • Ban unsolicited AI outreach without explicit consent
  • Set statutory damages high enough that violation isn't worth it

The beauty of this approach? It sidesteps the entire technical arms race. You don't need perfect bot detection if running an undisclosed bot risks a $10,000 fine per post. Most bad actors are rational economic players - make the penalties exceed the profits and watch the behavior change.
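The "make the penalties exceed the profits" claim is just expected-value arithmetic. A sketch with invented numbers (the $10,000-per-post fine is the hypothetical figure from above, not any real statute, and the enforcement probability is made up):

```python
def bot_campaign_ev(posts, profit_per_post, fine_per_post, p_caught):
    """Expected value of running an undisclosed bot campaign under a
    statutory-damages regime. Each post earns profit_per_post, and each
    post independently risks fine_per_post with probability p_caught."""
    return posts * (profit_per_post - p_caught * fine_per_post)

# 10,000 posts at $2 profit each. With a $10,000 fine per post, even a
# 1-in-500 chance of enforcement flips the campaign deeply negative:
# 10,000 * (2 - 10,000/500) = 10,000 * -18 = -$180,000.
ev = bot_campaign_ev(10_000, 2.0, 10_000, 1 / 500)
```

A rational operator runs the campaign only while the expected value is positive; the statute's job is simply to push the fine term past the profit term.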

Sure, some offshore scammers won't care. But the vast majority of bot activity comes from businesses trying to game the system while maintaining plausible legitimacy. They'll comply because they have assets to lose and reputations to protect.

Either way, whether business models change or laws evolve, the human-only web as we know it is dead. I think it's high time to admit that and stop the perennial game of cat and mouse. The mouse won. Go home, Tom.


  1. Perhaps at the end of some lengthy agent value chain. 

  2. And yes, you can train models to emulate humans. I've done a lot of that. But 99% of bots are too lazy to try, so you can filter them out without much trouble. And even the best simulation models are limited by your feature choice and dataset. With a big enough ground truth (where centralized anti-bot platforms are necessarily at an advantage) you can probably build an even better detection model. It's a real-world GAN in action. 

  3. One of the reasons why I use my full profile on all social is because I want my identity to stand behind my claims. 

Related tags:
#ai-agents #web-automation #product-design #business-models #bots

