Cloudflare, one of the world’s largest Internet infrastructure companies, has announced AI Labyrinth, a new tool to combat bots that crawl websites without permission in search of data to train AI models. In a blog post, the company says that when it detects “inappropriate bot behavior,” the free tool lures those crawlers down a trail of links to AI-generated decoy pages designed to “slow down, confuse, and waste resources” of anyone acting in bad faith.
Websites have long used the honor system approach of robots.txt, a text file that grants or denies permission to crawlers, but which AI companies, even well-known ones like Anthropic and Perplexity AI, have been accused of ignoring. Cloudflare writes that it sees more than 50 billion web crawler requests per day, and while it has tools to detect and block malicious ones, this often prompts attackers to change tactics in an “endless arms race.”
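To see why robots.txt is an honor system rather than an enforcement mechanism, consider the minimal Python sketch below. It parses an illustrative robots.txt (the user-agent names and rules are examples, not any particular site's policy) and shows that it is entirely up to the crawler to check the file before fetching a page; nothing stops a bot from skipping the check.

```python
# Minimal sketch of the robots.txt "honor system": the crawler itself decides
# whether to consult the file before fetching anything.
import io
import urllib.robotparser

# Illustrative robots.txt that tries to keep an AI training crawler out.
EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(io.StringIO(EXAMPLE_ROBOTS_TXT).readlines())

# A well-behaved crawler asks first; a bad-faith one simply never calls this.
print(parser.can_fetch("GPTBot", "https://example.com/articles/1"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
```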
Cloudflare says that rather than simply blocking bots, AI Labyrinth fights back by forcing them to process data that has nothing to do with a given website’s actual content. The company adds that the tool also functions as a “next-generation honeypot,” drawing AI crawlers deeper and deeper through links to fake pages that no human visitor would follow. This makes it easier to fingerprint malicious bots for Cloudflare’s list of bad actors, and to detect “new bot patterns and signatures” that would otherwise go unnoticed. According to the company, these links should not be visible to regular visitors.
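The sketch below is a rough illustration of that honeypot idea, not Cloudflare’s implementation: a page carries an entry link that human visitors should never see or follow, the link leads into an ever-deepening chain of generated decoy pages, and any client that wanders in is recorded as a likely crawler. The paths, port, and flagging logic are all assumptions made for the example.

```python
# Toy honeypot server: hidden link -> endless decoy pages -> client gets flagged.
from http.server import BaseHTTPRequestHandler, HTTPServer

flagged_clients = set()  # stand-in for a shared list of suspected bad actors


class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/maze/"):
            # Only clients that ignore rel="nofollow" and hidden styling end up here.
            flagged_clients.add(self.client_address[0])
            tail = self.path.rstrip("/").split("/")[-1]
            depth = int(tail) if tail.isdigit() else 1
            body = (f"<html><body><p>Decoy page {depth}.</p>"
                    f'<a href="/maze/{depth + 1}">more</a></body></html>')
        else:
            # Real content plus an invisible entry link that humans never see.
            body = ("<html><body><h1>Real article</h1>"
                    '<a href="/maze/1" rel="nofollow" style="display:none">.</a>'
                    "</body></html>")
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), DecoyHandler).serve_forever()
```

In this toy version the decoy pages are trivial placeholders; the article describes Cloudflare generating convincing but irrelevant AI-written content, which is what wastes a scraper’s compute and bandwidth.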