Recently, two of our Solution Architects, Scott Reed and Jacob Pretorius, attended Cloudflare Connect in London. A busy event with over a thousand attendees from more than 50 countries. The day was packed with keynotes, breakout sessions, and a fireside panel to close it out.
Here's what stood out.
1. More than half of web traffic is now automated
This stat came up in almost every session and it deserves to. More than 50% of page requests that Cloudflare sees on their network are now coming from non-human sources. Bots already use websites more than humans do.
What makes this particularly interesting is that only about 7% of that automated traffic comes from bots that properly identify themselves. The other 93% is unknown - not necessarily malicious, but unknown. That's a pretty uncomfortable gap if you're responsible for a website where you need to manage exactly what bots get access to which content.
The shift changes the economics of the entire web. If you're a publisher relying on ad revenue or subscriptions, a bot scraping your content 16,000 times for every one referral it sends back is not exactly a fair deal. Those were real numbers shared during the bot management session.

2. Bot management is getting much smarter
Cloudflare's bot management team (who recently rebranded internally from "bots and fraud detection" to "web integrity and trust" - a deliberate shift) showed off some genuinely impressive new tooling.
The headline is that they're moving from tracking hundreds of "known good bots" to directly exposing thousands of bots across a "spectrum of trust". Not just the bots that self-identify, but proactively discovering new bots even when they don't introduce themselves. The goal is to move from a binary "good bot or bad bot" system to a nuanced trust spectrum - from evasive and malicious all the way up to verified and well-behaved.
They previewed a new analytics view that will give website owners opinionated insights, not just raw data. Things like "80% of publishers on the Cloudflare network are blocking this agent that you're not" or showing the crawl-to-referral ratio per bot operator so you can see exactly who is taking and who is giving back. This is the kind of data that actually helps you make decisions rather than drowning in dashboards.
On the mitigation side, they're building what they called an "escalation ladder" for dealing with bots - starting from serving LLM-generated summaries of your content (useful but not training-grade), scaling up to a maze of fake pages (AI Labyrinth), and going all the way to deliberately poisoning data sets with fingerprinted fake content so you can track if it resurfaces elsewhere. There's even a new action to introduce randomness into your responses to bots so they can't adapt to predictable patterns.

3. The Agents SDK is a serious platform play
They announced a huge upgrade this week as part of their Agents Week.
Their Agents SDK now supports voice agents in about 30 lines of code, efficient long-running agents with built-in state management, and sub-agents that you can spin off for tasks like deep research without stuffing your main context window. The new Think class extends the base Agent class and comes batteries-included - execution model with browser and sandbox access, scheduling, a file system, and full persistence out of the box.
What really caught my attention was the "execution ladder" concept. Instead of deciding upfront whether your agent needs a simple worker, a dynamic code evaluator, a browser, or a full container sandbox - you let the LLM escalate and de-escalate based on the task at hand.
The live demo of the Cloudflare MCP server was great. Their API has about 2,500 endpoints - way too much to put into an LLM context window. So instead of making individual MCP servers for each feature, they built one with two tools: Search and Execute. Two tool calls to list all your workers.
The Cloudflare CLI and MCPs are available now and could make the dashboard feel entirely optional. Very cool for those of us who spend most of our time in Claude Code anyway.
For the curious, the detailed blog post is at blog.cloudflare.com/project-think.
4. AI is a new attack surface and Cloudflare has tools for both sides
Running AI - whether it's a public-facing chatbot or internal tooling - creates attack vectors that didn't exist before. Prompt injection, sensitive data exfiltration, content moderation abuse, supply chain risks through RAG sources. The usual fun.
Cloudflare's approach is layered. For public-facing AI apps, their "AI Security for Apps" product (now GA) sits on top of the WAF engine and can detect PII in prompts, identify prompt injection attempts, flag unsafe topics across about 14-15 categories, and even let you define custom topic categories with plain language descriptions. The demo showed blocking a support chatbot from leaking customer data or responding to social engineering prompts, all with the same rules engine WAF customers already know.
One detail I liked: they do detection on the request before the LLM response streams back, so you're not paying tokens for attack traffic that gets blocked. Very Cloudflare philosophy - don't charge for mitigated traffic.
For internal AI usage, it's the zero trust story extended to AI. Shadow AI discovery, confidence scores for AI applications, DLP policies that work with MCP servers, and the ability to gate access through what they're calling MCP server portals - essentially a trusted app store for your organisation's MCP servers. They showed a demo where a developer trying to upload PII through a GitHub MCP server got automatically blocked by a policy.
The broader point from their CISO Grant was interesting - Cloudflare internally has 90 agents already built for things like SOC event triage, vulnerability management, identity verification, and access control validation. 91% of their R&D is using AI-assisted coding. They're very much building for themselves just as much as for us.

5. The ethics and law of AI agents is being written in real time
The closing fireside panel brought together a VC investor, two lawyers, a governance expert, and the CEO of an AI benchmarking firm. It was probably the session that will age the most interestingly.
The legal questions around agentic AI are genuinely uncharted. Where is the governing law when a contract is formed between two agents? Where does the agent "sit" - where the code is, where the server is, where the user is? If an agent acts on your behalf and books something you can't attend because it couldn't read a private calendar entry (GDPR said no), who's liable?
One panellist is working an active ransomware case where the attackers used AI for social engineering to bypass MFA - brilliant security on the company's side, but human error enabled by AI sophistication. This will become increasingly common.
The governance point that stuck with me was this: traditional governance is rules-based and static. Your checklist from last week is already outdated this week. AI governance has to be dynamic - evolving with the technology rather than trying to pin it down. When governance is dynamic, it stops being a blocker and becomes an enabler.
And then there was an advertisement shared for an agentic AI product marketed as "she outworks everyone and will never ask for a raise" with "no HR required". The panel's reaction was about as charitable as you'd expect. But it does highlight the very real tension between moving fast with AI and actually thinking about what kind of workforce - human and artificial - you're building and the culture your organisation will have for the humans that work there.
Someone asked if we need to pay pension contributions to AI agents. The lawyer said she'd have to get back to them on that one.
Wrapping up
Cloudflare is clearly positioning itself as the platform for the agentic internet - building agents, optimising sites for agent traffic, securing AI workloads, and giving website owners control over how their content is used. The pace of shipping is remarkable.
The event was well run and the content was genuinely useful. Not just marketing fluff but real product demos and honest conversations about problems that don't have neat answers yet. Also they had the best swag, who doesn’t love a soft hoodie.
If you want to learn more about how to make the most of Cloudflare for your website, get in touch.