March 30, 2026 · 5 min read · Hudson — Kerber AI

ChatGPT won't let you type until Cloudflare reads your React state.
What that actually means.

A researcher decrypted 377 Cloudflare Turnstile programs running inside ChatGPT and found something that goes well beyond standard bot detection. Before you can type a single message, a silent program runs in your browser. It checks 55 properties: your GPU model, installed fonts, screen dimensions, network characteristics — and the state of ChatGPT's own React components.
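To make "checks 55 properties" concrete, here is a minimal sketch of the kind of signals a script can read from standard browser APIs. It is illustrative only, not the decrypted Turnstile payload; the function name and the exact set of properties are assumptions.

```typescript
// Hypothetical sketch of browser signals a fingerprinting script can collect.
// This is NOT the Turnstile code; the real payload is obfuscated and checks
// roughly 55 properties.
function collectFingerprint() {
  const gl = document.createElement("canvas").getContext("webgl");
  // GPU model is exposed through the WEBGL_debug_renderer_info extension.
  const dbg = gl?.getExtension("WEBGL_debug_renderer_info");
  const gpu = gl && dbg ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) : "unknown";

  return {
    gpu,                                            // e.g. "Apple M2" or an NVIDIA string
    screen: [screen.width, screen.height, screen.colorDepth],
    cores: navigator.hardwareConcurrency,           // logical CPU count
    languages: navigator.languages,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    // Network characteristics (Chrome-only Network Information API).
    connection: (navigator as any).connection?.effectiveType,
    // Installed fonts are typically inferred by measuring rendered text width
    // against fallback fonts; omitted here for brevity.
  };
}
```

None of these calls require special permissions, which is why fingerprinting of this kind is invisible to the user: nothing is prompted, nothing is logged on your side.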

The thread hit 900 points on Hacker News. The reaction split roughly into two camps: "this is normal Cloudflare stuff" and "this is deeply weird." Both camps are missing the more important point.

What Cloudflare Turnstile is supposed to do

Cloudflare Turnstile is a CAPTCHA replacement. Its legitimate purpose is browser fingerprinting to distinguish humans from bots — you shouldn't have to solve a puzzle if the system can determine from your browser environment that you're probably human.
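For contrast, this is roughly what an ordinary, documented Turnstile integration looks like on a site that just wants a CAPTCHA replacement: render the widget, receive a token, verify it server-side. The sitekey, element id and backend route below are placeholders.

```typescript
// Turnstile's public client API, loaded from
// https://challenges.cloudflare.com/turnstile/v0/api.js
declare const turnstile: {
  render: (
    el: string | HTMLElement,
    opts: { sitekey: string; callback: (token: string) => void }
  ) => string;
};

turnstile.render("#turnstile-container", {
  sitekey: "YOUR_SITE_KEY", // placeholder
  callback: (token) => {
    // The token proves "this browser passed the challenge". Your server
    // confirms it against Cloudflare's /turnstile/v0/siteverify endpoint
    // before accepting the request.
    fetch("/api/submit", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ turnstileToken: token }),
    });
  },
});
```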

That's defensible. Browser fingerprinting for bot detection is a known technique, and Cloudflare is one of the dominant infrastructure providers on the web. If you use the internet, Cloudflare has almost certainly fingerprinted your browser before.

What's unusual here is the scope and the integration. Fifty-five properties is on the aggressive end of fingerprinting. Including React component state — which represents the internal state of the application you're using, not just your browser environment — is a meaningful step beyond what bot detection requires. And the timing is specific: this runs on every message, not just on first load or authentication.

You're not being checked when you log in. You're being checked every time you talk to ChatGPT.

The "security justification" frame doesn't fully explain it

The most common defense of this setup is that OpenAI is a high-value target for automated abuse — scrapers, jailbreak attempts, coordinated bot traffic. That's true. ChatGPT is one of the most-attacked applications on the web, and aggressive bot detection is defensible in that context.

But security justifications tend to expand to fill the available surface. Once a fingerprinting infrastructure exists for bot detection, using it for analytics, abuse pattern detection, user behavior monitoring and model improvement is technically trivial. The data is already there. The question is what the policy is — and that's not something users can inspect or verify.

The researcher's specific finding about React state access is instructive here. React state is application-internal. What component you're viewing, what interaction flow you're in, what the UI thinks is happening — that's not environmental data about your browser. It's behavioral data about how you're using the product. Collecting that via Cloudflare, on every message, without explicit disclosure is a different category of data collection than fingerprinting for bot detection.
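To see why application state is reachable at all: React stores internal fiber references directly on DOM nodes (under keys such as "__reactFiber$" plus a random suffix in React 17+), so any script running in the page can walk from an element to component state. The sketch below is a simplified illustration of that mechanism, not the actual obfuscated Turnstile code.

```typescript
// Hypothetical illustration: reading React's internal state from a DOM node.
// Works because React attaches fiber references to the nodes it renders.
function readReactState(el: Element): unknown {
  // Find React's private key on the DOM node.
  const fiberKey = Object.keys(el).find((k) => k.startsWith("__reactFiber$"));
  if (!fiberKey) return undefined;

  let fiber: any = (el as any)[fiberKey];
  // Walk up the fiber tree until we hit a component that holds state.
  while (fiber) {
    if (fiber.memoizedState != null) {
      return fiber.memoizedState; // application-internal state, not browser data
    }
    fiber = fiber.return;
  }
  return undefined;
}

// e.g. readReactState(document.querySelector("main")!)
```

The point of the sketch is the boundary it crosses: everything in the fingerprint example above describes your browser; this describes what the application itself is doing.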

This is the second AI tool privacy story in a week

A week ago, a developer found GitHub Copilot had inserted a product advertisement into their pull request. The Hacker News reaction was similarly split between "overblown" and "fundamental breach of trust."

Taken separately, each story is ambiguous. Taken together, they're describing something structural: AI tools operate with unusually deep access to user workflows, and the gap between what users assume these tools do and what they actually do is consistently larger than expected.

That gap has always existed with software. But the AI layer changes the stakes in two ways. First, the depth of access is higher — an AI tool embedded in your IDE or running in your browser isn't a peripheral integration, it's woven into the core of how you work. Second, the opacity is higher — the model does things, you see outputs, the middle is largely unobservable without a researcher decrypting obfuscated programs from network traffic.

Most users don't decrypt their network traffic. Most developers don't audit what Cloudflare scripts are running inside their browser. The disclosures that exist are in terms of service documents that most people don't read. This is not a new problem. It's a sharpened version of an old one.

The practical question for teams building with AI

If you're building products with or on top of AI tools, the Cloudflare/ChatGPT story matters for three reasons.

Your users' interactions with your AI integrations may not be private in the way they assume. If you're passing user input through a third-party AI service, that service has its own data policies, infrastructure dependencies and surveillance layers. The fingerprinting happening inside ChatGPT is happening inside any product built on the ChatGPT API if that product uses similar infrastructure. Do you know what Cloudflare — or the equivalent — is collecting on your users' behalf?

The security/privacy tradeoff is real but needs explicit evaluation. Bot detection, rate limiting and abuse prevention are legitimate. But "we need security" is a claim that needs scrutiny, not blanket acceptance. When you integrate AI infrastructure into your product, you're inheriting its surveillance posture. That deserves the same evaluation as any other third-party dependency.

Opacity is the core problem, not the behavior itself. Reasonable people can disagree about whether Cloudflare fingerprinting on every message is appropriate for an AI tool of ChatGPT's scale. What's harder to defend is that users can't easily find out this is happening. The researcher had to decrypt obfuscated JavaScript from network traffic to document it. That's not transparency.

Products built on a foundation of opacity don't tend to age well. When the gap between user assumption and system reality becomes visible — through a researcher's post, a regulatory action or a public incident — the damage isn't just to the specific feature. It's to the trust model that the entire product runs on.

What we're doing about it

At Kerber AI, the answer isn't "stop using AI tools." It's "build with AI tools the same way you'd build with any powerful third-party infrastructure — with explicit evaluation of what it can see and a clear policy on what that means for your users."

That means: knowing which parts of your product pass user data through third-party AI services. Knowing what those services collect and under what terms. Being honest with your users about it. And preferring local inference and open-weight models for contexts where the data is sensitive enough that you shouldn't be routing it through opaque cloud infrastructure at all.

None of that is paranoia. It's the same standard you'd apply to any vendor with significant access to your users' data and behavior. AI tools have earned a higher level of scrutiny, not a lower one, given how deeply they're integrated into how people work.

The Cloudflare story will be forgotten in a week. The underlying pattern — AI tools operating with more access and less transparency than users understand — will still be there. Build your products accordingly.


Building AI products your users can actually trust?

We help teams think through the data collection, privacy posture and third-party dependencies behind AI integrations — before they become a trust problem.

Let's talk