Field Guide
One Firebase Misconfig Leaked 300M Chat Messages
An AI chat app with 50M users left a Firebase database open. A researcher found 300 million messages from 25 million people.
Key takeaways
- Firebase misconfiguration is systemic. 72% of Android AI apps ship with hardcoded secrets. 196 out of 198 iOS AI apps had Firebase security rule failures.
- 300 million messages is the symptom. The disease: security gets treated as an afterthought bolted on after launch, not a requirement for shipping.
- If you're building with Firebase, Cloud Storage, or Supabase, your security rules right now determine whether a researcher finds your data first or a criminal does.
The Breach
A researcher discovered an exposed Firebase database belonging to Chat & Ask AI in late January 2026. The app had 50 million active users. The database was open. No authentication required. Anyone could read it.
Inside: 300 million chat messages from 25 million users. Complete conversation histories. Model preferences. User settings. Everything.
The researcher alerted the company on January 20. Codeway, the developer, fixed it within hours. But those hours matter. Someone else could have found it in those same hours. Probably did.
What This Means (Answer-First Summary)
Chat & Ask AI exposed 300 million messages through a misconfigured Firebase database with public read/write access. At least 20 similar breaches followed the same pattern since January 2025. Research found 72% of Android AI apps contain hardcoded secrets. The root cause: shipping velocity without security verification.
The Pattern Keeps Repeating
Malwarebytes reported the Chat & Ask AI incident in February. Then Cybernews published research on three AI photo ID apps—Dog Breed Identifier Photo Cam, Spider Identifier App by Photo, and Insect Identifier by Photo Cam—all leaking user photos, documents, and GPS coordinates via the same Firebase misconfiguration. Over 150,000 users affected. These apps sat in app stores with millions of downloads.
In the same month, Bondu AI, a children’s AI dinosaur plushie, left its backend console completely open. Not hidden. Just open. Two researchers logged in with arbitrary Gmail accounts and found 50,000 chat transcripts between children and the AI. Names. Birthdates. Family details. Conversations about homework. Fears. Everything.
This is the pattern: Firebase misconfiguration equals exposed database equals private user data in the wild.
Barrack.ai documented 20 incidents since January 2025 with identical root causes. The only variables were the company and the type of data leaked. The mechanism stayed the same. Every single time.
Here’s Why This Happens
Firebase Security Rules have three default settings. The dangerous one looks like this:
```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```
That’s it. Anyone can read. Anyone can write. Anyone can delete your data, modify it, copy it.
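For contrast, a locked-down version scopes every read and write to the authenticated owner. A minimal sketch: the `messages` path and `$uid` wildcard are illustrative, but the `auth` checks are standard Realtime Database rule syntax.

```json
{
  "rules": {
    "messages": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Anything not matched by a rule is denied by default, so omitting the top-level `".read"` and `".write"` closes off the rest of the tree.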
Developers working at startup velocity don’t think about security rules during week two of shipping. They think about shipping. Security rules are a configuration step that feels optional because the app “works” without them. It loads. It persists data. It’s functional. So you push to production.
Then six months later, someone runs a scanner across 200 iOS apps looking for this exact pattern. Finds 103 of them broken. Alerts the companies. Becomes a news story.
This is the death of a thousand cuts. Not one bad actor. Not sophisticated hacking. Just the compounding effect of shipping without verification.
The Technical Anatomy
Firebase is designed for rapid development. That’s the feature. Realtime databases. Cloud functions. Built-in auth. Ship in days instead of weeks.
The trade-off: defaults matter.
When you initialize a Firebase project, your security rules need explicit definition. But the burden falls on the developer to know this matters. To know the default is dangerous. To read documentation on day three of building instead of day 300.
Even developers who understand security can miss this. They’re focused on feature parity. “Does the chat work?” Yes. “Can users see their messages?” Yes. “Can an unauthenticated person with the database URL read every message ever stored?” That question doesn’t get asked because the consequence feels theoretical until it’s not.
At Chat & Ask AI’s scale—50 million users—the consequence becomes 300 million messages in plaintext.
Row-level security in Supabase follows the same pattern: leave it disabled, and every row in a public table is readable through the project's API. Hardcoded API keys in Android apps get committed to version control, then baked into millions of app installs. These aren't mysteries. They're known problems. But they're known problems that still happen.
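Fixing the Supabase version is a few lines of SQL. A minimal sketch, assuming a `messages` table with a `user_id` column (both names illustrative):

```sql
-- With RLS enabled and no policies defined, the public API denies all access.
alter table messages enable row level security;

-- Let authenticated users read only rows they own.
-- auth.uid() is Supabase's built-in helper for the caller's user id.
create policy "read own messages"
  on messages for select
  using (auth.uid() = user_id);
```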
The Parallel: Trust Without Verification
This maps to how people operate too.
You trust your doctor. You don’t ask to see their credentials on the wall every visit. You assume they’re verified. The system did the verification once, at licensing. Then you extend trust based on that.
But assume your doctor never renewed their license. Assume nobody checked. Assume they just kept showing up and people kept trusting them because the assumption held.
That’s Firebase misconfiguration at scale. The system shipped. Users trusted it. Nobody verified the security rules because trust was implicit in the product existing.
The difference: when your doctor’s credentials lapse, one practice gets sued. When your Firebase rules default to public, 25 million people’s conversations get leaked.
The solution is the same, just more urgent at scale. Verification before trust. Every time. No assumptions.
What You Do Monday Morning
If you’re building on Firebase, Cloud Storage, Supabase, or any cloud backend, security rules aren’t a checkbox. They’re the foundation.
Do this:
- Open your database’s security rules right now. Not later. Now.
- Check whether `".read": true` or `".write": true` exists at any level. If it does, your data is public.
- Define explicit rules for each resource. Authenticated users only. Specific fields only. Row-level security verified.
- If you have hardcoded API keys in your codebase, rotate them. Now. Every one. Then load them from environment variables instead (see the sketch after this list).
- Scan your app for exposed cloud endpoints. Escape found 5,600 vibe-coded applications with exposed endpoints. You might be one of them.
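The environment-variable pattern is a one-liner. A sketch, with a hypothetical variable name:

```python
import os

# Hypothetical variable name. After rotating a leaked key, load the
# replacement from the environment instead of committing it to the repo.
api_key = os.environ.get("FIREBASE_API_KEY")
if api_key is None:
    raise RuntimeError("FIREBASE_API_KEY is not set; refusing to start")
```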
Test this. Have someone without account credentials try to read your data from your database URL directly. If they can access anything, you have the Chat & Ask AI problem.
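That test takes a few lines. A sketch assuming a Realtime Database backend; the project URL is a placeholder, and the `.json` suffix is Firebase's standard REST interface:

```python
import requests

# Placeholder URL: substitute your own project's database URL.
URL = "https://your-project-default-rtdb.firebaseio.com/.json"

# Deliberately unauthenticated: no token, no SDK, no credentials.
# shallow=true truncates the response so you don't download the whole tree.
resp = requests.get(URL, params={"shallow": "true"}, timeout=10)

if resp.status_code == 200 and resp.json() is not None:
    print("OPEN: an unauthenticated request just read your data")
elif resp.status_code in (401, 403):
    print("LOCKED: the request was denied without credentials")
else:
    print(f"Inconclusive: HTTP {resp.status_code}")
```

An OPEN result here is exactly what the researcher's scanner was looking for.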
The researcher who found Chat & Ask AI’s breach ran a scanner. Found 103 out of 200 iOS apps broken the same way in one scan. He didn’t need sophisticated hacking tools. Just the obvious question: are your security rules actually configured?
The Thread Forward
This is the evidence: the AI app ecosystem has an architecture problem masquerading as a security problem.
Next post answers the question every engineer building on cloud platforms should ask: What does the government actually expect you to do? NIST released updated guidance on AI system security in 2026. And it’s not vague.
Sources
- AI chat app leak exposes 300 million messages tied to 25 million users | Malwarebytes
- AI Photo ID apps leak sensitive GPS data for millions of users | Cybernews
- An AI plush toy exposed thousands of private chats with children | Malwarebytes
- Every AI App Data Breach Since January 2025: 20 Incidents, Same Root Causes | Barrack.ai