Logan Kelly

Lovable's 48-Day Silent Breach Shows Why AI Platforms Need Audit Trails, Not Just Bug Bounties


Lovable's BOLA flaw sat open for 48 days before disclosure. Here's what AI platform teams get wrong about compliance — and how runtime audit trails fix it.


A security researcher found that anyone with a free Lovable account could read the source code, database credentials, and AI conversation history from projects built by the platform's millions of users. The flaw had reportedly been sitting there for at least 48 days. The researcher had submitted it through HackerOne, where the report was closed as a duplicate — and the underlying flaw was left open.

When the story broke on April 20, Lovable's initial response was to call it "intentional behavior."

An AI platform audit trail is an immutable, durable record of every access event across a system — who requested what data, whether they were authorized to have it, and when it happened. When this kind of record is enforced at runtime, unauthorized cross-tenant access creates a detectable anomaly the moment it occurs — not 48 days later when a researcher goes public.

This is not primarily a story about a bad BOLA implementation. It's a story about what happens when an AI platform has no compliance infrastructure — no mechanism to detect that a disclosed vulnerability was being actively exploited, no audit record of who accessed what, and no disclosure process that could fire when the bug bounty channel failed. The security flaw created the exposure. The absence of a runtime audit trail let it persist undetected for over six weeks.

That distinction matters a great deal to anyone running production AI systems right now.

What Actually Happened at Lovable — and Why Doesn't "Intentional Behavior" Hold?

Lovable is an AI-powered vibe coding platform — users describe what they want to build in plain language, and the platform generates full-stack applications including frontend, backend, authentication, and database connectivity. The platform reportedly had eight million users and approximately $400M ARR at the time of this incident.

The vulnerability at the center of April's disclosure is a Broken Object Level Authorization (BOLA) flaw — ranked number one on OWASP's API Security Top 10 for good reason. BOLA occurs when an API verifies that a user is authenticated but skips the check for whether that user actually owns the resource they're requesting. Lovable's /projects/{id}/* endpoints verified Firebase authentication tokens correctly. They just didn't verify ownership. That single gap was enough to put every project's source tree, credentials, and AI conversation history within reach of any free-tier account holder.
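The gap is easiest to see in code. The sketch below is illustrative only — the names (`PROJECTS`, `fetch_project_vulnerable`, `fetch_project_fixed`) are hypothetical, not Lovable's actual API — but it captures the BOLA pattern: the vulnerable handler checks that a caller is authenticated, while the fixed handler also checks that the caller owns the object.

```python
# Minimal sketch of a BOLA flaw and its fix. All names are hypothetical
# illustrations, not Lovable's actual endpoints or data model.

PROJECTS = {
    "proj-123": {"owner": "alice", "source": "...", "db_creds": "..."},
}

def fetch_project_vulnerable(authenticated_user: str, project_id: str) -> dict:
    """Verifies the caller is logged in, but never checks ownership."""
    if authenticated_user is None:
        raise PermissionError("not authenticated")
    return PROJECTS[project_id]  # any logged-in user can read any project

def fetch_project_fixed(authenticated_user: str, project_id: str) -> dict:
    """Object-level authorization: the caller must own the requested resource."""
    project = PROJECTS[project_id]
    if project["owner"] != authenticated_user:
        raise PermissionError("not the resource owner")
    return project
```

The fix is one comparison. The reason BOLA tops the OWASP list anyway is that nothing fails loudly when the check is missing: the vulnerable version returns valid data to valid tokens, so every request looks healthy to conventional monitoring.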

The flaw affected all projects created before November 2025. Lovable had apparently patched newer projects at some point but left the older cohort — including actively maintained projects — fully exposed. One researcher noted a project with over 3,700 recent edits and activity within the past 10 days that returned full data to a cross-account request from an unrelated free-tier account.

Because Lovable bundles frontend, backend, auth, and database connectivity as a single provisioned unit, a platform-level tenant isolation failure like this doesn't just expose one app. It reaches every application built on the platform.

The "intentional behavior" framing didn't survive contact with the technical community. By the time The Register and The Next Web had picked up the story, the more credible characterization — platform-level tenant isolation failure — was the one sticking.

Is the 48-Day Gap a Security Failure or a Compliance Failure?

The BOLA vulnerability is a security problem. The 48-day silent period is a compliance problem, and the distinction is worth being precise about.

Security failures create exposure. Compliance failures determine how long that exposure persists without detection, escalation, or disclosure. The Lovable breach had both. The security failure was the BOLA flaw. The compliance failures were:

No detection. There is no indication that Lovable's internal systems flagged anomalous cross-account access patterns during those 48 days. A runtime that logs every project access with its requestor identity — and flags when project IDs are accessed by non-owner accounts — would have surfaced this through ordinary audit trail review. That infrastructure apparently didn't exist.

No escalation path when the bug bounty channel failed. When HackerOne closed the submission as a duplicate, the disclosure chain ended there. There was no secondary process — no internal ticket, no CISO notification, no clock running on a disclosure deadline. Bug bounties are not compliance infrastructure. They're a crowd-sourced supplement to it.

No disclosure obligation triggered. GDPR's 72-hour breach notification obligation applies when personal data is compromised. California's updated CCPA framework has analogous requirements for California residents. AI conversation histories — and source code and credentials tied to identifiable users — can qualify as personal data under these frameworks. A 48-day silence before disclosure does not satisfy a 72-hour notification requirement. Separately, the EU AI Act's Article 50 transparency obligations — taking full effect August 2, 2026 — impose their own disclosure requirements on AI systems, including notification when users interact with AI and labeling of AI-generated content, adding a further layer of compliance exposure for platforms operating in the EU.

Grant Thornton's 2026 research found that 78% of senior leaders lack full confidence their organization could pass an independent AI governance audit within 90 days. The Lovable incident is a concrete illustration of why. Governance audits look for evidence — logs, access records, decision trails, escalation history. If none of that was captured during the 48-day window, there's nothing to audit.

Why Is Platform-Level Tenant Isolation an AI Governance Problem?

This isn't purely a Lovable problem. It's a structural problem with how most AI platforms have been built.

Vibe coding platforms like Lovable, AI coding assistants integrated into cloud IDEs, agent frameworks that share tool infrastructure across tenants — all of these create a new attack surface that traditional application security models weren't designed for. The research is stark: 40 to 62% of AI-generated code contains vulnerabilities, and 91.5% of vibe-coded apps had at least one AI hallucination-related flaw in Q1 2026 alone. That's the code your platform is generating for customers. The isolation layer between those customers is the only thing preventing one customer's vulnerability from becoming every customer's breach.

This is where compliance assurance shifts from being a nice-to-have to being load-bearing infrastructure. Tenant isolation in AI platforms isn't just a security requirement — it's a data governance requirement. When an AI platform processes, stores, and exposes database credentials and conversation histories as part of its core service, the isolation boundaries between tenants are, effectively, the data handling boundaries that regulators care about.

The Lovable response — acknowledging the flaw, patching newer projects, leaving older projects exposed, denying a breach occurred — suggests a platform that treated isolation as a technical property rather than a compliance property. Those are different things. A technical property gets patched when someone notices. A compliance property has to be enforced continuously, logged durably, and auditable on demand.

How Does an Audit Trail Policy Change the Equation?

There's a version of this incident where Lovable's security team finds out about active cross-account access on day three, not day 48. What's different in that version isn't the BOLA flaw — it's the infrastructure around it.

A runtime audit trail — durable records of what actions were taken, by whom, on which resources — creates a detection surface that doesn't depend on external reporters. Cross-account project access that isn't authorized by the platform's ownership model is anomalous by definition. You can't write a policy against it if you can't see it, and you can't see it if you're not logging it.
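The mechanics are not exotic. A minimal sketch, assuming an illustrative schema (`AccessEvent`, `AuditTrail` — these names and fields are assumptions, not Waxell's or Lovable's actual API): every access is recorded with the requester's identity and an authorization verdict, and cross-account access falls out of a trivial query.

```python
# Sketch of a runtime audit trail with a non-owner-access check.
# Schema and names are illustrative assumptions, not a vendor's real API.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessEvent:
    timestamp: float
    requester: str    # authenticated identity making the request
    project_id: str   # resource touched
    authorized: bool  # did the ownership model permit this access?

class AuditTrail:
    def __init__(self, owners: dict[str, str]):
        self.owners = owners                  # project_id -> owning account
        self.events: list[AccessEvent] = []   # append-only in this sketch

    def record(self, requester: str, project_id: str) -> AccessEvent:
        event = AccessEvent(
            timestamp=time.time(),
            requester=requester,
            project_id=project_id,
            authorized=(self.owners.get(project_id) == requester),
        )
        self.events.append(event)
        return event

    def anomalies(self) -> list[AccessEvent]:
        """Cross-account access is anomalous by definition under this model."""
        return [e for e in self.events if not e.authorized]
```

A production system would write these events to durable, tamper-evident storage rather than a list, but the detection logic is exactly this simple: if the record exists, the first unauthorized read is visible on day one.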

Waxell's audit trail policy records every agent execution: what was requested, what data was accessed, which identity made the request, and what was returned. That record is durable and queryable. For a platform like Lovable, this infrastructure would capture the cross-account access patterns immediately. The 48-day window collapses to hours, because the anomaly is visible in the execution log as soon as it starts.

The second thing this changes is the disclosure posture. When a bug bounty report comes in and gets closed incorrectly, a mature compliance infrastructure doesn't depend on that channel to trigger a response. Internal policy enforcement — rules that flag and escalate unauthorized access patterns — runs independently of whether HackerOne closed a ticket. The clock on disclosure obligations starts when the access happens, not when a researcher gets frustrated enough to go public.

This is the difference between a security posture and a compliance posture. Security is about preventing the bad thing from happening. Compliance is about knowing when the bad thing is happening, documenting what occurred, and meeting your disclosure obligations. Most AI platforms in 2026 have invested heavily in the first. Almost none of them have built the second.

What This Means for Teams Building on AI Platforms

If your team builds production applications on top of AI platforms — vibe coding tools, cloud IDE integrations, managed agent platforms — the Lovable incident is worth reading carefully. The question it raises isn't "is Lovable safe to use?" The question is: "If the platform we're building on has a tenant isolation failure, would we know about it? And how quickly?"

That answer depends on whether your own runtime has the logging and policy infrastructure to detect anomalous behavior in the data that flows through it, independent of your platform vendor's disclosure practices. Bug bounties are a valuable supplement to security engineering. They are not a substitute for governance infrastructure that you control.

The compliance landscape is only getting stricter. The EU AI Act's full enforcement window opens in August. California and New York have their own AI disclosure frameworks in development. The question regulators will ask when an incident occurs is not whether you had a bug bounty program. It's whether you had an audit trail, and what it shows.

FAQ

What is a BOLA vulnerability, and why is it particularly dangerous in AI platform contexts?

BOLA — Broken Object Level Authorization — is an API flaw where a system verifies that a user is authenticated but doesn't verify whether they own the resource they're requesting. It's ranked number one on the OWASP API Security Top 10 because it's common and severe. In AI platform contexts, where a single platform bundles authentication, storage, and AI conversation history across many tenants, a BOLA flaw doesn't just expose one customer — it exposes every customer whose data shares the same access model.

Is Lovable's "intentional behavior" response legally defensible under GDPR or the EU AI Act?

Unlikely, for data that falls under GDPR's definition of personal data. GDPR's 72-hour breach notification obligation applies when personal data is compromised, and AI conversation histories tied to identifiable users qualify; exposed credentials and code can as well. The characterization of "intentional behavior" does not change the access that occurred, and regulators evaluating a 48-day disclosure gap will focus on what the organization knew and when, not on how it described the flaw initially.

What disclosure obligations apply to AI platforms that expose user data?

Under GDPR, data controllers must notify supervisory authorities within 72 hours of becoming aware of a personal data breach. CCPA has analogous requirements for California residents. The EU AI Act's Article 50 transparency obligations, taking full effect August 2, 2026, add requirements around AI-specific data handling. Platforms that process user data to power AI-generated applications are generally subject to these frameworks.

How do audit trails differ from observability tools in catching platform-level isolation failures?

Observability tools typically surface operational metrics — latency, error rates, token usage. Audit trails capture a different class of data: identity, authorization, access records. Detecting cross-account access requires asking "who accessed this resource, and were they authorized?" That's an audit question, not an observability question. Most LLM observability platforms don't log access-level authorization data because they're designed to answer performance questions, not compliance questions.
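The distinction shows up in the record shapes themselves. The field names below are assumptions for illustration, not any vendor's schema, but the contrast holds: a performance span cannot answer an authorization question because it never carried identity or authorization data in the first place.

```python
# Illustrative contrast between an observability record and an audit record.
# Field names are assumptions, not any specific vendor's schema.
from dataclasses import dataclass

@dataclass
class ObservabilitySpan:       # answers performance questions
    latency_ms: float
    error: bool
    tokens_used: int

@dataclass
class AuditRecord:             # answers compliance questions
    requester_identity: str    # who made the request
    resource_id: str           # what they touched
    authorized: bool           # were they allowed to
    timestamp: float           # when it happened

def can_detect_cross_account(record) -> bool:
    """Only records carrying identity and an authorization verdict can answer
    'who accessed this resource, and were they allowed to?'"""
    return hasattr(record, "requester_identity") and hasattr(record, "authorized")
```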

What should engineering teams do if they've built production apps on Lovable?

Rotate all credentials from any project created before November 2025 immediately. Assume source code, database credentials, and AI conversation histories from those projects were potentially accessible by any Lovable account holder during the exposure window. Audit what personal data those apps handle and assess whether breach notification obligations apply to your users.

How does Waxell's audit trail policy enforce tenant isolation at the runtime layer?

Waxell's execution records log every action an agent takes — including what data was accessed, by which identity, and what was returned. Policy enforcement rules can be written against these records to flag and block cross-tenant access patterns in real time. The result is a compliance posture that doesn't depend on a bug reporter or an external disclosure to trigger a response — anomalous access surfaces immediately in the execution log.


Waxell

Waxell provides observability and governance for AI agents in production. Bring your own framework.

© 2026 Waxell. All rights reserved.

Patent Pending.
