Exposed Google API keys fuel unauthorized Gemini AI requests
The first warning was a routine billing alert, but the real shock came after it: an attacker had been turning a public-facing Google API key into an open door to Gemini AI. That pattern is now drawing fresh scrutiny as developers report major financial damage tied to exposed credentials and unrestricted AI calls.
Security researchers from US News Hub Misryoum said the incidents trace back to an unintended elevation of publicly available API keys into live Gemini AI credentials. In practice, keys developers embedded for services like Maps or Firebase were treated as authorization material, allowing attackers to run inference requests without permission.
The consequences have been stark. One case involved a solo developer whose startup nearly collapsed after an attacker used a publicly accessible key to flood Gemini AI with inference requests. The developer revoked the key within minutes of receiving a billing alert, yet due to a reporting lag in Google Cloud’s billing system, the charges had already reached $15,400. Other teams saw similar outcomes: a Japanese company faced approximately $128,000 in unauthorized Gemini API usage despite firewall-level IP restrictions, while a small development team in Mexico saw an $82,314 spike in only 48 hours—reported as a 455-times increase over typical spending.
For many teams, this is more than a security glitch; it is a reminder that how credentials are treated can change the risk profile overnight. The speed at which attackers can burn through budgets also suggests a need for faster alerting and tighter spending controls.
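The kind of spike described above is large enough to catch with even a crude baseline check, well before a monthly invoice arrives. A minimal sketch of such a check (the function name and the 10x threshold are illustrative assumptions, not part of any Google Cloud API):

```python
from statistics import mean

def is_spend_spike(history, current, ratio_threshold=10.0):
    """Return True when `current` spend exceeds the average of
    `history` by at least `ratio_threshold` times.

    The threshold is a hypothetical tuning choice; real alerting
    would also account for billing-report lag."""
    baseline = mean(history)
    if baseline == 0:
        return current > 0
    return current / baseline >= ratio_threshold

# Typical daily spend around $180; a 455x jump trips the check.
normal_days = [170.0, 185.0, 175.0, 190.0]
print(is_spend_spike(normal_days, 200.0))        # → False
print(is_spend_spike(normal_days, 180.0 * 455))  # → True
```

A check like this only helps if it runs on near-real-time usage data; as the $15,400 case shows, a lag in the billing pipeline can make even a fast key revocation too late.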
“This issue does not stem from developer negligence; the implementations were compliant with Google’s prescribed guidelines,” said Tuhin Bose, cybersecurity researcher at CloudSEK. He explained the architecture effectively converted non-sensitive identifiers into authentication tokens, creating a systemic vulnerability across numerous applications. CloudSEK’s research identified 32 exposed Google API keys across 22 Android applications with a combined install base exceeding 500 million users.
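Findings like CloudSEK's typically come from pattern-matching decompiled app resources for embedded credentials. A minimal sketch of such a scan, assuming the widely documented `AIza` prefix format of Google API keys (the function name and the sample input are hypothetical):

```python
import re

# Google API keys start with "AIza" followed by 35 URL-safe characters.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_keys(text):
    """Return every substring of `text` matching the Google API key format."""
    return GOOGLE_KEY_RE.findall(text)

# Example: scanning a decompiled strings.xml dump (contents hypothetical).
dump = '<string name="maps_key">AIza' + 'A' * 35 + '</string>'
print(len(find_google_keys(dump)))  # → 1
```

A match only proves a key is present, not that it is abusable; whether it can authorize Gemini calls depends on the restrictions attached to it in the Google Cloud project.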
The affected apps include household names such as OYO Hotel Booking App, Google Pay for Business, Taobao, and ELSA Speak. Researchers confirmed data exposure in ELSA Speak when they accessed user-submitted audio files via the Gemini Files API. The vulnerability allows attackers to perform unlimited Gemini API calls, access sensitive user data, and exhaust organizational API quotas. It can also persist through app update cycles: a key embedded in older installed versions of an app remains usable until it is revoked server-side, so users and developers may stay exposed even after a fix ships.
Researchers also said the pattern can be hard to spot because developers who followed Google's guidance may unknowingly hold live credentials to powerful AI tools, with no notification or opt-in prompt to flag the elevated access. Technical measures such as revoking keys and restricting project permissions can mitigate exposure. Still, the financial and operational impact on developers is substantial, and the episode underscores how an exposed key can become a liability once AI-enabled cloud services are involved.
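The key-level mitigations mentioned above can be applied from the command line. A sketch using `gcloud` (KEY_ID and the target service are placeholders; exact flags may vary by SDK version):

```shell
# List API keys in the current project to locate the exposed one.
gcloud services api-keys list

# Restrict the key to only the API it actually needs (e.g. Maps),
# so it can no longer authorize calls to other services.
gcloud services api-keys update KEY_ID \
    --api-target=service=maps-backend.googleapis.com

# If the key has already been abused, delete it and issue a new one.
gcloud services api-keys delete KEY_ID
```

Restricting by API target narrows the blast radius of a leak, but a key that must ship inside a client binary should be assumed public and scoped accordingly from the start.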
At the core of the Gemini AI threat is the same weakness: treating publicly available API keys as though they were safe. With an exposed key effectively granting unlimited requests until it is revoked, teams will likely need to reevaluate both their API handling and their Gemini integrations to prevent the next surge of costs. For developers whose Google API keys are already public, faster monitoring and disciplined credential management may be the only way to keep Gemini access from becoming the next expensive incident reported on US News Hub Misryoum.