A newly disclosed security finding has revealed a significant cloud and API security vulnerability affecting Google Cloud Platform users following the rollout of Gemini AI services. Researchers discovered that nearly 3,000 Google Cloud API keys exposed on the public internet, long regarded as non-sensitive billing tokens, can be abused to reach sensitive AI endpoints once a project enables the Generative Language API (Gemini).
These keys – often embedded in client-side JavaScript or public code – were historically used to enable services like Maps, Firebase, and other utilities. Google’s prior developer guidance treated API keys as safe for public exposure because they were not intended to grant privileged access. However, when a project enables the Generative Language API, those same keys silently inherit authentication capabilities that allow access to internal Gemini endpoints, including:
• Private files
• Cached data
• Model interactions
• Charged AI usage
There is no warning or notification when a key is upgraded in this way.
The issue creates two core risks: misuse of exposed API keys to access sensitive cloud content, and quota theft, where attackers generate unauthorized AI calls that rack up unexpected bills. In one reported case, a compromised API key was used to incur over $82,000 in charges in 48 hours – a dramatic spike compared with the project’s typical monthly bill.
While Google has responded by blocking known leaked keys attempting to access Gemini and modifying how new AI Studio keys are scoped, legacy projects remain at risk unless keys are audited and rotated. Organizations are advised to check all Google Cloud API keys, especially those embedded in public sites or repositories, and disable or restrict them where possible.
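As a hedged illustration of such an audit, key metadata (for example, exported via the `gcloud services api-keys list` command) could be checked for keys that are unrestricted or explicitly scoped to the Generative Language API. The dictionary layout and field names below are assumptions made for the sketch, not Google’s actual export schema.

```python
# Sketch: flag risky Google Cloud API keys from exported key metadata.
# The dict layout below is a hypothetical export format, not an official schema.

GENERATIVE_LANGUAGE = "generativelanguage.googleapis.com"

def flag_risky_keys(keys):
    """Return names of keys that are unrestricted or can reach the Gemini API."""
    risky = []
    for key in keys:
        allowed = key.get("api_targets")  # None means the key is unrestricted
        if allowed is None or GENERATIVE_LANGUAGE in allowed:
            risky.append(key["name"])
    return risky

inventory = [
    {"name": "maps-key", "api_targets": ["maps-backend.googleapis.com"]},
    {"name": "legacy-key", "api_targets": None},          # unrestricted
    {"name": "ai-key", "api_targets": [GENERATIVE_LANGUAGE]},
]
print(flag_risky_keys(inventory))  # ['legacy-key', 'ai-key']
```

Keys flagged this way would be candidates for rotation, API restriction, or outright disablement.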
Why This Matters to Enterprise Security
API Keys Are the New Perimeter
API keys that were once considered benign identifiers can now serve as authentication credentials for powerful AI services. This represents a fundamental shift in cloud security assumptions – something enterprises must recalibrate for.
Hidden Attack Surface Expansion
Attackers can harvest exposed API keys by scraping public sites, code repositories, or web applications. What was previously low-risk “billing identifier” data now becomes a launch point for unauthorized cloud access and AI abuse.
Financial and Data Impact
Unauthorized AI calls can lead to unpredictable billing spikes, cost absorption issues, and unapproved access to stored files or cached AI data. This has direct implications for cloud budgets, incident response workflows, and audit trails.
Strategic Lessons for Security Leaders
Treat API Keys as Secrets
Keys should be managed like passwords: never embedded in client-side code, always stored securely, and scoped to least privilege.
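A minimal Python sketch of that principle: load the key from the environment at runtime rather than shipping it in source, and fail fast if it is missing. The variable name is an assumption chosen for illustration.

```python
import os

def load_api_key(var: str = "GOOGLE_API_KEY") -> str:
    """Fetch an API key from the environment instead of hardcoding it.

    Keeping the key out of source code means it never lands in a public
    repository or a client-side bundle.
    """
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; refusing to start without a key")
    return key
```

On a server, the variable would be injected by the deployment platform or a secrets store; client-side code should call a backend proxy rather than hold the key at all.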
Enforce Key Rotation and Auditing
Regularly review and rotate all API keys, especially when enabling new services like AI APIs that change credential behavior.
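One way to operationalize this is a periodic audit that flags keys past a rotation deadline. The sketch below assumes a simple inventory of key names and creation timestamps; the 90-day period is an example policy, not a Google requirement.

```python
from datetime import datetime, timedelta, timezone

ROTATION_PERIOD = timedelta(days=90)  # example policy; tune to your organization

def keys_due_for_rotation(created_at, now=None):
    """Given {key_name: creation datetime}, return names past the rotation period."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in created_at.items()
                  if now - ts > ROTATION_PERIOD)

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
inventory = {
    "web-key": datetime(2025, 5, 1, tzinfo=timezone.utc),   # 31 days old
    "old-key": datetime(2024, 11, 1, tzinfo=timezone.utc),  # well past 90 days
}
print(keys_due_for_rotation(inventory, now))  # ['old-key']
```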
Implement Cloud Secrets Management
Use secrets stores, restricted scopes, short-lived credentials, and strong IAM policies to eliminate broad, unrestricted keys.
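To illustrate the short-lived-credential pattern only: the sketch below mints a random token with an explicit expiry and rejects it after the window closes. Real deployments would rely on a platform mechanism (for example, a token-exchange or workload-identity service) rather than this hand-rolled stand-in.

```python
import time
import secrets
from dataclasses import dataclass

@dataclass
class ShortLivedToken:
    """Illustrative short-lived credential: a random value plus an expiry time."""
    value: str
    expires_at: float  # Unix timestamp after which the token is rejected

    def is_valid(self, now=None):
        return (now if now is not None else time.time()) < self.expires_at

def issue_token(ttl_seconds=900):
    """Mint a token valid for ttl_seconds (15 minutes by default)."""
    return ShortLivedToken(secrets.token_urlsafe(32), time.time() + ttl_seconds)
```

Because an expired token is worthless, a leak has a bounded blast radius – the core advantage over a long-lived, unrestricted API key.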
Continuous Scanning for Exposures
Align CI/CD pipelines with automated secret scanning (e.g., TruffleHog or other open-source scanners) to detect inadvertent leaks in code and repositories.
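The core of such scanning is pattern matching. Google API keys follow a well-known shape – the prefix "AIza" followed by 35 URL-safe characters – which a simple check can catch before code is pushed. This sketch is a minimal stand-in for a full scanner, not a replacement for one.

```python
import re

# Well-known pattern for Google API keys: "AIza" followed by 35 URL-safe chars.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_for_keys(text: str):
    """Return any substrings of `text` shaped like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)

# Synthetic sample (not a real key) embedded in client-side config:
sample = 'const cfg = { apiKey: "AIza' + "x" * 35 + '" };'
print(scan_for_keys(sample))
```

Dedicated tools add entropy checks, many more credential patterns, and git-history scanning, so a regex like this complements rather than replaces them.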
About COE Security
COE Security partners with organizations in financial services, healthcare, retail, manufacturing, and government to secure AI-powered systems and ensure compliance. Our offerings include:
• AI-enhanced threat detection and real-time monitoring
• Data governance aligned with GDPR, HIPAA, and PCI DSS
• Secure model validation to guard against adversarial attacks
• Customized training to embed AI security best practices
• Penetration Testing (Mobile, Web, AI, Product, IoT, Network & Cloud)
• Secure Software Development Lifecycle (SSDLC) Consulting
• Customized Cybersecurity Services
In response to API and cloud security challenges like this, we help enterprises:
• Conduct API security and secrets governance assessments
• Implement continuous cloud security posture monitoring
• Deploy API key lifecycle management and rotation processes
• Integrate AI and cloud risk into enterprise threat models
• Align security controls with regulatory and compliance frameworks
Follow COE Security on LinkedIn for ongoing insights into secure, compliant AI adoption – and stay cyber safe.