As generative AI tools like ChatGPT, Gemini, and Copilot continue reshaping productivity, there’s a growing undercurrent of unease. Quietly, beneath the surface of convenience and innovation, sensitive data is slipping through unseen cracks: cracks that organizations may not even know exist.
A recent study by cybersecurity firm Harmonic Security reviewed over 176,000 user prompts from roughly 8,000 individuals using genAI platforms. The findings were unsettling: 6.7% of prompts contained sensitive information ranging from financial projections and legal documents to employee payroll details and customer credit card data.
It’s not just about isolated incidents. A significant portion of that sensitive data, nearly 30%, concerned company financials, mergers and acquisitions, sales pipelines, billing information, and even privileged legal conversations. And developers, in their pursuit of faster, smarter coding, have been known to paste snippets containing proprietary source code, credentials, and network details straight into these platforms.
Most platforms assure users that data is safe, anonymized, or not shared with third parties. But the real risk lies not in external breaches but in what employees willingly share. The gates aren’t being broken into; they’re being held open.
The growing integration of AI into everyday tasks (writing emails, generating reports, drafting code) means the line between harmless automation and confidential exposure is increasingly blurred. According to McKinsey, only 27% of companies rigorously vet AI-generated content before use, while 43% review less than 40% of it. This leaves a majority exposed to the hidden dangers of unreviewed, AI-assisted workflows.
Adding to the tension is the rise of China-based LLMs, which have flooded the ecosystem over the past year. From Baidu’s Ernie Bot to DeepSeek and Moonshot, these models carry additional geopolitical risk. In some jurisdictions, data accessed by local companies may be subject to government inspection, an alarming prospect for global businesses.
This isn’t just about AI anymore. It’s about AI hygiene, a new branch of cyber hygiene that organizations must urgently adopt. This includes educating employees, restricting unmonitored access to AI tools, vetting content before use, and building internal policies to control the flow of sensitive information, as sketched below.
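To make that last point concrete, here is a minimal, illustrative sketch of one such control: a pre-submission filter that scans an outgoing prompt for obvious sensitive patterns (card numbers, API-key-like strings, email addresses) and redacts them before anything reaches an external genAI service. The pattern set and function names are assumptions for illustration only; a real deployment would rely on a vetted DLP engine and organization-specific rules.

```python
import re

# Illustrative patterns only; real policies would add customer IDs,
# project codenames, payroll fields, and other organization-specific rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outgoing prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def redact_prompt(prompt: str) -> str:
    """Replace each sensitive match with a placeholder tag before submission."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt

if __name__ == "__main__":
    user_prompt = "Summarize this invoice for card 4111 1111 1111 1111 sent to jane.doe@example.com"
    findings = scan_prompt(user_prompt)
    if findings:
        print(f"Sensitive data detected ({findings}); redacting before sending:")
        print(redact_prompt(user_prompt))
```

Whether a flagged prompt is blocked outright, redacted, or routed for review is a policy decision; the value of even a simple gate like this is that exposure becomes a deliberate choice rather than an accident.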
Conclusion
The convenience of generative AI should not come at the cost of organizational security. While these tools offer speed and efficiency, they also demand vigilance. The silent leakage of sensitive information is a growing cyber threat that requires immediate action: not tomorrow, but today. The future of cybersecurity will not only be about firewalls and encryption, but also about what your employees type into a chatbot.
About COE Security
COE Security partners with organizations in financial services, healthcare, retail, manufacturing, and government sectors to secure AI-powered systems and ensure compliance. We’re committed to helping clients stay ahead of emerging threats like AI data leakage and social engineering, which are becoming increasingly potent due to the widespread use of genAI tools.
Our offerings include:
- AI-enhanced threat detection and real-time monitoring
- Data governance aligned with GDPR, HIPAA, and PCI DSS
- Secure model validation to guard against adversarial attacks
- Customized training to embed AI security best practices
- Penetration Testing (Mobile, Web, AI, Product, IoT, Network & Cloud)
- Secure Software Development Consulting (SSDLC)
- Customized Cybersecurity Services
Follow COE Security on LinkedIn for ongoing insights into safe, compliant AI adoption. Stay informed, stay compliant, and, most importantly, stay cyber safe.