A growing threat is targeting AI developers in the United States: fake job platforms operated by North Korean groups. These platforms are designed to look legitimate, promising work opportunities, freelance projects and attractive remote roles. Behind the scenes, attackers use these interactions to gather sensitive information, gain access to development systems or plant malicious code.
This campaign focuses heavily on professionals working in artificial intelligence, machine learning and software engineering. The goal is to exploit trust and use the developer’s access to source code, tools and cloud environments. Once contact is made through the job platform, attackers request technical samples, access to personal devices or code repositories. Some attempts involve delivering harmful files disguised as assessments or tasks.
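One practical precaution when handed an unsolicited "assessment" project is to inspect it before installing or running anything. As a minimal sketch, the script below (file names and layout are illustrative assumptions, not details from any specific campaign) scans a Node.js project for npm lifecycle hooks such as `preinstall` and `postinstall`, which execute automatically during `npm install` and are a common way a booby-trapped task runs code on a developer's machine:

```python
# Minimal sketch: list npm lifecycle hooks in an untrusted project
# BEFORE running `npm install`. Hooks like preinstall/postinstall
# execute automatically on install, so they deserve manual review.
import json
from pathlib import Path

# Lifecycle scripts that npm runs without an explicit command.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def find_lifecycle_hooks(project_dir: str) -> dict:
    """Return {path to package.json: {hook name: command}} for risky hooks."""
    findings = {}
    for manifest in Path(project_dir).rglob("package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        hooks = {k: v for k, v in scripts.items() if k in RISKY_HOOKS}
        if hooks:
            findings[str(manifest)] = hooks
    return findings

if __name__ == "__main__":
    import sys
    for path, hooks in find_lifecycle_hooks(sys.argv[1]).items():
        print(f"{path}: {hooks}")
```

A clean result does not prove a project is safe, but a surprise `postinstall` command in a "take-home test" is a strong signal to stop and report the contact.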
The broader concern is that AI developers often work with high-value projects and data. Unauthorized access to these assets can support further espionage, financial theft or intellectual property compromise. These risks extend to any organization operating in high-tech fields, including research institutions, cloud-based product companies and defense-related technology firms.
Security agencies have warned that these campaigns are becoming more sophisticated. The fake platforms are well designed, and the communication style used by the attackers closely resembles genuine recruitment outreach. This makes it harder for developers to recognize the red flags. Companies are encouraged to brief employees about this threat, verify all recruitment contacts and limit the exposure of internal tools and code repositories.
Conclusion
The rise of fake job platforms shows how attackers continue to adapt to new technologies and talent-driven industries. As AI becomes central to business operations, threat actors will keep targeting the people who build these systems. Staying informed, verifying unknown contacts and following internal security guidelines are key steps in reducing the risk.
About COE Security
COE Security works closely with technology companies, AI and software development firms, cloud service providers, research organizations and defense technology teams to protect them from advanced social engineering threats and code level compromise.
We support organizations by:
• Training teams to identify fraudulent job platforms and sophisticated social engineering attempts
• Strengthening access controls and secure development practices
• Monitoring for suspicious activity across cloud and code environments
• Conducting risk assessments on developer workflows
• Providing compliance-aligned security improvements tailored to each industry
To stay updated and cyber safe, follow COE Security on LinkedIn.