The fake IT worker problem CISOs can’t ignore

Hiring fake IT workers has been a growing problem in recent years — but it’s often a problem very few want to admit to. From Fortune 500 companies down to smaller organizations, remote hiring practices have been exploited to grant trusted access to individuals who are not who they claim to be, creating an insider threat risk.

Estimates suggest there are thousands of fake IT workers operating across the US who are in a position to steal information, IP, and data, outsource work offshore, carry out sabotage, or funnel money to foreign governments. Amazon has identified and blocked more than 1,800 attempts by North Korea to secure IT roles — and the numbers are rising, according to its chief security officer, Steve Schmidt.

In some cases, individuals impersonate US employees for personal gain; in others, state-backed operatives, such as those from North Korea, pose as IT workers for state financial gain and other nefarious purposes. AI is now enabling deepfakes, more convincing video interviews, and rapid identity cycling. Adversary tactics are also shifting, from fabricating profiles to purchasing legitimate American identities, Schmidt has warned.

“This is not a ‘recruiting scam’ in the traditional sense. It’s an insider-risk problem, where the adversary’s first move is to get hired,” says Tom Hegel, distinguished threat researcher at SentinelOne.

CIOs, CISOs, and other IT leaders need to be continually on guard against fake and fraudulent IT workers, but organizations can fall victim without realizing it.

How fake hires get through

There’s no single point of failure in the recruitment process.
Fake and fraudulent IT workers conceal their identity, falsify their skills and experience, and move through interview and screening processes undetected. SentinelOne has tracked roughly 360 fake personas and more than 1,000 job applications linked to North Korean IT worker operations, including attempts to apply for roles within the company itself.

According to Hegel, adversaries are increasingly deploying social engineering tactics and identity obfuscation at scale, and the hiring process is a prime entry point. Synthetic or stolen identities are used to create resumes and online profiles; interviews are passed with the help of scripts, stand-ins, or AI-assisted responses; and background checks confirm only what’s presented to them.

“Fake job seekers now leverage AI tools to mimic legitimate candidates, creating synthetic identities that pass initial background checks, falsifying employment histories, and even responding convincingly in interviews using real-time AI assistance,” Hegel says.

Flashpoint investigations have found malware-infected hosts containing HR and job-board logins, browser histories showing Google-translated coaching notes, remote-access “laptop farms” used to control corporate devices from overseas, and shell companies set up to back up reference checks for fabricated resumes. Once they’re hired, credentials are issued, equipment is shipped, and access is granted — and they become a trusted insider.
“The long-term risk isn’t just hiring a fake employee — it’s unknowingly opening your systems and sensitive data to malicious access,” he says.

What to do if you suspect a fake IT worker

When a CIO suspects a fake IT worker, next steps are important, as the issue shifts from recruitment to insider risk management. During his time at MongoDB, George Gerchow, IANS faculty advisor and Bedrock Data CSO, oversaw the investigation after the company detected it had unknowingly hired a North Korean IT worker.

It was first discovered after alerts that an individual was attempting to uninstall endpoint protections, including CrowdStrike Overwatch. “Overwatch then detected the laptop communicating with a North Korean IP address,” says Gerchow. “That combination of tool tampering plus DPRK-linked traffic immediately signaled that this was not a typical new hire,” he tells CIO.

MongoDB realized the fake worker used a stolen identity, paired with AI-generated resume content and scripted interview responses, to evade background checks that verify only the information provided and do not detect fraud. It highlights a gap in many background checks. “They don’t detect fabricated work histories, synthetic identities, or recycled developer profiles, which is how this individual passed screening and interviews without raising formal flags,” he says.

The subsequent investigation found attempts to disable security tooling, establish persistence on the device, and probe for elevated access. “Had they remained undetected, their access would have eventually expanded into our FedRAMP environment, which makes these fraud techniques especially high-risk,” Gerchow adds.

After the discovery, several yellow flags became obvious, such as poor video quality and unclear visuals during interviews, a noticeably inconsistent accent between calls, and scattered interview feedback with no centralized review. Another tell was a last-minute change to the laptop shipping address.
“That’s a common shadow-worker tactic,” notes Gerchow.

With hindsight, Gerchow joined the dots, and it became clear how the person had made it through to employment: any irregularities were treated in isolation. “None of these individually would prevent a hire. However, because no one was responsible for aggregating subtle anomalies, the pattern wasn’t recognized until the endpoint alert fired,” he says.

When they were discovered, the team quickly isolated the device, revoked all credentials, conducted a full forensic investigation, and notified federal authorities. “We verified there was no data exfiltration or lateral movement,” he says.

The mitigation steps introduced included strengthening identity fraud screening in the hiring process, assigning a yellow-flag owner to connect early signals, and enforcing zero access for new hires until trust is earned. Gerchow also believes that behavioral telemetry post-hire is necessary, because behavior, not credentials, reveals impostors.

MongoDB recommends organizations designate a reviewer in security or HR to identify inconsistencies in the hiring process, such as poor video quality.
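The aggregation gap Gerchow describes — weak signals dismissed in isolation — lends itself to a simple scoring approach a designated reviewer could apply. The following is a minimal sketch only; the signal names, weights, and threshold are illustrative assumptions, not MongoDB’s actual process.

```python
# Minimal sketch: aggregate weak hiring-fraud signals into one reviewable score.
# Signal names, weights, and the review threshold are illustrative assumptions.

# Yellow flags drawn from the incident described above, with assumed weights
SIGNAL_WEIGHTS = {
    "poor_video_quality": 1,
    "inconsistent_accent": 2,
    "scattered_interview_feedback": 1,
    "last_minute_shipping_change": 3,
}

REVIEW_THRESHOLD = 4  # assumed cutoff for escalating to the yellow-flag owner

def flag_score(signals):
    """Sum the weights of observed signals; unknown signal names score zero."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def needs_review(signals):
    """True when the combined weak signals cross the escalation threshold."""
    return flag_score(signals) >= REVIEW_THRESHOLD

# No single flag blocks a hire, but the combination triggers a review:
print(needs_review(["poor_video_quality"]))                                 # False
print(needs_review(["poor_video_quality", "last_minute_shipping_change"]))  # True
```

The point of the design is exactly what Gerchow notes: individually, none of these anomalies would prevent a hire, so the value comes from one owner summing them rather than four interviewers each shrugging off one oddity.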
“Also watch for AI-generated LinkedIn profiles, mismatched resumes, and questionable changes in laptop shipping addresses,” he says. “Use panel interviews and project-based evaluations to identify candidates who recycle stolen or fake developer identities, and start new hires without access to sensitive data or production environments,” he advises.

Then employ alerts if security agents (such as IAM, EDR, or VPN) are disabled before a new hire logs in, and test detection, escalation, and device recovery by simulating the hiring of a fake developer. “And look for off-hours access, broad internal search activity, and large-scale cloning of documents or code repositories,” he adds.

What IT leaders see on the inside

The problem of employment fraud is only expected to worsen, with Gartner predicting that one in four candidate profiles worldwide will be fake by 2028. “The rise of fake and fraudulent job applicants has become an epidemic across organizations,” says David Weisong, CIO of Energy Solutions.

Weisong says attackers consistently target high-access technical roles such as DevOps, systems administrators, data engineers, and database administrators, where successful hires can gain deep visibility and control over core systems. “These are the roles with the keys to the castle,” Weisong says. “If you’re trying to gain access, they’re far more valuable than a standard developer position.”

Operating in a regulated energy market, Energy Solutions is contractually required to employ a US-based workforce and keep data within US jurisdiction. Weisong has first-hand experience with detecting fake IT workers and wants to share his advice with other IT leaders. One of the earliest warning signs was a sudden, abnormal surge in applications — hundreds arriving within hours, far out of proportion to the company’s brand profile, pointing to automated or coordinated activity. During the interview stage, identity switching was observed.
“We saw cases where one person passed the phone screen, a different person showed up on Zoom, and sometimes a third appeared later — all under the same name and resume,” Weisong says.

Part of the problem is that standard hiring practices validate information and skills in isolation. “Traditional background checks only verify the information provided and do not detect fraud,” Weisong notes.

The uncomfortable reality for some CIOs is that the work may be completed to a high standard, and detection comes from signals, not performance. Weisong says fake IT workers create business and compliance risk as much as security risk, exposing organizations in regulated industries to contractual breaches, regulatory scrutiny, and loss of client trust.

Combating the problem of fake IT workers

Amazon is using AI-based tools with human oversight to identify unusual contact information, as well as fake academic institutions and companies in resumes, according to Schmidt. Security teams will flag LinkedIn profiles that look suspicious, require more in-person interviews and in-office attendance, monitor computer usage and quality of work, and authenticate with a physical token. He has also said that IT and HR need to collaborate on hiring to combat the problem. “It’s actually a lot cheaper for the HR organization if we discover the problem up front,” Amazon’s Schmidt told Fortune.

The shift required, says SentinelOne’s Hegel, is treating hiring decisions as an access control problem rather than a recruitment task.
“Stop treating identity as a one-time HR checkbox and start treating remote hiring like you would grant privileged access,” he says.

In the wake of his experience, Weisong instituted a raft of changes to the company’s applicant tracking system and across the organization’s internal systems and processes. When advertising positions, the company makes clear to candidates applying for technical roles the expectations and consequences outlined in all written communication. “Additionally, removing the term ‘fully remote’ from our hiring practices has significantly reduced opportunities for fraud and for applicants applying from outside the US,” he says.

“While a ‘zero-trust’ approach would be ideal for all hiring, we cannot allow it to impede the process or discourage legitimate candidates from applying. Instead, we need sufficient countermeasures to prevent automated and fraudulent applicants from reaching the pipeline in the first place,” he adds.

To control the large volume of applications, many of which come from bots, Energy Solutions job listings now have strict CAPTCHA settings, referral bonuses help draw on employee networks, and there’s a 90-day satisfactory performance review for new hires.

During the screening process, interviews are conducted via video, not phone, and applicants must share their screen for live challenges. A post-video interview report allows the company to verify the exact location of applicants after screening and interview meetings. If a candidate is outside the US, it’s treated as a yellow or red flag. Applicants must select which office they want to work from, and they must acknowledge that use of AI during interviews will result in disqualification.

To verify references and employment history, the company requires two references, one of them a former supervisor or manager.
Employment history is checked, including previous employers, and a full home address must be provided. To guard access, a question has been added to the job kick-off form indicating whether a new role will have elevated access to confidential or sensitive information.

New hires must come into an office on their first day to pick up equipment and undertake training and onboarding. All roles must be onsite, with the option to go hybrid after satisfactory performance.

Combating the problem, says Weisong, requires reviewing hiring processes, partnering closely with HR, and monitoring the effectiveness of each countermeasure. For CIOs, the lesson is not that hiring is broken, but that trust must be earned progressively.
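The post-hire telemetry recommended earlier — alerting when a security agent is disabled before a new hire’s first login, and flagging off-hours access — can be sketched as a simple check over an endpoint event log. The event dictionary format, agent names, and the 08:00–18:00 working window below are assumptions for illustration, not a real EDR API.

```python
from datetime import datetime

# Illustrative sketch of two new-hire detections: a monitored security agent
# disabled before the first login, and access outside assumed working hours.
# Event fields, agent names, and the hours window are assumptions.

MONITORED_AGENTS = {"edr", "vpn", "iam"}

def agent_disabled_before_first_login(events):
    """events: time-ordered dicts with 'ts' (datetime), 'type', optional 'agent'.
    Returns True if a monitored agent was disabled before the first login."""
    first_login = next((e["ts"] for e in events if e["type"] == "login"), None)
    return any(
        e["type"] == "agent_disabled"
        and e.get("agent") in MONITORED_AGENTS
        and (first_login is None or e["ts"] < first_login)
        for e in events
    )

def is_off_hours(ts, start_hour=8, end_hour=18):
    """Flag access outside an assumed 08:00-18:00 local working window."""
    return not (start_hour <= ts.hour < end_hour)

# Example: EDR disabled at 03:12, before the new hire's 09:00 first login
events = [
    {"ts": datetime(2025, 1, 6, 3, 12), "type": "agent_disabled", "agent": "edr"},
    {"ts": datetime(2025, 1, 6, 9, 0), "type": "login"},
]
print(agent_disabled_before_first_login(events))  # True
print(is_off_hours(datetime(2025, 1, 6, 3, 12)))  # True
```

In practice these rules would live in a SIEM or EDR policy rather than ad hoc code, but the logic shows why the MongoDB incident surfaced: tampering with the agent before any legitimate login is a pattern, not a one-off anomaly.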