Why Booz Allen’s CTO used generative AI to make a deepfake video of himself


To ensure Booz Allen Hamilton's global workforce of more than 35,000 can guard against deepfakes and avoid potential financial fraud, the consulting firm's chief technology officer, Bill Vass, embraced an unconventional approach. He created a deepfake video of himself.

This week, Vass will promote a 30-second deepfake video in which "he" briefly speaks to the camera, showing Booz Allen employees and other workers how easy it is to create fake audio and video content. Vass contends that generative AI technology has gotten so advanced that a popular refrain, "believe none of what you hear and half of what you see," isn't cynical enough.

"You're at a point with AI and these deepfakes where you are not going to be able to believe any video you see or audio you hear," Vass says. The deepfake video of Vass will be promoted internally at Booz Allen so that employees "better understand the capabilities and how strong a deepfake can be," he adds.

Booz Allen has previously trained workers to spot deepfakes by showing videos of celebrities, who tend to be easy targets given how widely their likenesses circulate in the public domain. But there are also hours upon hours of video and audio of Vass uploaded to YouTube, and it takes only a couple of minutes of content for criminals to make a deepfake that can trick workers.

The stunt deepfake video of Vass was created by Booz Allen in partnership with Reality Defender, a deepfake detection company that sells tools to identify AI-generated content within seconds to clients including IBM, Visa, and Comcast.
Last year, Reality Defender expanded its Series A funding round, raising $33 million in total capital (from investors including Booz Allen's venture capital arm) to further develop the startup's technologies.

Vendors like Reality Defender are betting that processes for authenticating audio and video interactions will become as essential as other cybersecurity tactics like multi-factor authentication, a two-step verification process, and zero-trust authentication, which requires continuous verification of identity.

Alex Lisle, who became CTO at Reality Defender last week, says there is a growing list of deepfake risks that CEOs and other C-suite executives must confront. While much of the attention is on social engineering cyberattacks that prey on workers, cybercriminals can also use AI to craft audio files in which a CFO "announces" manipulated earnings results, which could move the stock. AI-generated videos can depict a CEO issuing a fake public statement that damages a brand's reputation.

"Unlike other emerging cybercriminal threats, which require an incredible amount of technical knowledge and foresight, this doesn't," Lisle says. Deepfakes, he adds, can be made with "off-the-shelf software and a basic knowledge of technology."

Top executives at WPP, Accenture, and Ferrari have been targeted by deepfakes, though in the corporate world, the banking sector is a favored target. Half of finance professionals in the U.S. and U.K. report having experienced an attempted deepfake scam. Accounting giant Deloitte has estimated that generative AI-enabled fraud losses could reach $40 billion by 2027, a compound annual growth rate of 32% from 2023's level.

The cautionary tale that security executives frequently cite is a Hong Kong incident in which a finance worker was fooled into paying $25 million to fraudsters who used a deepfake video call to impersonate the company's chief financial officer.
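As a sanity check on the Deloitte projection above (this is illustrative arithmetic, not a figure from the article): a $40 billion total in 2027 at a 32% compound annual growth rate implies a 2023 baseline of roughly $13 billion.

```python
# Back out the 2023 baseline implied by the cited Deloitte figures.
# The $40 billion and 32% CAGR come from the article; the baseline is
# derived here, not quoted by Deloitte in this story.
projected_2027 = 40.0   # fraud losses in $ billions, per the estimate
cagr = 0.32             # 32% compound annual growth rate
years = 2027 - 2023     # four years of compounding

# Compound growth: future = baseline * (1 + rate) ** years,
# so baseline = future / (1 + rate) ** years.
implied_2023 = projected_2027 / (1 + cagr) ** years
print(f"Implied 2023 fraud losses: ${implied_2023:.1f} billion")
# prints: Implied 2023 fraud losses: $13.2 billion
```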
To avoid these types of scams, chief information security officers and other technologists have been investing in defensive systems and better employee training to detect attacks.

Vass, who joined Booz Allen in 2024 after previously serving as VP of engineering at Amazon Web Services, says social engineering attacks would trip up even employees at the Pentagon, where he worked as a senior executive in the office of the CIO in the late 1990s. The Department of Defense would hire external parties to attempt attacks, and Vass says it always amazed him how often those teams succeeded, even after all of the training.

He recalls another incident at a startup he led, where a former employee sent a spoofed email purportedly from Vass, while also pretending to loop in the CFO. The note was sent to the procurement office, and a worker ended up processing a fake $25,000 invoice payment.

Generative AI, Vass adds, will only make cases like these all the more common. "People are going to have to learn to change their psyche to be more skeptical."

John Kell

Send thoughts or suggestions to CIO Intelligence here.

This story was originally featured on Fortune.com