Companies infiltrated by AI-assisted job scammers
In the digital age, cybercriminals are increasingly leveraging artificial intelligence (AI) to create deepfake job applicants, aiming to infiltrate organizations and steal sensitive data. To combat this growing threat, businesses are implementing a multi-layered approach that combines continuous identity verification and real-time deepfake detection technologies.
Klaudia Kloc, the co-founder and CEO of Vidoc Security Lab, a leading AI and cybersecurity company, suggests asking unexpected questions during virtual interviews as an additional measure. This tactic can help distinguish between human job candidates and AI-generated personas.
Businesses are adopting real-time deepfake detection technologies like Pindrop® Pulse for Meetings, which uses liveness detection to verify whether the person on a video or audio call is human rather than AI-generated. These tools analyse vocal characteristics such as rhythm, pitch, and breathing nuances to flag deepfake audio.
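To illustrate the kind of vocal characteristic such tools examine, here is a toy sketch, not Pindrop's actual method: it estimates pitch from a short audio frame using simple autocorrelation. The function name and parameters are illustrative assumptions; production liveness detectors track many such features over time and look for the unnatural stability that often betrays synthetic voices.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Toy pitch estimator: find the autocorrelation peak within the
    plausible range of human fundamental frequencies (fmin..fmax Hz).
    Synthetic voices often show unnaturally stable pitch and timing."""
    sig = signal - signal.mean()              # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_lo = int(sample_rate / fmax)          # shortest plausible period
    lag_hi = int(sample_rate / fmin)          # longest plausible period
    best_lag = lag_lo + int(np.argmax(corr[lag_lo:lag_hi]))
    return sample_rate / best_lag

# A 220 Hz sine wave stands in for one voiced speech frame.
sr = 16_000
t = np.arange(0, 0.1, 1 / sr)
frame = np.sin(2 * np.pi * 220 * t)
print(estimate_pitch(frame, sr))  # within a few Hz of 220
```

A real detector would compute features like this frame by frame and flag speakers whose pitch contour is implausibly flat or whose breathing pauses are missing altogether.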
Continuous identity verification is another key strategy. Instead of a one-time check, businesses should embed identity confirmation at multiple points — pre-interview, during key interview rounds, assessments, and onboarding. This continuous verification helps catch inconsistencies or suspected fraud before finalizing hires.
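What "identity confirmation at multiple points" could look like in practice can be sketched in a few lines. The class, stage names, and fingerprint scheme below are hypothetical, not any specific vendor's API: evidence of identity is recorded at each stage, and any mismatch surfaces before the hire is finalized.

```python
from dataclasses import dataclass, field

# Hypothetical hiring stages at which identity is re-confirmed.
STAGES = ("pre_interview", "interview", "assessment", "onboarding")

@dataclass
class IdentityTrail:
    candidate_id: str
    # stage -> fingerprint of the verified identity document/biometric
    checks: dict = field(default_factory=dict)

    def record(self, stage: str, identity_fingerprint: str) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.checks[stage] = identity_fingerprint

    def is_consistent(self) -> bool:
        # Every checkpoint must resolve to the same identity.
        return len(set(self.checks.values())) <= 1

    def missing_stages(self) -> list:
        return [s for s in STAGES if s not in self.checks]

trail = IdentityTrail("cand-042")
trail.record("pre_interview", "doc-hash-a1")
trail.record("interview", "doc-hash-a1")
trail.record("assessment", "doc-hash-zz")   # a different identity appears
print(trail.is_consistent())   # False: flag for review before hiring
print(trail.missing_stages())  # ['onboarding']
```

The point of the design is that no single successful check is trusted in isolation: a fraudster who passes the pre-interview check with a borrowed identity still has to present the same identity at every later stage.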
Cross-checking with official records adds another layer of security, minimizing risks of synthetic identities advancing in hiring pipelines. Additionally, monitoring behavioural patterns can help detect unusual activities, such as a candidate consulting a 'handler' in real-time during or immediately after an interview, revealing coordinated deception schemes.
Emerging solutions like Reality Defender provide context-aware, multi-model deepfake detection that analyses images holistically, enhancing the ability to catch sophisticated forgeries beyond just facial recognition.
The threat of deepfake job applicants is not to be underestimated. Scammers are using AI-generated avatars in virtual job interviews to get hired; according to the Justice Department, more than 300 U.S. companies have unknowingly filled remote IT jobs with deepfakes tied to North Korea. Firms such as Adaptive Security, which specialises in this area, have demonstrated how readily convincing deepfake videos of individuals can be produced.
In light of these threats, HR managers are requiring job applicants to take steps to verify they're human. Signs such as blurred edges around a face, or lips and voice failing to sync during virtual interviews, can indicate the presence of a deepfake.
Despite these measures, in-person interviews remain a valuable way for companies to protect themselves from AI scams. With the rise of remote work, however, combining sophisticated AI-powered detection tools with persistent identity verification throughout the hiring journey is essential for businesses to safeguard against the growing threat of deepfake job applicants.
Some 17% of hiring managers report having discovered deepfakes applying for jobs at their company, underscoring the need for vigilance. By staying informed and adopting these protective strategies, businesses can significantly reduce their risk of falling victim to deepfake job applicants.
- Klaudia Kloc, a leader in AI and cybersecurity, advises unexpected questions during virtual interviews as a tactic to distinguish between humans and AI-generated candidates.
- Pindrop® Pulse for Meetings, a real-time deepfake detection technology, uses liveness detection to verify whether a person on a video or audio call is human, analysing vocal characteristics to flag deepfake audio.
- Continuous identity verification, at multiple points during the hiring process, helps businesses detect inconsistencies or fraud, minimizing the risks of AI scams progressing in hiring pipelines.