Cybercrooks are using deepfake technology and stolen data to apply for remote work positions in a bid to compromise unsuspecting employers, the FBI has announced.
The FBI’s cyber division has issued a public service announcement warning that it has recorded an increase in complaints about deepfake job applicants using stolen data to apply for various positions.
“The FBI Internet Crime Complaint Center (IC3) warns of an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions,” reads the memo.
The visual content is said to be “convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
The crooks are said to be applying for positions in information technology and computer programming, as well as database and software-related roles.
Some of these positions include access to customer PII, financial data, corporate IT databases and/or proprietary information, suggesting the attackers aim to exfiltrate sensitive data and potentially to breach the victim company and hold its data for ransom.
Reports received by the IC3 mention the use of voice spoofing, or potential voice deepfakes. The recordings are not all that convincing, though, according to the Bureau.
“In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the FBI notes. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
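The mismatch the FBI describes, where audio and on-screen mouth movement drift out of step, can in principle be measured. As a purely illustrative sketch (not an FBI-endorsed method), the toy function below estimates the frame lag between an audio loudness envelope and a per-frame mouth-openness signal via cross-correlation; the input signals and the 5-frame delay in the demo are hypothetical:

```python
import numpy as np

def av_sync_lag(audio_env, mouth_open, max_lag=15):
    """Estimate the frame lag between an audio loudness envelope and a
    mouth-openness signal (both 1-D per-frame arrays of equal length)
    by picking the shift with the highest cross-correlation."""
    # Normalize both signals so the correlation is scale-independent.
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    m = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-9)
    lags = range(-max_lag, max_lag + 1)
    # score(k) = sum over i of a[i] * m[i + k], over the overlapping region.
    scores = [np.dot(a[max(0, -k):len(a) - max(0, k)],
                     m[max(0, k):len(m) - max(0, -k)]) for k in lags]
    best_lag, _ = max(zip(lags, scores), key=lambda t: t[1])
    return best_lag  # ~0 means in sync; a large |lag| hints at dubbed audio

# Toy demo with synthetic data: the "mouth" signal is the audio
# envelope delayed by 5 frames, mimicking out-of-sync footage.
rng = np.random.default_rng(0)
audio = rng.random(200)
mouth = np.roll(audio, 5)
```

A real detector would need to extract these signals from actual media (e.g. mouth landmarks from video frames and an energy envelope from the audio track), which is well beyond this sketch.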
Complaints also describe the use of stolen PII. Victims have said their identities were used without their knowledge, and pre-employment background checks confirmed that some applicants had indeed used other people's PII to improve their chances of landing the job.
People who encounter this type of activity are urged to report it to the IC3 at www.ic3.gov.
Deepfake technology has garnered widespread attention for its use in creating celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying and financial fraud.
While some deepfake recordings are more convincing than others, the technology is not yet advanced enough to pass as fully authentic.
A trained eye can spot the telltale glitches and graphical artifacts that give away the nature of a recording. Even so, deepfakes can dupe a trained viewer at first glance, especially when there is no apparent reason or context to raise suspicion.