
Spoiler alert: the technology is further along than you think.

Deepfakes, once a novelty, have rapidly evolved into a potent tool with far-reaching implications. These hyper-realistic synthetic media can manipulate images, videos, and audio to convincingly portray individuals as saying or doing things they never did.  

Couple that with the fact that most deepfake creation tools are open source and easy to use, and the potential for abuse is immediately clear.

What Are Deepfakes?

At its core, a deepfake is synthetic media generated by artificial intelligence models trained on vast amounts of data, enabling them to create highly convincing forgeries. The applications are broad: from swapping faces in a funny family picture to generating entirely synthetic video of political figures making remarks they never actually said.

Deepfake pornography is a rampant issue, with victims often unaware of their exploitation until the content surfaces online. Reputation damage can be swift and devastating, to say nothing of the personal injury to the victim. 

Deepfakes can also be used for financial fraud, social engineering, and intellectual property theft, to name just a few. Attacks incorporating real-time deepfake video are spiking; often a threat actor will impersonate an executive to authorize fraudulent transactions or attempt to gain access to an organization's internal systems.

Why Are Deepfakes a Problem Now? What's Changed?

Deepfake technology is further along than most people realize (even if they are cybersecurity professionals!). The ease of access to deepfake creation tools is a significant factor in the escalation. What was once the domain of experts is now within reach of individuals with minimal technical skills. This democratization of the technology has led to a proliferation of deepfakes across various platforms. 

What's just as worrying is the development of real-time deepfakes. These allow for live manipulation of audio and video, enabling malicious actors to impersonate individuals in real-time conversations. The potential for social engineering attacks and financial fraud is immense.

What Security Leaders Need to Know

Few, if any, mainstream technologies can currently detect deepfake material reliably. That means your most important layer of defense is your people. Security leaders need to prioritize education and awareness to reduce the risk of deepfake fraud and impersonation.

Employee Education: The First Line of Defense 

Cybersecurity Awareness Month (October) presents an ideal opportunity to reinforce the importance of employee education in the fight against deepfakes. However, while October brings extra cyber emphasis, program leaders should avoid treating it as a "one and done" event. Deepfake technology has evolved so rapidly that its accessibility, maturity, and the rate of related incidents have all grown dramatically since last October alone.

By making employees aware of the tactics used by malicious actors, organizations can reduce the risk of falling victim to deepfake-based attacks. At Reveal Risk, we provide a three-part program including:  

  1. A customized, realistic attack brought to life in video form, featuring your real executives, with an executive "hero" figure and teachable moments
  2. A keynote or panel discussion featuring the real-time, live deepfaking of a willing and consenting company executive
  3. A real-time deepfake demonstration booth where employees can become the executive deepfake in a safe and controlled manner

We will show your employees what deepfake technology is capable of, and provide you with strategies to verify information from multiple sources and report suspicious activity. Learn more or sign up for this limited-edition offering here! 

TL;DR  

Deepfake technology has primarily been used for criminal and nefarious purposes, such as creating fake political ads and illegal pornography, and the ease of access to the tech has increased over the last few years. Real-time deepfakes are particularly concerning, as they can be used to impersonate someone during a conversation.  

Reveal Risk aims to bring awareness to the risks of deepfakes and educate your employees on how to identify and protect themselves from deepfake scams. 

 

***

Do you have other key questions or approaches you use to build deepfake awareness? Feel free to leave a comment, and this article will be updated as needed to maximize the value for all. Also, share your feedback if you found this helpful.

At Reveal Risk, we evaluate, design, and deliver strong programs, processes, and results in cybersecurity. If you would like assistance in building your company's cybersecurity strategy, governance, and plan toward desired-state maturity, please don't hesitate to connect with us at info@revealrisk.com.

Watch some highlights!



About the Author

Aaron is a former Eli Lilly senior IT/Security/Audit/Privacy/Risk leader with over 20 years of experience in the pharmaceutical and life sciences sector. He founded the risk management working group for H-ISAC (Health Information Sharing and Analysis Center), which enabled information sharing and benchmarking across pharma, payers, and healthcare providers. Aaron is a certified Six Sigma Black Belt with a career emphasis on building and improving internal processes and controls.
