
Deepfakes: The New Wave of Attack Vectors

Neha Kirtonia | February 4, 2021

Modern technologies like Artificial Intelligence and Machine Learning are applied heavily in cyber defense, detecting and responding to stealthy threat actors to safeguard an enterprise. However, concerns begin when these very technologies are used to exploit and misuse information. One such area of exploitation is the creation of ‘Deepfakes’.

A deepfake is fake digital content created using machine learning: audio or video that displays the morphed faces of people who were never involved in that piece of media. Speedy breakthroughs in AI are paving the way for futuristic forms of fraud. Broadly categorized as synthetic media, this new wave has generated global concern over financial upheaval, such as credential fraud and extortion.

What’s more worrisome is that anybody with a computer and an internet connection can create deepfake media. This is done with a machine learning technique called a generative adversarial network (GAN), which pits two neural networks against each other: a generator produces forged media while a discriminator flags its defects, and the generator keeps improving until the forgery goes unnoticed. Deepfakes are, in fact, taking identity theft to the next level: the same AI is being used to execute sophisticated phishing attacks and to defeat biometric checks with fake fingerprints.
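To make that adversarial loop concrete, here is a minimal GAN sketch in PyTorch. This is an illustration under simplified assumptions, trained on a toy numeric distribution rather than real media; it is not how any particular deepfake tool is built.

    # Minimal GAN: a generator learns to mimic a target distribution
    # while a discriminator learns to flag its forgeries.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 8, 2

    # Generator: maps random noise to a candidate "forgery".
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                      nn.Linear(32, data_dim))
    # Discriminator: outputs the probability that a sample is real.
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                      nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, data_dim) * 0.5 + 3.0  # stand-in for "real media"
        fake = G(torch.randn(64, latent_dim))

        # Train the discriminator: accept real samples, flag the fakes.
        opt_d.zero_grad()
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_d.step()

        # Train the generator: fool the discriminator into scoring fakes as real.
        opt_g.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

The same tug-of-war, scaled up to images, audio, and video, is what makes deepfakes so convincing: training only stops when the discriminator can no longer tell real from fake.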

How will it affect an organization?

At the ground level, hackers can use deepfake technology to circulate false media that harms an organization’s reputation. At a more alarming level, threat actors can stitch together audio and video clips of, say, a senior executive to damage an organization’s brand. What becomes far more taxing is the later work of disproving the forgery, which consumes unjustified time and money. Like a new form of ransomware, deepfakes can be used to threaten institutions with the motive of extorting money, information, or both.

Cybercriminals leverage AI and ML algorithms to exploit vulnerabilities such as cybersecurity shortcomings and gain access to confidential, high-value data. Deepfakes are a new wave of attack that is hard to detect and harder to disprove, with the potential to wreak havoc in public and private organizations.

What measures can be adopted to mitigate these attacks?

A new framework is imperative to successfully mitigate such attacks. Companies are trying to stay ahead by creating AI technologies that fight back against malicious AI, such as deepfakes. Traditional security measures are no longer sufficient to deal with the sophistication of these attacks; manual security updates and OS patches are not enough to protect business data. As cyber threats evolve, organizations must reform their security approaches to protect data, assets, cloud, and devices in a perimeterless environment where nothing is trusted automatically, an approach broadly known as ‘zero trust’.

Security practitioners must screen and fully verify anything that tries to access a company’s data. Some of the foundational elements of zero trust are:

  • Verify before anything else: Every user must be verified. By deploying multiple authentication methods, organizations can ensure that only verified users access their data repositories.
  • Devices need validation: Leveraging AI and ML algorithms, organizations can determine whether a device has been breached or jailbroken. Just as these algorithms are used to create deepfakes, the bright side is that they can also be used to conclude whether a device has been compromised, ensuring that only secure devices gain access to business data and cloud services.
  • Curb access: Organizations need to define the conditions a user must satisfy before access is granted: verifying the device, establishing user context, scrutinizing app authorization, verifying the network, and detecting and remediating threats before issuing access to a user or a device (see the sketch after this list). This may sound like a tall order, but AI and ML ensure that user productivity is not hampered.
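As a rough illustration of that chain of conditions (a hypothetical sketch with made-up names, not any vendor’s policy engine), a zero-trust access decision can be modeled as a set of checks that denies by default:

    # Hypothetical zero-trust access decision: every check must pass,
    # and the default outcome is to deny access.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_verified: bool      # e.g. multi-factor authentication completed
        device_compliant: bool   # e.g. patched, not breached or jailbroken
        app_authorized: bool     # the app is approved for this resource
        network_trusted: bool    # the connection meets network policy
        threat_detected: bool    # anomaly flagged by AI/ML monitoring

    def grant_access(req: AccessRequest) -> bool:
        """Deny by default; grant only when every zero-trust check passes."""
        return all([
            req.user_verified,
            req.device_compliant,
            req.app_authorized,
            req.network_trusted,
            not req.threat_detected,
        ])

    if __name__ == "__main__":
        req = AccessRequest(True, True, True, True, threat_detected=False)
        print("access granted" if grant_access(req) else "access denied")

The design point is the all-or-nothing check: a single failed condition, or a single flagged threat, is enough to withhold access until the issue is remediated.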

Organizations must focus on building trust into every process by gaining clear insight into the enterprise systems that store resources. Just as data is backed up, mirrored, or encrypted, an ongoing process must continuously validate the authenticity of that data. This is all the more pivotal when the data in use drives AI and ML applications that make decisions and run critical business operations. To conclude, the more tech firms automate verification processes, the better. While it is impractical to expect the complete elimination of new attack vectors such as deepfakes, strategies such as zero trust enable organizations to keep their impact in check.
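For example (a minimal sketch using Python’s standard hashlib module, not a specific product), continuous validation can record a cryptographic digest when data is stored and re-verify it before the data feeds an AI or ML pipeline:

    # Sketch of continuous data-authenticity validation using SHA-256 digests.
    import hashlib

    def fingerprint(data: bytes) -> str:
        """Return a SHA-256 digest acting as the data's integrity fingerprint."""
        return hashlib.sha256(data).hexdigest()

    # When the data is stored (or backed up / mirrored), record its digest.
    record = b"quarterly-results.csv contents..."
    stored_digest = fingerprint(record)

    # Before the data is consumed by an AI/ML application, re-verify it.
    def is_authentic(data: bytes, expected_digest: str) -> bool:
        return fingerprint(data) == expected_digest

    assert is_authentic(record, stored_digest)             # untampered data passes
    assert not is_authentic(record + b"!", stored_digest)  # any change is flagged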
