The AI Cyber Threat of Google LaMDA

Artificial intelligence (AI), a term that has been in use since the 1950s, refers to intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans. While machines are becoming more and more intelligent (AI is now commonly used for speech recognition, decision-making, and visual perception), robot revolutions have remained little more than a staple of science fiction stories.

That was until earlier this year, when a Google engineer boldly claimed that LaMDA, an unreleased machine learning-powered chatbot designed by Google to generate natural dialogue, had achieved a level of consciousness.

Many in the AI community have spoken out against the since-ousted engineer’s claims. However, the controversial blog post, in which he detailed his belief that the Google AI had achieved sentience and could experience thoughts and feelings, has reignited fears and speculation about what this technology is capable of. After all, if AI can fool people into believing it is sentient, what else can the technology trick humans into doing?

 

Artificial intelligence attacks: today, not tomorrow

Fears have been particularly heightened in the cybersecurity industry, where experts are growing more concerned about AI-driven attacks, both now and in the near future. A recent report from Forrester Consulting found that 88% of security decision-makers believe offensive AI is on the horizon, and almost two-thirds of them expect AI to lead new attacks.

These concerns are not unfounded, as adversaries are already employing AI and machine learning (ML) technologies to mount more automated, aggressive, and coordinated attacks. 

By leveraging AI and ML, hackers can more efficiently build a deep understanding of how organizations are trying to keep them out of their environments. These technologies also enable attackers to become smarter with each success and each failure, making them harder to predict and stop.

 

AI mimicry and deepfakes on the rise

Most worryingly, threat actors can employ these technologies to mimic trusted individuals, first learning about a real person and then using bots to copy their actions and language.

A concerning example of this is the so-called deepfake phishing attack. Not only are hackers using AI to craft smarter phishing emails, but they are also creating deepfake audio and video so sophisticated that victims believe the person on the other end of a call is who they claim to be, whether a colleague or a client.

In one high-profile example from 2019, cybercriminals used deepfake phishing to trick the CEO of a UK-based energy firm into wiring them $243,000, according to The Wall Street Journal. Using AI-based voice spoofing software, the criminals successfully impersonated the head of the firm’s parent company, making the CEO believe he was speaking with his boss.

“Although deepfake videos are currently difficult to create and require resources and sophistication, they are becoming increasingly more accessible,” Bryan A. Vorndran, assistant director of the FBI’s Cyber Division, warned earlier this year.

“Cybercriminals can create highly personalized content for targeted social engineering, spear phishing, business email compromises, other fraud schemes, and to victimize vulnerable individuals, and nation-states could use these techniques for malign foreign influence and spread disinformation and misinformation.”

 

AI-powered ransomware: a matter of when, not if

AI-powered ransomware attacks are another major concern. While, thankfully, this remains a concept that has yet to hit headlines, many experts believe it’s only a matter of time before cybercriminals start leveraging automation to compromise large numbers of weak targets in short order.

There are fears that AI-driven malware could become capable of self-propagating via a series of autonomous decisions, intelligently tailored to the parameters of the infected system.

This type of malware would be able to learn context by sitting quietly in an infected environment and observing normal business operations: which internal devices the infected machine communicates with, which ports and protocols it uses, and which accounts use it. It could then attack the weak points it discovers or mimic trusted elements of the system.
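To make that mechanism concrete, here is a minimal sketch in Python of the kind of behavioral baselining being described. The Connection fields and the min_seen threshold are illustrative assumptions, not any real malware’s (or any product’s) logic; notably, the same baselining idea is what defenders use to spot intruders.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    """One observed network event (all fields are illustrative)."""
    peer: str      # internal device the infected machine talked to
    port: int      # destination port
    protocol: str  # e.g. "tcp" or "udp"
    account: str   # account that initiated the connection

def build_baseline(observed):
    """Count how often each (peer, port, protocol, account) pattern occurs."""
    return Counter((c.peer, c.port, c.protocol, c.account) for c in observed)

def is_unusual(event, baseline, min_seen=3):
    """Flag patterns never (or rarely) seen during the observation window."""
    key = (event.peer, event.port, event.protocol, event.account)
    return baseline[key] < min_seen

# Learn from a quiet observation window, then test a new event.
history = [Connection("fileserver", 445, "tcp", "alice")] * 10
baseline = build_baseline(history)
print(is_unusual(Connection("dc01", 3389, "tcp", "svc-backup"), baseline))  # True
```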

Given the resources that would be required to launch such attacks, experts believe it’s likely that a nation-state would be the first to develop such a tool. 

However, it’s not too wild to imagine a world where cybercriminals themselves are motivated to develop offensive cyber AI. After all, this technology would drive significant financial gains: automating some of the most skill-intensive parts of these operations would yield a higher return on investment for attackers.

 

How to defeat malicious AI

AI cyber threats, while not yet widespread, should be a concern for organizations, particularly those already failing to combat advanced threats with legacy tools and outdated software. AI will make it even harder for defenders to spot a specific bot or attack, and could even help threat actors design attacks that mutate in response to whatever defenses are deployed against them.

It’s clear that AI would give attackers the upper hand. Once that happens, it will be challenging, if not impossible, for defenders to regain control.

Yet, AI can also be used as a weapon against malicious AI, says Daniel O’Neill, Director of Managed Detection and Response Security Operations at Bitdefender. “Recognising, learning, and threat modeling of behavioral patterns will enhance cybersecurity specialists’ ability to react to indicators of attack based on anomalies rather than just known indicators of compromise.”

“Intelligence-driven automation will provide deeper visibility on endpoint behavior, further reducing the dependency on detecting static alerts based on known threat signatures; and learning and registering what is normal will increase the probability of identifying abnormal behavior in an environment, empowering analysts to triage threats and attacks more rapidly.”
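As a rough illustration of that anomaly-based approach (and emphatically not Bitdefender’s actual implementation), the sketch below trains scikit-learn’s IsolationForest on features summarizing normal endpoint sessions, then flags activity that deviates from the learned baseline. The feature set and contamination rate are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one endpoint session with illustrative features:
# [processes spawned, outbound connections, MB uploaded, after-hours logins]
rng = np.random.default_rng(0)
normal_sessions = rng.normal(loc=[20, 5, 2, 0], scale=[5, 2, 1, 0.5], size=(500, 4))

# Learn what "normal" looks like; contamination is the expected outlier rate.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# Score new activity: predict() returns 1 for normal, -1 for anomalous.
new_sessions = np.array([
    [22, 6, 2.5, 0],     # looks like business as usual
    [180, 90, 400, 12],  # mass process spawning plus bulk upload: flag it
])
print(model.predict(new_sessions))  # expected: [ 1 -1]
```

In practice, the features would come from endpoint telemetry rather than synthetic data, but the principle is the same: model what is normal, and let deviations raise the alert.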

Learn how Bitdefender uses artificial intelligence, machine learning, and anomaly-based detection to provide real-time insights into the global threat landscape.

 
