Artificial Intelligence Taking a Bigger Role in Antimalware Technology

NEWS ANALYSIS: Artificial intelligence is taking on a bigger role in antimalware products. But it's not foolproof, and it's clear cyber-criminals will soon use AI to make their malware stealthier.

LAS VEGAS—Even as artificial intelligence takes on a bigger role in stopping malware and other cyber-threats, attackers are finding ways to get around it and are even using AI to enhance their own attack strategies.

"Can we break machine learning? The answer is, yes we can," said Hyrum Anderson, Principal Data Scientist for security vendor Endgame, during his presentation at DefCon here last week. "It's actually become quite fashionable to break machine learning."

AI, or more specifically a form of AI known as machine learning, is now being built into next-generation antivirus (AV) programs. Traditional AV is based on signatures, which are identifiers of known security threats. But signature-based AV is no longer enough, because attackers can quickly change or disguise malware just enough to evade those signatures.
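To see why signatures are brittle, consider the simplest possible version of one: a hash of a known-bad file. The sketch below is purely illustrative (the file contents are made up, and real AV signatures are more elaborate than a single hash), but the weakness it demonstrates is the same: change one byte and the lookup no longer matches.

```python
import hashlib

# Illustrative only: here a "signature" is just the SHA-256 hash of a known-bad file.
# Real AV signatures are more sophisticated, but they share the same brittleness.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"original malware payload").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the file exactly matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

print(signature_scan(b"original malware payload"))   # True  -- the unmodified file is caught
print(signature_scan(b"original malware payload!"))  # False -- one added byte evades the signature
```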

This is where machine learning comes in, and not a moment too soon: 357 million new malware threats were detected in 2016, according to security vendor Symantec. Traditional antivirus programs simply cannot keep up.

AI-based threat detection systems are designed to catch anything that traditional AV misses, at least in theory. Machine learning models are not foolproof, however. They can determine only with a certain degree of confidence whether a particular file is malicious or benign, Anderson explained. If attackers can learn how a machine-learning detection model works, they may be able to tweak their malware files just enough to sneak them through.
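In other words, an ML-based detector doesn't answer yes or no the way a signature match does; it produces a score and flags anything above some threshold. Below is a minimal sketch of that idea, assuming a toy hand-weighted model: the features, weights and threshold are all invented for illustration and bear no relation to any vendor's actual product.

```python
import math

# Purely illustrative: a tiny hand-weighted "model" that turns a few file features
# into a confidence score. Real products learn far richer features from millions
# of labeled samples; every number here is made up.
def extract_features(file_bytes: bytes) -> list[float]:
    """Toy features: size in MB, fraction of high-value bytes, presence of an 'MZ' (PE) header."""
    size = max(len(file_bytes), 1)
    high_byte_frac = sum(1 for b in file_bytes if b > 0x7F) / size
    has_mz_header = 1.0 if file_bytes[:2] == b"MZ" else 0.0
    return [size / 1_000_000, high_byte_frac, has_mz_header]

def malware_confidence(file_bytes: bytes) -> float:
    """Toy linear model squashed through a logistic so the output reads as a confidence in [0, 1]."""
    weights, bias = [0.4, 2.5, 1.2], -1.5
    z = sum(w * f for w, f in zip(weights, extract_features(file_bytes))) + bias
    return 1 / (1 + math.exp(-z))

THRESHOLD = 0.5  # anything scoring above this gets flagged as malicious

suspect = b"MZ" + bytes(range(256)) * 100   # a made-up stand-in for an executable
score = malware_confidence(suspect)
print(f"confidence: {score:.2f} ->", "flagged" if score > THRESHOLD else "allowed")
```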

"The idea is to insert a file that our model knows is malicious with high confidence, and make a few subtle changes to the bytes or modify elements that don't break the file format or its behavior and then trick our model into thinking that's benign," said Anderson.

AI on the attack

AI methods are also being applied directly by attackers, who use them to aggregate and analyze data that helps them target and customize their attacks. "We are now seeing systematic attacks against industry sectors" using AI methods, said Vincent Weafer, vice president of McAfee Labs.

With new cloud-based models and compute engines, machine learning is becoming inexpensive and accessible to whoever wants to use it. At the Black Hat conference here, McAfee announced that its flagship product, McAfee ATD (Advanced Threat Defense) 4.0, is now augmented with machine learning models.

Another company, Darktrace, applies machine learning to network security. Darktrace's product, dubbed the Enterprise Immune System, builds a model of a network's normal usage and then applies an AI system that can determine whether a given activity is malicious or benign.

If questionable activity is detected, the system sends a warning to security administrators, CEO Nicole Eagan said in an interview with eWEEK at Black Hat. Darktrace's Antigena product can also take action on its own to shut off the activity, Eagan said.
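The general pattern Eagan describes (learn what normal looks like, then alert on or autonomously block whatever deviates from it) can be sketched in a few lines. The code below is a hypothetical illustration only; it is not Darktrace's algorithm, and the traffic figures and deviation thresholds are invented for the example.

```python
import statistics

# Hypothetical sketch of "learn normal, flag the abnormal": baseline bytes transferred
# per interval for a host, then alert on moderate deviations and block extreme ones.
class AnomalyDetector:
    def __init__(self, alert_sigma: float = 3.0, block_sigma: float = 6.0):
        self.baseline: list[float] = []   # observations gathered during "normal" operation
        self.alert_sigma = alert_sigma
        self.block_sigma = block_sigma

    def learn(self, bytes_transferred: float) -> None:
        """Record an observation of normal activity."""
        self.baseline.append(bytes_transferred)

    def assess(self, bytes_transferred: float) -> str:
        """Compare new activity against the learned baseline and decide how to respond."""
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline) or 1.0
        deviation = abs(bytes_transferred - mean) / stdev
        if deviation >= self.block_sigma:
            return "block"   # autonomous response: shut off the activity
        if deviation >= self.alert_sigma:
            return "alert"   # warn security administrators
        return "allow"

detector = AnomalyDetector()
for sample in [1200, 1350, 1100, 1280, 1220, 1330]:   # made-up "normal" traffic samples
    detector.learn(sample)

print(detector.assess(1250))      # allow: close to the learned baseline
print(detector.assess(1600))      # alert: noticeably unusual
print(detector.assess(250_000))   # block: far outside anything seen before
```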

Darktrace is announcing a new version of the Enterprise Immune System this month, she said. The new version adds a mobile app that lets administrators react to recommendations more easily, along with a new 3-D visualizer and an ask-the-expert feature that enables administrators to send indicators of malicious activity to Darktrace for analysis by its experts.

Scot Petersen

Scot Petersen is a technology analyst at Ziff Brothers Investments, a private investment firm. Prior to joining Ziff Brothers, Scot was the editorial director, Business Applications & Architecture,...