LAS VEGAS—Even as artificial intelligence is playing a bigger role in stopping malware and other cyber-threats, attackers are finding ways to get around it and even using AI to enhance their own attack strategies.
“Can we break machine learning? The answer is, yes we can,” said Hyrum Anderson, Principal Data Scientist for security vendor Endgame, during his presentation at DefCon here last week. “It’s actually become quite fashionable to break machine learning.”
AI, or more specifically a form of AI known as machine learning, has been coded into next-generation antivirus (AV) programs. Traditional forms of AV are based on signatures, which are identifiers of known security threats. But signature-based AV is not enough, because attackers can quickly change malware or disguise it enough to evade AV signatures.
This is where machine learning comes in and not a moment too soon, because 357 million new malware threats were detected in 2016, according to security vendor Symantec. Traditional antivirus programs simply cannot keep up.
AI-based threat detection systems are designed to catch anything that traditional AV misses, at least in theory. Machine learning models are not foolproof, however. They can determine only within a certain degree of confidence if a particular file is malicious or benign, Anderson explained. If attackers can learn how a machine-learning detection model works, they may be able to tweak their malware files enough that they can sneak through.
“The idea is to insert a file that our model knows is malicious with high confidence, and make a few subtle changes to the bytes or modify elements that don’t break the file format or its behavior and then trick our model into thinking that’s benign,” said Anderson.
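The attack Anderson describes can be illustrated with a toy sketch. This is not his actual method, and the feature names, weights, and threshold below are all invented for the example; a real detection model would be trained on millions of samples. The idea is the same, though: nudge only the "cosmetic" properties of a file, the ones that don't break its format or behavior, until the model's score slips below the detection threshold.

```python
# Hypothetical linear model: score = sum(weight * feature value);
# scores at or above THRESHOLD are flagged as malicious.
WEIGHTS = {
    "section_entropy": 0.9,     # packed/encrypted code raises entropy
    "num_imports": -0.02,       # benign apps tend to import many APIs
    "has_signature": -1.5,      # signed binaries look more trustworthy
    "overlay_bytes_kb": 0.01,   # data appended past the last section
}
THRESHOLD = 1.0

def score(features):
    return sum(WEIGHTS[name] * value for name, value in features.items())

def evade(features, mutable, step=1.0, max_rounds=500):
    """Greedily nudge only 'mutable' features -- ones an attacker could
    change without breaking the file -- until the score drops below
    the detection threshold."""
    features = dict(features)
    for _ in range(max_rounds):
        if score(features) < THRESHOLD:
            return features  # now classified benign
        for name in mutable:
            # push each feature in the direction that lowers the score
            features[name] += step if WEIGHTS[name] < 0 else -step
    return features

malware = {"section_entropy": 2.0, "num_imports": 5,
           "has_signature": 0, "overlay_bytes_kb": 40}
print(score(malware) >= THRESHOLD)  # flagged as malicious

evaded = evade(malware, mutable=["num_imports", "overlay_bytes_kb"])
print(score(evaded) < THRESHOLD)    # same payload, now scores benign
```

The defender's weakness here is exactly what Anderson points out: if attackers can learn (or approximate) how the model scores files, small, behavior-preserving edits are often enough to cross the decision boundary.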
AI on the attack
AI methods are also being applied directly by attackers as a means to aggregate and analyze data to help target and customize their attacks. “We are now seeing systematic attacks against industry sectors” using AI methods, said Vincent Weafer, vice president of McAfee Labs.
With new cloud-based models and compute engines, machine learning is becoming inexpensive and accessible to whoever wants to use it. At the Black Hat conference here, McAfee announced that its flagship product, McAfee ATD (Advanced Threat Defense) 4.0, is now augmented with machine learning models.
Another company, Darktrace, applies machine learning to network security. Darktrace’s product, dubbed the Enterprise Immune System, creates a model based on normal usage for a network and then applies an AI system that can determine if certain activity is malicious or benign.
If questionable activity is detected, it sends a warning to security administrators, said CEO Nicole Eagan in an interview with eWEEK at Black Hat. Darktrace’s Antigena product can also take action on its own to shut off the activity, Eagan said.
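The general idea behind this kind of "immune system" can be sketched in a few lines. To be clear, this is not Darktrace's actual algorithm, and the hosts and traffic numbers are made up for the example: it simply models "normal" per-host traffic as a mean and standard deviation learned from history, then flags observations that fall far outside that baseline.

```python
# Minimal baseline anomaly detection: learn what "normal" looks like
# per host, then flag large deviations from it.
from statistics import mean, stdev

def build_baseline(history):
    """history: {host: [bytes sent per hour, ...]} observed during a
    learning period. Returns (mean, stdev) per host."""
    return {host: (mean(xs), stdev(xs)) for host, xs in history.items()}

def is_anomalous(baseline, host, observed, z_threshold=3.0):
    """Flag activity more than z_threshold standard deviations above
    the host's learned norm."""
    mu, sigma = baseline[host]
    return (observed - mu) > z_threshold * sigma

# Hypothetical host sending roughly 110-140 KB/hour during learning.
history = {"workstation-17": [120, 140, 110, 130, 125, 135]}
baseline = build_baseline(history)

print(is_anomalous(baseline, "workstation-17", 128))   # typical hour
print(is_anomalous(baseline, "workstation-17", 5000))  # sudden spike
```

A production system would of course model far more than byte counts, but the pattern is the same: no signatures, just a learned notion of normal and an alert when behavior strays from it.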
Darktrace this month is announcing a new version of the Enterprise Immune System, she said. In the new version, Darktrace will add a mobile app that will allow administrators to react to recommendations more easily. It will also include a new 3-D visualizer and an ask-the-expert feature that enables administrators to send signs of malicious activity to Darktrace for analysis by its experts.
The future of AI
Besides being able to detect malware that is not registered with a signature, machine-learning systems will spare vendors and security administrators the task of constantly updating their threat detection systems. While signature-based systems must be updated daily, AI models have a longer shelf life—as much as six months before needing to be adjusted, said Homer Strong, Director of Data Science at Cylance.
He also said that the industry is just getting started with using AI to augment security. “Cylance was ahead in applying well-known machine learning techniques without a lot of original research. But now in special domains like security, companies are starting to invest in original research,” he said, adding that AI algorithms will continue to evolve and improve as more AI experts enter the security field.
Experts say that as good as AI is getting, it remains only one part of the best practice of “security in depth.” Endpoint and network security, both traditional antivirus and AI-based, must be coupled with other forms of protection, including intrusion detection, encryption, data loss prevention and many others, including the emerging role of “threat hunter.”
But before users begin to apply those strategies, they must still tackle the biggest problems out there, which include software patching and system updates, file backups, and user training. At Black Hat, conference organizers released their latest attendee survey, which showed that the number one concern of security administrators (38 percent, up from 28 percent the year before) is end users who violate security policy and are too easily fooled by social engineering attacks.
Some things never change.
Scot Petersen is a technology analyst at Ziff Brothers Investments, a private investment firm. He has an extensive background in the technology field. Prior to joining Ziff Brothers, Scot was the editorial director, Business Applications & Architecture, at TechTarget. Before that, he was the director, Editorial Operations, at Ziff Davis Enterprise. While at Ziff Davis Media, he was a writer and editor at eWEEK. No investment advice is offered in his blog. All duties are disclaimed. Scot works for a private investment firm, which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.