Modern applications are often built from a jigsaw puzzle of pieces that fit together to enable a full stack of functionality.
The challenge from a security perspective is that not every organization or application properly validates all of those pieces, which can be a risky proposition. Case in point: on Nov. 26 it was publicly revealed that event-stream, a widely deployed open-source Node.js module with more than two million downloads, had been injected with malicious code designed to steal from cryptocurrency wallets.
“Version 0.1.1 of flatmap-stream is considered malicious,” the NPM project wrote in an advisory. “This package runs an encrypted payload that we currently do not have further information on. If you happen to find this package in your environment you should respond as if the system was compromised.”
As it turns out, the encrypted payload takes specific aim at cryptocurrency users. In a GitHub issue discussion, developer Ayrton Sparling commented that the target appears to be Copay-related libraries, and that the malware only executes successfully when a matching package is in use. Copay is a bitcoin cryptocurrency wallet developed by BitPay.
“We have learned from a Copay GitHub issue report that a third-party NodeJS package used by the Copay and BitPay apps had been modified to load malicious code which could be used to capture users’ private keys,” BitPay wrote in an advisory. “We are still investigating whether this code vulnerability was ever exploited against Copay users.”
So how did the malware get into the code in the first place? The original maintainer of the event-stream module simply gave the alleged attacker the rights to maintain it, and the new maintainer then added the malicious flatmap-stream package as a dependency.
“He emailed me and said he wanted to maintain the module, so I gave it to him,” developer Dominic Tarr wrote in a GitHub comment. “I don’t get anything from maintaining this module, and I don’t even use it anymore, and haven’t for years.”
Analysis
So there you have it: a widely deployed module was left wide open to malicious code injection because the original maintainer gave the alleged attacker full rights to maintain the code. It seems absurd that it was that easy for the code to be manipulated in a malicious way, but those are the facts. On the positive side, because the project is open source, the code commits are all a matter of public record (even though the malicious payload itself was encrypted), which made it possible for users and researchers to determine the root cause and remediate the issue.
For this particular project, there was no DevSecOps gating, that is, automated checks as part of the build process to help validate that nothing malicious has been introduced. Scanning application code and its dependencies for bad patterns and known issues is a solid best practice, and even if an upstream project doesn't do that, it's a good idea for application developers to run some form of scanning on their own applications.
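One lightweight way to add such a gate is to scan the dependency lockfile during the build and fail if a known-bad package version shows up. What follows is a minimal sketch in TypeScript for Node.js, not a replacement for a real scanner such as npm audit; the blocklist contents and the assumption of the older v1 package-lock.json layout (nested "dependencies" objects) are illustrative, with flatmap-stream 0.1.1 taken from the npm advisory quoted above.

```typescript
// check-lockfile.ts: illustrative build gate; the blocklist and lockfile layout
// are assumptions for this example, not part of any official npm tooling.
import { readFileSync } from "fs";

// Known-bad package versions to reject. flatmap-stream@0.1.1 is the version
// npm's advisory flagged as malicious.
const blocklist: Record<string, string[]> = {
  "flatmap-stream": ["0.1.1"],
};

// Shape of the (older, v1-style) package-lock.json dependency tree.
interface LockDependency {
  version: string;
  dependencies?: Record<string, LockDependency>;
}

interface LockFile {
  dependencies?: Record<string, LockDependency>;
}

// Recursively walk the lockfile tree and collect any blocklisted name@version pairs.
function findBadPackages(
  deps: Record<string, LockDependency> | undefined,
  hits: string[] = []
): string[] {
  if (!deps) return hits;
  for (const [name, info] of Object.entries(deps)) {
    if (blocklist[name]?.includes(info.version)) {
      hits.push(`${name}@${info.version}`);
    }
    findBadPackages(info.dependencies, hits);
  }
  return hits;
}

const lock: LockFile = JSON.parse(readFileSync("package-lock.json", "utf8"));
const bad = findBadPackages(lock.dependencies);

if (bad.length > 0) {
  console.error(`Blocked packages found: ${bad.join(", ")}`);
  process.exit(1); // fail the build so the compromised dependency never ships
}
console.log("No blocklisted packages found in package-lock.json");
```

Wiring a check like this (or an off-the-shelf tool such as npm audit) into the CI pipeline means a compromised release fails the build instead of quietly shipping to users.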
The supply chain of components that make up modern applications has come under scrutiny from security researchers in recent years, because it is often a weak link. Simply pulling code from GitHub or npm and trusting that it isn't going to do something malicious is not a best practice for any enterprise.
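A related discipline is to pin dependencies to exact versions and install from a committed lockfile (for example with npm ci), so that an upstream publish cannot silently change what gets installed. The sketch below is a hypothetical helper, again in TypeScript, that fails the build when package.json contains floating semver ranges; the file name and the strict exact-version policy are assumptions for illustration, not a standard npm feature.

```typescript
// pin-check.ts: hypothetical helper for illustration; not an official npm feature.
import { readFileSync } from "fs";

interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

const pkg: PackageJson = JSON.parse(readFileSync("package.json", "utf8"));

// Exact versions look like "1.2.3"; ranges such as "^1.2.3", "~1.2.3" or "*"
// allow whatever the upstream publisher ships next to be pulled in automatically.
const exactVersion = /^\d+\.\d+\.\d+$/;

const loose = Object.entries({ ...pkg.dependencies, ...pkg.devDependencies }).filter(
  ([, version]) => !exactVersion.test(version)
);

if (loose.length > 0) {
  console.error("Unpinned dependency ranges found:");
  for (const [name, version] of loose) {
    console.error(`  ${name}: ${version}`);
  }
  process.exit(1); // force an explicit review before trusting a floating range
}
console.log("All dependencies are pinned to exact versions");
```

Pinning doesn't prevent a malicious release on its own, but it makes any change to the dependency tree an explicit, reviewable decision rather than an automatic one.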
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.