
Burning Questions on Log4j Answered

Log4j is one of the many building blocks that are used in the creation of modern software.

by K. Vatsala Devi
Log4j caused chaos that is still ongoing

As we are all aware by now, the Apache Log4j vulnerability shook the industry last December and created chaos that we are still witnessing today. Tim Mackey, Principal Security Strategist with the Synopsys Cybersecurity Research Centre, answers some tough questions on the aftermath of Log4j and its repercussions.

Q: Are we seeing the end of the era of open source? Why?

A: While it might be tempting to view a major vulnerability as a sign that open source is somehow deficient, the reality is far from that. Open-source software is neither more nor less secure than commercial software, and most commercial software either includes or runs on open source technologies. Open source simply means that the software is developed in a manner where the source code is available to anyone who wants it.

What we are seeing with the Log4j response from the Apache Log4j team is exactly what we’d expect to see – a team that is taking the software they produce seriously and being responsive to the needs of their install base.

Considering that they are volunteers, such a response is indicative of the pride of ownership we often see within open source communities. In reality, an incident like Log4j is likely to improve open source development as a whole – much in the same way that Heartbleed improved development practices of both open and closed source development teams.

Q: Should there be a commercial replacement to protect companies from security implications after Log4j? Why?

A: This is a really common thought pattern, but one that misunderstands how software development actually works. Every software component has what’s known as an “interface.” That interface might be in the form of an API if it’s a web service, or it might represent the functions that can be called when the component is loaded into an application.

What that interface looks like, how it behaves, and what types of data it takes and in what format are all decisions the development team creating the component makes as it writes the component. Those decisions can also change as new features are implemented or as the code evolves. Log4j has an interface for each of its major versions, and they are not the same.
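To make the idea of an interface concrete, here is a minimal sketch contrasting the two major Log4j APIs. The package and class names reflect the published Log4j 1.x and 2.x APIs; the surrounding service classes and log messages are invented purely for illustration.

```java
// Log4j 1.x interface: the logger lives in org.apache.log4j.
import org.apache.log4j.Logger;

class LegacyService {
    private static final Logger log = Logger.getLogger(LegacyService.class);

    void handleRequest(String user) {
        // 1.x callers typically build the message string themselves.
        log.info("Handling request for " + user);
    }
}
```

```java
// Log4j 2.x interface: the logger is obtained through LogManager
// in the org.apache.logging.log4j package.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class ModernService {
    private static final Logger log = LogManager.getLogger(ModernService.class);

    void handleRequest(String user) {
        // 2.x adds parameterised messages, among other changes.
        log.info("Handling request for {}", user);
    }
}
```

Code written against one of these interfaces will not compile against the other without changes, which is exactly why any drop-in replacement has to reproduce the interface faithfully.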

For a commercial replacement of any component to exist, there must be an available market for it. In the case of Log4j, the component logs message data to a log file. There is nothing sexy about it, and there are many other ways of logging data than just Log4j. That means there really isn’t much of a commercial software market for a replacement.

But, let’s assume someone was willing to make that investment to have a commercial replacement for Log4j. In that case, they would need to both re-implement the current Log4j interface and then write what is presumed to be “more secure code.”

The notion that open source is somehow less secure than commercial software may have been true decades ago, but it is far from true today. Still, let’s assume that our fictitious company was able to create a perfect logging utility that faithfully reproduced the Log4j interface. Once they’ve created that replacement, they need to market it and ensure that it doesn’t break any software using Log4j.

This is a fairly tall order, and there is a far simpler solution – that fictitious company could simply invest their time in improving Log4j. Since Log4j is open source, that means not only that the source code is readily available, but also that anyone who wants to modify it can do so under the terms of the Log4j license.

So our fictitious company could take Log4j, create a branch of it using a process known as “forking the code,” and implement whatever fixes are missing. They could then submit their changes, and after a suitable review by the Log4j team, those fixes could be included in Log4j. At that point, anyone and everyone who uses Log4j could simply update their Log4j version as they’ve been doing.

This entire process is made possible through an open source concept known as “community engagement.” Under this process, an entity that depends upon Log4j (and you all know who you are now) can review, revise, and improve the code.

While it’s often thought that open source is just “free software,” somewhere, someone is investing their time in creating and improving upon each and every open source project. If the users and consumers of each project were to invest their time and energy in reviewing and improving the code they depend upon, then not only would we have more robust implementations, but those implementations would be more sustainable as technology moves along.

After all, many vulnerabilities are simply exploits of how code behaves on modern hardware or with modern paradigms, where that hardware or those paradigms simply didn’t exist when the code was originally written. For a perfect example of such a scenario, look at how the vulnerability known as Dirty Cow came to be.

Q: What kind of governance should be put in place, if any, to help identify and mitigate vulnerabilities sooner rather than later?

A: The idea of identifying and mitigating vulnerabilities requires us to define some roles up front. Most people expect their software suppliers, that is to say the people who produce the software they depend upon, to test that software. The outcome of that testing is a set of findings highlighting the weaknesses in the software the supplier produces.

In an ideal world, each of those weaknesses would be resolved prior to the software shipping. In the real world, some of those weaknesses will be fixed, some will be marked as “no plan to fix,” and some will optimistically be fixed in a future release. What that list of weaknesses contains, and which ones were fixed, isn’t something a supplier typically divulges. No single tool can find all weaknesses, and some only work if you have the source code while others require a running application.

You’ll note that I’ve been saying weakness rather than vulnerability, because vulnerability has a specific and simple meaning. In software, a vulnerability is simply a weakness that can be exploited, or that has a reasonable chance of being exploited. Most, but not all, vulnerabilities are disclosed via a centralised system known as the National Vulnerability Database, or simply the NVD. While the NVD has its roots in the US and is maintained by the US Government, its contents are available to all and are replicated in multiple countries.

From a governance perspective, monitoring for changes in the contents of the NVD is a good way of staying on top of new vulnerability disclosures. The problem is that the NVD updates more slowly than media coverage, so with major vulnerabilities like Log4Shell, Heartbleed and Dirty Cow, the team discovering the vulnerability might create a branded name for it in an effort to broaden awareness of the issue. But creating a governance policy that relies on monitoring media coverage of a cyber event isn’t a great practice.
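As a sketch of what that monitoring could look like, the snippet below polls the NVD for CVE records modified in the last 24 hours using Java’s built-in HTTP client. The endpoint and query parameters reflect my reading of the NVD 2.0 REST API and should be checked against the current NVD documentation; the one-day window and the plain print-out are illustrative only.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class NvdWatch {
    public static void main(String[] args) throws Exception {
        // Look back 24 hours; a scheduled job would track its own high-water mark instead.
        OffsetDateTime end = OffsetDateTime.now(ZoneOffset.UTC).truncatedTo(ChronoUnit.SECONDS);
        OffsetDateTime start = end.minusDays(1);

        // Assumed NVD 2.0 endpoint and date parameters; confirm the exact
        // parameter names and accepted date format against the NVD API docs.
        URI uri = URI.create("https://services.nvd.nist.gov/rest/json/cves/2.0"
                + "?lastModStartDate=" + start + "&lastModEndDate=" + end);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A real governance workflow would parse the JSON body and match each CVE
        // against an inventory of deployed components (see the SBOM sketch below).
        System.out.println("NVD responded with HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}
```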

So, if media coverage as an input to vulnerability management is a bad idea, and the NVD is a bit slow to provide all details, what is the best governance policy? That comes from a type of security tool known as “Software Composition Analysis”, or SCA. An SCA tool looks at either the source code for an application, or the executable or libraries that define the application, and attempts to determine which open source libraries were used to create that application. The listing of those libraries is known as an SBOM or Software Bill of Materials.

Assuming the SCA software does its job properly, a governance policy can then be created that maps the NVD data to the SBOM, and you know what to patch. Except that there is still the NVD’s lag to account for. Some of the more advanced SCA tools solve that problem by issuing advisories that proactively alert you when an NVD entry is pending, with the details of that entry augmented by the SCA vendor. Some of the most advanced tools also invest in testing or validating which versions of the software are impacted by the vulnerability disclosure.
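To show what that mapping amounts to, here is a toy sketch. The application inventory and the vulnerability feed are reduced to hard-coded values for illustration (only the Log4Shell entry, CVE-2021-44228 against log4j-core 2.14.1, is real), and a real SCA tool does far more, such as version-range matching and tracking transitive dependencies, but the core governance step is essentially a join between the SBOM and the vulnerability data.

```java
import java.util.List;
import java.util.Map;

public class PatchPlanner {
    // One SBOM entry: a single component at a single version (heavily simplified).
    record Component(String name, String version) {}

    public static void main(String[] args) {
        // SBOM for one application, as an SCA scan might report it.
        List<Component> sbom = List.of(
                new Component("org.apache.logging.log4j:log4j-core", "2.14.1"),
                new Component("com.example:internal-utils", "1.3.0"));

        // Vulnerability data keyed by affected component; in practice this would be
        // assembled from NVD records plus any vendor-augmented advisories.
        Map<Component, String> knownVulnerable = Map.of(
                new Component("org.apache.logging.log4j:log4j-core", "2.14.1"),
                "CVE-2021-44228 (Log4Shell) - upgrade to a fixed release");

        // The governance step: anything in the SBOM that matches the feed
        // becomes a patching action item.
        for (Component c : sbom) {
            String advisory = knownVulnerable.get(c);
            if (advisory != null) {
                System.out.println("PATCH " + c.name() + " " + c.version() + " -> " + advisory);
            }
        }
    }
}
```

The value of the vendor-augmented advisories mentioned above is simply that the right-hand side of this join gets populated before the corresponding NVD entry is fully fleshed out.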

While SCA software can close the gap between disclosure and identification, it has a fundamental limitation: if the SCA software hasn’t scanned all of your applications, then at best it can only flag new vulnerability disclosures for a subset of your applications. From a governance policy perspective, it then becomes an IT function to identify all software, and a procurement function to ensure that all software, including updates and free downloads, both comes with an SBOM and has that SBOM validated using SCA software.

Since software is available in both source and binary formats, it’s critical that governance teams heading down this path select SCA software that can effectively process software in all forms and formats. Such a governance policy would assist in identifying new vulnerability disclosures and their impact on the business, but it would leave the topic of effective mitigation to a different policy, since mitigation would require application testing.

Q: Should better incentives be introduced to encourage the detection of vulnerabilities in OSS? Are there other incentives apart from financial ones?

A: Detection of vulnerabilities in open source isn’t a problem, but detection of software defects representing a weakness that could be exploited is an important topic. This distinction matters because vulnerabilities might not represent flaws in code, but instead flaws in deployment configuration or changes in hardware. It’s important to note that open source and closed source software have an equal potential for security issues, but with open source it’s possible for anyone to identify those issues.

Since it’s possible for anyone to identify issues, the question really is how many people are actually attempting to identify issues in open source and how diligent those efforts are. Part of the problem is a sentiment in which consumers or users of open source projects behave as if they expect the project to act like a commercial software vendor.

If you look at the issues list of any reasonably popular open source project on GitHub, you’ll see feature requests and comments about when certain problems might be resolved. The modern open source movement was founded on the principle that if you didn’t like the way the code worked, you were free to modify it and address whatever gaps in functionality you perceived. Feature requests in GitHub issues and complaints about serviceability carry an implicit expectation that a product manager is on the receiving end of those requests and that they will be added to a roadmap and eventually released – all for free.

In reality, gaps in functionality, and even perceived bugs, represent opportunities not to request free programming services but instead to contribute to the future success of code that is clearly important enough to the person complaining that they took the time to complain. Yes, some people won’t know the programming language used by the project, but expecting others to prioritize a complaint from an unknown third party over changes that solve problems for active contributors isn’t realistic. As much as anything, open source functions through the altruism of contributors.

Over recent years, we’ve heard core contributors of popular open source projects express frustration about the profits large businesses make from the use of their software. While it’s easy to relate to someone putting their energy into a project only to have a third party profit from the effort, the reality is that if a third party is profiting from the efforts of an open source development team, then they should be contributing to its future success. If they don’t, they run not only the risk that the code in question might change in ways they didn’t expect, but also the risk that, when security issues are identified and resolved, they might face delays in applying those fixes.

After all, if a business isn’t taking the time to engage with teams creating the software that powers their business, then it’s likely they don’t know where all the software powering their business originates and can’t reliably patch it. 
