How much security is in long-term support?
Everyone loves the idea of long-term support (LTS for short), where a vendor promises to provide security updates for a number of years: little friction and strong security. Does long-term support live up to that dream? It does not. What makes me say that?
The dream island named "long-term support" knows five types of monsters:
Meet monster type 1: "Out of scope"
Long-term support comes with a scope.
For example, in the Ubuntu LTS repositories, main and restricted are in scope while universe is out of scope. When I asked the Ubuntu Security team in March 2020 about the four unfixed CVEs in liburiparser1 0.8.4-1 of Ubuntu 18.04.4 LTS, all of which had fixes available upstream, the reply was not "We just fixed it" or even "We will fix it"; instead they explained that because uriparser is in universe, it is "community maintained", i.e. not part of the LTS deal.
To Canonical that may make sense, but it was an eye-opener for me at the time.
It is worth a side note here that Debian stable does not make a two-class distinction like that, which is one reason I started recommending Debian over Ubuntu.
To summarize: Many vulnerable packages are considered out of the scope of long-term support.
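Whether a given Ubuntu package falls inside the LTS scope can be read off its archive component, which apt exposes in the Section field of its package metadata (e.g. "universe/libs" versus a bare "libs" for main). Here is a minimal sketch of that check; the stanza text is a hypothetical example modeled on apt's "Packages" format:

```python
# Minimal sketch: decide whether an Ubuntu package stanza belongs to a
# component covered by LTS security maintenance. Universe and multiverse
# are "community maintained" and thus outside the LTS deal.
LTS_COVERED_COMPONENTS = {"main", "restricted"}

def component_of(stanza: str) -> str:
    """Extract the archive component from a package stanza's Section field.

    A Section like "universe/libs" names the component before the slash;
    a bare Section like "libs" means the package is in main.
    """
    for line in stanza.splitlines():
        if line.startswith("Section:"):
            section = line.split(":", 1)[1].strip()
            return section.split("/", 1)[0] if "/" in section else "main"
    raise ValueError("no Section field found")

def covered_by_lts(stanza: str) -> bool:
    return component_of(stanza) in LTS_COVERED_COMPONENTS

# Hypothetical stanza in the style of apt's metadata:
example = """\
Package: liburiparser1
Version: 0.8.4-1
Section: universe/libs
"""
print(covered_by_lts(example))  # False: universe is out of scope
```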
Meet monster type 2: "Never labeled as security upstream"
Not every project labels all security fixes as security fixes and goes on to register CVEs for them. That's a problem: without a security label, a bugfix does not get backported, so the vulnerability remains unfixed. Why are many security fixes not labeled as security fixes?
First, requesting CVEs is work, e.g. you need to carefully fill out a form for each CVE request answering questions like:
- What is the impact?
- Which versions of the software are affected?
- How would the vulnerability be attacked?
Second, if you are the one who caused the vulnerability, it takes a mindset that does not consider admitting to a vulnerability shameful. Not everyone is in a place where admitting to mistakes feels safe.
Third, not labeling something as a vulnerability can reduce the number of eyes on it, and so one could hope that fewer attackers find out about it. But that is security by obscurity, which does not help against attackers who are serious about a target, and it takes away options from everyone else.
To summarize: What was never labeled as a security fix in the first place will not be backported by long-term support; it will be known to attackers but not be fixed.
Meet monster type 3: "Backporting too complex"
Many security fixes are small and feasible to backport, but some are not. libexpat has had a few of these; one example is my fix protecting against billion laughs attacks in libexpat 2.4.0 (CVE-2013-0340), which was not backported to RHEL 8 even though RHEL 8 was affected, likely because of complexity. In Red Hat's own words, from a FAQ answer about will-not-fix decisions:
A "will not fix" status means that a fix for an affected product version is not planned or not possible due to complexity, which may create additional risk.
To summarize: If a security fix is considered too complex to backport, long-term support will be left vulnerable.
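For readers unfamiliar with the billion laughs attack mentioned above: a tiny XML document defines layered entities, each expanding to many copies of the previous one, so a few hundred bytes of input can balloon into gigabytes when parsed. The sketch below builds such a document and computes the expansion size arithmetically rather than actually parsing it (parsing with an unprotected parser would try to materialize the full output):

```python
# Sketch of the classic "billion laughs" XML bomb: ten layers of entities,
# each expanding to ten copies of the previous one. We only *build* the
# document and compute its expansion size; feeding it to an unprotected
# XML parser would attempt to materialize roughly 10**9 copies of "lol".
def billion_laughs(levels: int = 9, fanout: int = 10) -> str:
    entities = ['<!ENTITY lol "lol">']
    for i in range(1, levels + 1):
        prev = "lol" if i == 1 else f"lol{i - 1}"
        entities.append(f'<!ENTITY lol{i} "{("&" + prev + ";") * fanout}">')
    return (
        '<?xml version="1.0"?>\n<!DOCTYPE lolz [\n'
        + "\n".join(entities)
        + f"\n]>\n<lolz>&lol{levels};</lolz>"
    )

def expanded_size(levels: int = 9, fanout: int = 10) -> int:
    # Each entity level multiplies the payload by `fanout`.
    return len("lol") * fanout ** levels

print(len(billion_laughs()))  # a few hundred bytes of input ...
print(expanded_size())        # ... expanding to 3 * 10**9 characters
```

A robust fix has to track amplification across the whole entity graph during parsing, which is why such a change can be too invasive to backport cleanly.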
Meet monster type 4: "Backport arrived too late"
Unless a coordinated disclosure effort (sometimes called "responsible disclosure") delays publication of a vulnerability and the related upstream fixes until all backporting work has been completed, backported patches arrive later than patches to the latest release of the software. Attackers can use that time window to target long-term support systems in particular.
It should be noted that coordinated disclosure is work, in particular when many parties are involved. It seems fair to say that coordinated disclosure is often limited to the most severe vulnerabilities, or to cases where someone offers to do the coordination work on behalf of the security researcher. CERT/CC VINCE may be able to help security researchers with the tiring coordination and communication involved.
To summarize: The delay of backporting creates additional opportunities for attackers (that do not exist with rolling release approaches).
Meet monster type 5: "Criticality threshold not met"
In some contexts, a minimum CVSS score threshold, e.g. CVSS >= 7.0, must be met before there is an obligation to backport (or update). So any vulnerability classified as CVSS 5.5 would, in that context, come with a free pass to ignore it. A very bad idea in my book, in particular given how often CVSS scores turn out too high or too low in evaluation.
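The mechanics of such a threshold policy are trivial, which is part of the problem: a single number decides the fate of a fix. A minimal sketch (the CVE identifiers and scores below are hypothetical examples, not real advisories):

```python
# Minimal sketch of a "criticality threshold" backporting policy: any CVE
# scoring below the cutoff gets a free pass, regardless of whether the
# score is accurate. Identifiers and scores are made up for illustration.
CVSS_BACKPORT_THRESHOLD = 7.0

def must_backport(cvss_score: float,
                  threshold: float = CVSS_BACKPORT_THRESHOLD) -> bool:
    return cvss_score >= threshold

advisories = {
    "CVE-XXXX-0001": 9.8,  # rated critical: gets a backport
    "CVE-XXXX-0002": 5.5,  # rated medium: ignored under this policy
}
ignored = [cve for cve, score in advisories.items()
           if not must_backport(score)]
print(ignored)  # the CVSS 5.5 issue is simply skipped
```

Note that if the 5.5 rating was an underestimate, say, because code execution could not be ruled out, the policy still skips the fix.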
I'm not sure what Red Hat's precise cutoff point is for MinGW packages, but they do have a (confirmed) policy to only fix vulnerabilities considered critical in MinGW packages, and it shows: mingw-expat in RHEL 8 is missing multiple security fixes.
It is worth noting that when security researchers are not sure whether remote code execution is possible (a common case), they should mention that they cannot rule out code execution.
To summarize: A vulnerability may be considered not critical enough to fix by vendors, and evaluation of criticality is complex and unreliable.
Is any of that surprising?
If your answer is "yes", I would be curious to hear what conclusions you draw from it.
Sebastian Pipping