EU CRA and the Open Source Ecosystem: A Suggestion

UPDATE: On December 1st the EU agreed on a version of the Cyber Resilience Act that appears to have substantially addressed the concerns in the post below. Further analysis awaits, but do know that the text that follows is now mostly of historical interest!

UPDATE 2: Here is the final compromise text of the Cyber Resilience Act.

UPDATE 3: Here is an analysis of what it means for open source.

Open source is important, and we want all software to be secure, including the open source parts. Measured by lines of code, most commercial products are based mostly on open source, so it really matters.

As I wrote earlier, the EU has written the Cyber Resilience Act (CRA) to improve the security of hardware and software being marketed to the EU. There are lots of problems with this act (as also described in this second post), but here I want to delve into some specific issues surrounding open source and the open source ecosystem on which most products are based.

Luca Bertuzzi and I also discussed the ideas on this page in the Euractiv Tech Brief podcast (20 minutes).

The short version: different versions of the CRA contain ambiguous and difficult attempts to regulate some open source projects/foundations directly. This is fraught with difficulties, since much of open source is informal in nature and, being available free of charge, has no budget for a compliance department. Yet serious commercial vendors reap the benefits of this free open source.

This post proposes to modify the due diligence clause of the CRA to mandate that commercial manufacturers that rely on open source components invest in the security of that open source, perhaps as part of industry consortia helping create security attestation documents. This leads to better open source and a clear allocation of responsibility, and puts the cost where people are actually making money. And even better: the CRA already contains the core of this idea. If the EU continues with its current ambiguous regulation, it will certainly damage open source innovation in Europe.

For the longer version, please do read on.

While we definitely want the most secure open source, we also do not want to oblige every open source developer to hire a compliance department, because that would be the end of the concept. The CRA could however also do other things to make open source more secure.

In its earlier versions the act appeared to be somewhat clearer: it would regulate software and hardware sold for actual money (or monetized through user data). There were also explicit words saying that the CRA should not hamper open source development, but these words are gone now.

So there is room for doubt on how and when the CRA would apply to open source authors and projects. There’s a section on this ambiguity at the end of this document.

Getting the best security

If you buy a modern phone or TV, the majority of the source code of the software in there is in fact open source. Manufacturers build their own user interfaces and functionality on top of tons of open source infrastructure.


On an iPhone, go to Settings, General, Legal & Regulatory, Legal Notices. This lists all the (open source) components shipping with the phone. Try scrolling to the end, you won’t make it!

To improve security, work needs to be done on both the manufacturer specific code and the underlying components, whether open source or not.

The CRA already contains a distinction between manufacturer authored code and third party components (recital 18a, article 10(4)). The CRA says that manufacturers must comply with many rules (including ’essential requirements’) for their own software, but there is a distinct regime for components sourced from elsewhere. These must be subject to ‘due diligence’, for which the CRA offers the following suggestions, depending on the risk:

  • verifying that the manufacturer of a component has demonstrated conformity with this Regulation
  • verifying that a component is free from vulnerabilities registered in publicly accessible vulnerability databases (a sketch of how this check could be automated follows this list)
  • verifying that a component receives regular security updates
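
To make the second check concrete, here is a minimal sketch of how a manufacturer could automate it. Note that the CRA does not prescribe any particular tool or database: the component inventory below is hypothetical, and I am using the public OSV.dev vulnerability database simply because it offers a convenient query API.

```python
# Minimal sketch: check a (hypothetical) component inventory against the
# public OSV.dev vulnerability database. The CRA itself does not prescribe
# any particular tool or database; this merely illustrates the idea.
import json
import urllib.request

# Hypothetical component inventory, e.g. extracted from an SBOM.
COMPONENTS = [
    {"name": "jinja2", "ecosystem": "PyPI", "version": "2.4.1"},
    {"name": "lodash", "ecosystem": "npm", "version": "4.17.21"},
]

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(component: dict) -> list:
    """Ask OSV.dev which known vulnerabilities affect one component version."""
    query = {
        "version": component["version"],
        "package": {
            "name": component["name"],
            "ecosystem": component["ecosystem"],
        },
    }
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # OSV returns an empty object when no vulnerabilities are known.
    return result.get("vulns", [])


for component in COMPONENTS:
    ids = [vuln["id"] for vuln in known_vulnerabilities(component)]
    status = ", ".join(ids) if ids else "no known vulnerabilities"
    print(f"{component['name']} {component['version']}: {status}")
```

In practice such an inventory would come from a software bill of materials (SBOM), which the CRA also expects manufacturers to draw up for their products.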

Improving open source

It is quite straightforward to regulate a consumer electronics company. They already have to adhere to a ton of product regulation. Adding more rules for software security is a burden to them, but it is a well-trodden path.

It is however far harder to lay down the law on often informal open source projects, especially since they tend to be scattered around the world and often have no budget or compliance department.

Some open source projects have foundations looking after their interests, and newer CRA proposals attempt to regulate these non-profits and the software they might be fostering. But since it is hard to define when an open source project is sufficiently organized to regulate it, everyone will live under a cloud of uncertainty. You might only find out the CRA applies to you because someone launches an infringement procedure.

Due diligence

Currently, manufacturers of products and software ship huge amounts of open source software (components), but mostly have no relationship at all with the producers of that open source.

The user interfaces and communications of cars are mostly based on open source software, often without the authors even knowing about it. They do sometimes find out later if they buy such a car.

This is a pretty crazy situation.

Even with the current CRA proposal, life here is going to change massively. Manufacturers will have to do something about their open source components because of the due diligence requirement.

This could well cause industry to develop a better relationship with the open source projects it relies on so heavily.

It would for example be possible to form industry coalitions to collectively perform/fund due diligence, improvements or audits on major open source software projects.

With the right communication and legislation, the EU could foster and encourage these developments, and truly be a force for good. Open source would get better, and so would the devices and software built on top of that open source.

One way to make this happen is to add this idea to the CRA due diligence article. In this way, manufacturers could comply with their due diligence requirement by participating in such industry coalitions to improve and audit the security of open source. The EU would have to indicate that such coalitions would not be seen as problematic from a competition standpoint.

This would get us the desired result: safer software, safer open source, without having to get into fiddly definitions on which parts of open source could be regulated directly. Instead, we get industry to improve the components that it is currently using for free.

Note that this is not such a crazy idea - in the physical world, producers will always pay actual money for components. It stands to reason that if industry wants to rely on components supplied for free, someone should be paying for the security aspects.

It appears the EU has thought about this a bit already: there is now language saying that standards might be created for voluntary ‘attestations’ of which elements of the CRA a piece of open source software adheres to. The text explicitly mentions that both developers and (industry) users of open source products could be creating such attestations.

In addition, article 11(7) in the Council and IMCO versions of the CRA mentions that users of components who find and fix security issues in software should pass those fixes along to the (open source) author of the component.

This all seems to be in line with the main idea above: get manufacturers that use open source to invest in those components and contribute to improving their security.

What about open source software that is not used by manufacturers?

If a piece of open source software is not a component used by manufacturers, it might not benefit from the mechanism outlined above. However, such software is then also clearly not a significant security component, since apparently no one is shipping it. There are a few exceptions to this rule, like the excellent Thunderbird email client, which as far as I’m aware is not embedded into anything commercial. But this might be the exception that proves the rule.

Finally

In a way it is very good to see that the council, parliament and commission realise how important open source is for our (cyber)security. But I urge everyone to let the regulation flow through the entities making money using that open source. They have the budget to make things happen, and it would in any case be wonderful if manufacturers started to work more closely with the open source software they are basing their commercial products on.

If the EU manages to oblige industry to seriously enhance open source scrutiny and security, this would truly be wonderful.

This would be far better than the current ambiguity over which open source activities would have to be regulated directly, an ambiguity that incidentally puts all of open source in Europe under a cloud of uncertainty, with dire consequences for our innovative abilities.

Some background

The proposed alternative: regulating open source directly

From various places it is clear that newer versions of the CRA contain the confusing concepts of the ‘open-source software steward’ and ‘collaborative development’.

In these new texts, the CRA partially exempts open source software that is developed standalone, without a steward, but in a collaborative fashion. This collaborative fashion is however defined very oddly: no single person is allowed to “exercise control” over the development of the software.

If I read this right, my own SQLiteWriter product (for example) would not be considered as “developed collaboratively”, since it is currently just me doing all the typing. But if I give a friend commit rights, and we only coordinate together, suddenly the CRA requirements go away again? I might need to get a friend.

There is also talk of “open-source software stewards”, which are meant to include certain open source software foundations. And this is a very weirdly defined thing. To be such a steward:

  • you need to be making open source products available on the market
  • you must have as purpose to ensure the viability of those open source products
  • the software must be developed through collaborative development (?!)
  • you must be providing infrastructure, including by hosting and managing software development collaboration platforms, by holding trademarks, hosting source code or software,
  • or be governing and managing free and open-source software products with digital elements;

This is hard to grasp: collaborative software development was defined as having no single entity in control. Yet this new steward must be governing and managing that very software, which is supposedly developed collaboratively.

There is a substantial list of processes that must be implemented by the open-source software steward. This includes identifying and documenting vulnerabilities, remediating such vulnerabilities “without delay”, performing and documenting effective tests and reviews of security, and providing proper descriptions of security updates.

Among other things, the open-source software steward must also have a coordinated vulnerability disclosure policy, and have infrastructure for distributing patches securely.

Open-source stewards must also collaborate with regulatory authorities if they come asking whether the steward has in fact implemented all these things. Stewards must also cooperate if market surveillance authorities request that they eliminate cybersecurity risks in the products.

There are also requirements to report any actively exploited vulnerabilities in the software to ENISA. Security incidents must also be reported to ENISA within 24 hours, and to users of the software without delay.

There will not be any financial penalties if open-source software stewards do not comply. But their lives could nevertheless become quite miserable.

Commercial activity?

On a closer reading, there now appears to be some worrying ambiguity in the council version of CRA. It has warm words for open source, and suggests that only software supplied as part of a “commercial activity” is covered, and notes that some typical open source things by themselves need not be a commercial activity.

But even after much searching, no one has found a clear EU definition of commercial activity. And more worryingly, the documents that have been found suggest that non-profits could very well be regarded as performing commercial activities.

In the NON-BINDING Blue Guide we find a worryingly vague definition of commercial activity, full of exceptions and inclusions, plus a confusing footnote that says it does not apply to providing “intellectual property” (which is mostly what software is). It also states that ‘occasional’ deliveries by non-profits might not be a commercial activity. But what if you are a non-profit that does regular releases of software?

Since the whole operation of the CRA hinges on performing a “commercial activity”, we need absolute clarity on what this means.

Additionally, there should be mechanisms to find out whether the EU regards specific things as commercial activities or not. The current text says this should be decided on a “product by product basis”, which is not great guidance.