The Cyber Resilience Act: What Software Vendors in the EU Market Need to Change

Sandor Farkas - Founder & Lead Developer at Wolf-Tech

Expert in software development and legacy code optimization

A Berlin-based payments SaaS we audited last month had three things in good order: a working SOC 2 report, a recent penetration test, and a documented incident response plan. They were not, however, ready for the Cyber Resilience Act. Their on-premise installer for the bank-side connector — a side product that maybe 200 enterprise customers run — would, by December 2027, count as a "product with digital elements" placed on the EU market. That installer had no SBOM, no documented secure-by-default configuration, no published vulnerability disclosure policy, no defined support window, and no plan for the 24-hour reporting clock that starts ticking the moment they learn an attacker is exploiting one of their bugs in the wild.

This is the gap the CRA is about to expose across the European software market. The Cyber Resilience Act software regime — Regulation (EU) 2024/2847 — has been in force since December 2024; vulnerability reporting obligations apply from September 11, 2026, and the bulk of the requirements bite from December 11, 2027. Most vendors I talk to in 2026 are roughly where the GDPR market was in 2017: they have heard of it, they assume it will be inconvenient, and they have not yet mapped what it means for their actual codebase, release process, and support contracts.

This post is a working summary for product, engineering, and compliance leads at mid-size software vendors selling into the EU: what the CRA actually covers, what changes about how you build and ship, what the exemptions look like, and the steps that need to be in motion this year rather than the next.

What "Products With Digital Elements" Actually Means

The CRA applies to "products with digital elements" placed on the EU market. The definition is deliberately broad: any software or hardware product, and its remote data processing solutions, whose intended or reasonably foreseeable use includes a data connection. In practice this captures almost every commercial software product that ships outside a managed cloud — desktop applications, mobile apps, on-premise servers, IoT firmware, browser extensions, command-line tools, libraries distributed for production use, and the back-end components that those products call out to.

Pure SaaS sounds like it might fall outside the scope. But "remote data processing solutions" that are necessary for a product with digital elements to perform its function are explicitly inside the scope. If you sell an on-premise client that talks to your hosted API, both the client and the API behind it are regulated. If you sell pure SaaS without any client-side component, you are likely outside the CRA but still inside the parallel NIS2 regime if you qualify as an essential or important entity. The two regimes are designed to interlock — the legal reading for a hybrid product is often "both apply, to different parts of the system."

Three categories matter for compliance burden. The default category covers most products and allows self-assessment of conformity. Important products — Class I and Class II — include things like password managers, network management systems, identity management, browsers, operating systems, microcontrollers used in critical contexts, and other security-sensitive categories. They face stricter assessment paths, and Class II products require third-party conformity assessment by a notified body. Critical products — currently a narrow list including hardware devices with security boxes and smart meter gateways — face the strictest regime, including potential European cybersecurity certification under the EU Cybersecurity Act.

Most B2B software vendors I work with land in the default category, but the categorisation deserves a deliberate decision rather than an assumption. A team that sells "just an admin tool" may be surprised to find their functionality places them in the important class once the implementing acts are finalised.

Security-by-Default and Secure-by-Design as Hard Requirements

The CRA's substantive security requirements live in Annex I, and they read like a security baseline that most mature engineering teams already aspire to. The difference is that the CRA makes them legal obligations rather than aspirations.

Products must be delivered with a secure default configuration. Default credentials, permissive default permissions, and unauthenticated network services exposed out of the box become regulatory non-conformities, not just bad practice. The default state of a freshly installed product has to be the safe state.

Products must be designed to minimise the attack surface: only the necessary services, ports, accounts, and capabilities, with the rest disabled until explicitly enabled. This is the principle of least functionality applied at the product level.
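One way to make secure-by-default and least functionality testable is a startup check that refuses unsafe states. This is a minimal sketch, not the CRA's prescribed mechanism, and every name in it (`ProductConfig`, `startup_checks`, the example hash) is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical factory-default credential hashes; a real product would
# compare against the hash of whatever default it actually ships.
DEFAULT_PASSWORD_HASHES = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password"), illustration only
}


@dataclass
class ProductConfig:
    admin_password_hash: str
    # Least functionality: every optional service is off unless enabled.
    enabled_services: set = field(default_factory=set)  # e.g. {"api"}
    debug_port_open: bool = False


def startup_checks(cfg: ProductConfig) -> list[str]:
    """Return secure-by-default violations; an empty list means safe to start."""
    problems = []
    if cfg.admin_password_hash in DEFAULT_PASSWORD_HASHES:
        problems.append("default admin credential still in use")
    if cfg.debug_port_open:
        problems.append("debug port exposed in default configuration")
    return problems
```

The point of the pattern is that the safe state is the default state: a freshly installed instance with an unchanged credential or an exposed debug port fails the check and never reaches a running state.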

Products must support security updates throughout a defined support period — at least five years, or the expected product lifetime where that is shorter, with no upper cap where the product is genuinely supported for longer. Security updates must be free, must be installable through a clear and accessible mechanism, and, where technically possible, must be capable of automatic installation. A product that requires a paid contract to receive security patches is no longer a viable model under the CRA.

Products must implement protection against unauthorised access through appropriate authentication and authorisation, protect the confidentiality and integrity of stored and transmitted data, limit data collection to what is necessary, and log security-relevant events in a way that supports incident detection. None of this is exotic; what is new is that absence of these controls becomes a compliance gap to remediate rather than a finding to debate in a pen-test report.

A Software Bill of Materials in a machine-readable format is required as part of the technical documentation. The SBOM does not need to be public, but it has to exist, has to be current, and has to be available to market surveillance authorities on request. Vendors that today cannot produce a list of their direct and transitive dependencies on demand have a real engineering project ahead of them. CycloneDX and SPDX are the formats the market is converging on; either is acceptable.
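For a sense of scale, the required core of a CycloneDX JSON document is small; the component entry below is illustrative, and a real pipeline would generate the list from the lockfile rather than by hand:

```python
import json

# Minimal CycloneDX 1.5 SBOM skeleton. The single component is an
# illustrative example, not a real dependency inventory.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.31.0",
            # A package URL (purl) pins the exact upstream artefact.
            "purl": "pkg:pypi/requests@2.31.0",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

The hard part is not the format but keeping the component list complete and current across every release artefact, including transitive dependencies.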

This is the layer where existing codebases — particularly older PHP, Java, and .NET products that have accumulated dependencies over a decade — will need a serious pass. A legacy code optimization effort scoped against the CRA Annex I requirements is one of the more practical ways to surface and close these gaps before they become non-conformity findings.

CE Marking for Software: A New Conformity Assessment

The CRA brings software into the CE marking regime that has applied to physical products in the EU for decades. From December 11, 2027, products with digital elements placed on the EU market must carry the CE mark, and behind that mark must sit a Declaration of Conformity and a defined conformity assessment path.

For default-category products, this is internal: the vendor performs the assessment themselves, prepares technical documentation according to Annex VII, signs the Declaration of Conformity, and affixes the CE marking. The technical documentation must include the SBOM, a description of the design and development process, the cybersecurity risk assessment, evidence of the secure-by-default measures, the vulnerability handling process, and the documentation of conformity with each applicable Annex I requirement.

For important Class II products, an internal assessment is no longer sufficient — a notified body must be involved, either by reviewing the technical documentation or by certifying the quality system. Vendors in this category should be identifying notified bodies in 2026 because capacity is finite and the rush to certification will arrive in 2027.

The CE mark on software is a strategic decision as much as a compliance one. Vendors that obtain it early gain a real procurement advantage in the EU market — purchasers will increasingly demand the Declaration of Conformity as part of standard vendor onboarding, and a missing CE mark on a connected product post-2027 is grounds for market surveillance enforcement, not just lost deals.

The 24-Hour Vulnerability Reporting Clock

The CRA obligation that arrives soonest — September 11, 2026 — is also the one most likely to catch teams unprepared. From that date, manufacturers must report any actively exploited vulnerability in their product to ENISA and to the relevant national CSIRT within 24 hours of becoming aware, with a follow-up notification within 72 hours and a final report within 14 days. Severe incidents affecting the security of the product trigger a similar reporting cascade.

Twenty-four hours is short. It is shorter than the GDPR's 72-hour breach notification, and it triggers on awareness of exploitation, which means a single credible report from a customer or a researcher starts the clock. Engineering teams that today learn about exploitation through a Slack message that bounces around for two days before reaching a decision-maker will not meet this requirement.
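The cascade itself is trivial to encode, which is a useful first step for the audit trail. A sketch that derives the three deadlines from the timezone-aware moment of awareness (the function name is mine, not the regulation's):

```python
from datetime import datetime, timedelta, timezone


def cra_reporting_deadlines(aware_at: datetime) -> dict[str, datetime]:
    """Derive the CRA notification deadlines from the moment the
    manufacturer becomes aware of active exploitation."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "vulnerability_notification": aware_at + timedelta(hours=72),
        "final_report": aware_at + timedelta(days=14),
    }


# Example: awareness at 09:00 UTC on a Monday.
aware = datetime(2026, 9, 14, 9, 0, tzinfo=timezone.utc)
deadlines = cra_reporting_deadlines(aware)
```

Logging `aware_at` the instant a credible report lands, and the three derived deadlines next to it, is what turns "we think we told ENISA in time" into a defensible record.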

Three things have to be in place by mid-2026 for this to work. A coordinated vulnerability disclosure policy published at a discoverable location — the security.txt convention is the de facto standard — that researchers can find and use without going through a sales team. An internal triage and decision process that takes a vulnerability report from intake to a product security decision in hours, not weeks, with a named on-call rotation and an escalation path. And a reporting workflow that knows which national CSIRT to notify for which product, has the ENISA single reporting platform credentials ready, and produces a defensible audit trail.

Vendors should also use the next eighteen months to converge their internal vulnerability handling on the ISO/IEC 30111 and ISO/IEC 29147 patterns. Both are referenced in the CRA's harmonised standards work and align well with what mature security teams already do; small teams can meet them with disciplined process rather than expensive tooling.

Open-Source Software: Exemptions and the Steward Regime

The CRA's treatment of open-source software was the most contested part of the legislative process, and the final text reflects a careful compromise. Free and open-source software developed or supplied outside the course of a commercial activity is excluded from the CRA's product obligations. A maintainer publishing a library on GitHub for community use is not a manufacturer under the Act.

The line shifts when the software is monetised. Selling a commercial OSS distribution, offering paid support, distributing under a dual-licence model where one tier is paid, or shipping OSS as part of a commercial product all bring the relevant entity inside the regime — usually as a manufacturer for the commercial offering, with the upstream OSS still benefiting from the exemption.

A new role created by the CRA is the open-source software steward: legal entities that systematically support the development of free and open-source software intended for commercial use — foundations like Apache, Eclipse, or the Python Software Foundation. Stewards face a lighter regime than manufacturers, essentially a duty to apply a documented cybersecurity policy and cooperate with authorities.

For a vendor whose product depends on OSS, the practical implication is unchanged: you remain responsible for the security of the OSS you ship. The exemption protects upstream maintainers, not downstream commercial users. Your SBOM, vulnerability handling, and update obligations apply to your full product, including its OSS components.

What to Do in 2026

A workable sequence for a mid-size software vendor that has not started CRA preparation:

Map every product and every component you ship against the CRA scope. Decide for each whether it is a product with digital elements, whether it is default, important, or critical category, and whether your role is manufacturer, importer, or distributor. Document the reasoning. This is the foundation everything else hangs on.

For each in-scope product, run a gap analysis against Annex I and the vulnerability handling requirements of Annex II. The output is a remediation backlog that engineering can prioritise alongside its normal roadmap. A focused code quality audit is the most efficient way to surface the secure-by-default and least-functionality gaps in an existing codebase.

Build the SBOM pipeline. Generate, sign, and store an SBOM for every release artefact, in CycloneDX or SPDX. Wire it into CI so it cannot regress. Make sure your dependency update cadence keeps the SBOM meaningfully current.
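One way to stop the SBOM from regressing is a CI gate that fails the build when the document is missing or hollow. A hypothetical sketch, assuming CycloneDX JSON output from whatever generator the pipeline uses:

```python
import json

# Top-level fields a CycloneDX JSON document is expected to carry.
REQUIRED_TOP_LEVEL = {"bomFormat", "specVersion", "version", "components"}


def check_sbom(path: str) -> list[str]:
    """Return a list of problems with the SBOM at `path`;
    an empty list means the artefact passes the CI gate."""
    try:
        with open(path) as f:
            sbom = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        return [f"SBOM unreadable: {exc}"]
    problems = [f"missing field: {k}"
                for k in sorted(REQUIRED_TOP_LEVEL - sbom.keys())]
    if not sbom.get("components"):
        problems.append("component list is empty")
    return problems
```

Wired in as a post-build step that fails on a non-empty result, this makes "the release shipped without an SBOM" a broken build rather than a finding discovered by a market surveillance authority.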

Publish a coordinated vulnerability disclosure policy and a security.txt by Q3 2026 at the latest. Stand up the on-call and triage process behind it. Run a tabletop exercise against the 24-hour reporting requirement before September 2026 — discover the friction in a drill rather than during a real exploitation event.
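A minimal security.txt under RFC 9116, served at `/.well-known/security.txt`, needs only a handful of fields — `Contact` and `Expires` are mandatory. The addresses and URLs below are placeholders, not real endpoints:

```
Contact: mailto:security@example.com
Expires: 2027-06-30T00:00:00Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en, de
```

The `Expires` field is easy to forget: an expired security.txt reads to researchers as an unmaintained process, so put its renewal on the same calendar as certificate rotation.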

Define and publish your support window. Update contracts, end-of-life policies, and the customer-facing documentation that explains how customers receive security updates. A product that quietly stops receiving updates after a vague period is no longer compliant.

Begin the technical documentation pack required for the CE marking process. By the time December 2027 arrives, the documentation should be complete, current, and signed off, not assembled in the final quarter.

Closing

The Cyber Resilience Act is not a checklist regulation. It is a redesign of how software is built, shipped, and supported in the EU market, comparable in scope to what GDPR did for personal data processing. Vendors that treat it as a legal exercise to be solved in late 2027 will find themselves trying to retrofit secure-by-default design, SBOMs, and a 24-hour incident reporting capability into a release process that was never built for them. Vendors that start now have eighteen months to fold the obligations into normal engineering work and emerge with a product that is genuinely more defensible — and easier to sell into European procurement.

Wolf-Tech audits codebases against the Cyber Resilience Act Annex I requirements, helps EU and EU-adjacent vendors build SBOM pipelines and vulnerability disclosure processes, and supports legacy product modernisation aligned with CRA conformity. Contact us at hello@wolf-tech.io or visit wolf-tech.io for a free consultation.