Supply Chain Security for Mid-Size SaaS: SBOM, Dependency Scanning, and NIS2 Readiness

Sandor Farkas - Founder & Lead Developer at Wolf-Tech

A SaaS company serving European logistics software discovered in early 2026 that a popular open-source parsing library had shipped a malicious release for eleven days before the maintainer noticed. The library had been in their dependency graph for three years and was used by four internal services. When their enterprise customer's security team asked for an SBOM and a written statement confirming the affected version was not in production, the CTO had neither. Reconstructing the dependency graph from package-lock files, establishing which deployments ran which version, and drafting the customer communication took four engineers two and a half days. The fix itself took forty minutes.

Supply chain security for SaaS is no longer an edge case. The XZ Utils backdoor in 2024, the SolarWinds aftermath still echoing through procurement conversations, and now NIS2's explicit supply-chain obligations under Article 21 have put dependency governance on the same priority list as authentication and secrets management. This post covers what a practical supply chain security programme looks like for a mid-size SaaS team: SBOM generation that runs in CI without slowing your pipeline, dependency scanning with a triage process that actually gets acted on, provenance attestation for your own build artefacts, and the incident response steps when a dependency turns out to be compromised.

What Supply Chain Security Actually Means for a SaaS Team

The term "software supply chain" covers anything that flows into your production build that you did not write yourself: open-source libraries, base container images, build tools, internal packages from other teams, third-party APIs whose SDKs you bundle, and infrastructure-as-code modules from public registries. An attack on any of these can put malicious code into your product without any of your own engineers ever writing a vulnerable line themselves.

For most mid-size SaaS teams, the realistic risk surface is narrower than the conference-talk version: it is primarily the open-source dependency graph, the container images you pull at build time, and any internal shared libraries without a formal release process. That narrower scope is useful — it means you can build a defensible programme without a dedicated security team, if you wire the right tools into the workflow your engineers already use.

The NIS2 angle adds regulatory pressure. Article 21 of the directive requires in-scope organisations to address supply chain security as a mandatory cybersecurity risk-management measure, including assessing vulnerabilities in direct suppliers and their secure development practices. If your SaaS serves European customers in covered sectors — logistics, manufacturing, healthcare IT, financial infrastructure — your procurement conversations already include supply chain questions. Having an SBOM, a current scan report, and a documented response process is increasingly a contract requirement, not a nice-to-have.

Generating SBOMs in CI Without Making It Somebody's Weekend Project

A Software Bill of Materials is a structured inventory of the components in a software build: libraries, versions, licences, and increasingly dependency relationships and provenance metadata. The two dominant formats are CycloneDX and SPDX — both are machine-readable JSON or XML, both are accepted by the tooling enterprise customers use to ingest supplier SBOMs.

The practical approach for a PHP/Symfony or Node/Next.js stack is to generate the SBOM as part of the build pipeline rather than as a manual step. For PHP projects, cyclonedx-php-composer generates a CycloneDX SBOM from composer.lock in under two seconds. For JavaScript/TypeScript, @cyclonedx/cyclonedx-npm does the same from package-lock.json (Yarn projects are covered by the companion @cyclonedx/yarn-plugin-cyclonedx). The output should be committed to a known location in the build artefact — a container layer, an S3 path keyed to the image tag, or a GitHub Release asset — so that when a customer asks for the SBOM for a specific version, you can retrieve it without regenerating it from current code.
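A minimal sketch of the two generator invocations (command and flag names vary slightly between plugin versions, so check the installed version's help):

```bash
# PHP: after `composer require --dev cyclonedx/cyclonedx-php-composer`,
# the plugin registers a composer command that reads composer.lock.
composer CycloneDX:make-sbom --output-format=JSON --output-file=sbom.json

# Node: the npm wrapper reads package-lock.json via the npm CLI.
npx @cyclonedx/cyclonedx-npm --output-format JSON --output-file sbom.json
```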

The step in a GitHub Actions workflow looks approximately like this: after the test suite passes and before the image is pushed, run the SBOM generator, attach the output to the release or upload it to a known storage path, then continue with the image push. Total added time in our Symfony projects is typically under thirty seconds. The SBOM for the same version as the deployed image is then retrievable by tag.
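As a sketch, the storage half of that step might run the following, with the bucket name, service name, and tag variable as placeholders for your own pipeline:

```bash
# Runs after the test suite passes and before the image push; assumes the
# generation step above has written sbom.json into the workspace.
IMAGE_TAG="${GITHUB_SHA}"   # or your semantic release tag
aws s3 cp sbom.json "s3://your-sbom-archive/my-service/${IMAGE_TAG}/sbom.json"
```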

One decision you need to make upfront: scope. A full transitive SBOM for a Symfony application can include several hundred entries, many of which are devDependencies that never reach production. Filtering to production dependencies only produces a smaller, more auditable document. Most tooling supports this with a single flag. Customers doing NIS2-driven procurement generally want the production-scope SBOM; regulators doing audits may ask for the full one. Generating both and storing both is the cleanest approach.
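With the CycloneDX npm tool, for example, the scope split is the --omit flag (the composer plugin has an equivalent option), so generating both documents is two invocations:

```bash
# Production-scope SBOM: devDependencies filtered out.
npx @cyclonedx/cyclonedx-npm --omit dev --output-format JSON --output-file sbom-prod.json

# Full transitive SBOM, including devDependencies, for audit requests.
npx @cyclonedx/cyclonedx-npm --output-format JSON --output-file sbom-full.json
```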

Dependency Scanning That Generates Actions, Not Backlog Noise

The problem with dependency scanning in most teams is not the scanner — it is the triage process. Running Dependabot, Snyk, or Trivy without a policy for acting on their output produces a growing list of CVEs that gradually stops being read. Within three months the team has tuned out the alerts and the scanner has become compliance theatre.

A usable dependency scanning programme has four components: a scanner integrated into CI that fails builds on critical vulnerabilities in production dependencies; a triage policy defining which severity levels require a fix by which deadline; a regular review cadence (not a queue that grows indefinitely); and a suppression process for false positives and irrelevant findings that is documented and auditable.

For critical CVEs in production dependencies, the policy should be straightforward: the build fails and the release is blocked until the dependency is updated or a documented exception is approved. For high and medium CVEs, an SLA-based triage is more practical — high within two weeks, medium within thirty days, low tracked but not time-bounded. The exact thresholds matter less than the fact that they are written down, enforced, and produce evidence for an audit.
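One way to wire the fail-on-critical rule into CI is Trivy's exit-code behaviour; a sketch, assuming Trivy runs against the repository root where the lockfiles live:

```bash
# Fail the pipeline if any fixable CRITICAL vulnerability is found in the
# dependency lockfiles (composer.lock, package-lock.json).
trivy fs --scanners vuln --severity CRITICAL --ignore-unfixed --exit-code 1 .

# Report HIGH and MEDIUM findings without blocking the build; these enter
# the SLA-based triage queue instead.
trivy fs --scanners vuln --severity HIGH,MEDIUM --exit-code 0 .
```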

The triage step is where most teams underinvest. A CVE in a library is often not exploitable in your specific usage — the vulnerable code path may only trigger on input patterns your application never generates, or the library may be used only at build time. Marking a CVE as not exploitable with a documented rationale is a legitimate outcome of triage and produces a cleaner audit trail than silently ignoring it. VEX (Vulnerability Exploitability eXchange) documents, which attach machine-readable exploitability statements to the CVEs in a CycloneDX SBOM, are gaining traction in enterprise procurement as the preferred way to communicate this.
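For illustration, a minimal standalone CycloneDX VEX document can be as small as the sketch below; the CVE ID, package URL, and detail text are placeholders, while the analysis state and justification values come from the CycloneDX specification:

```bash
# Write a minimal VEX document; ship it alongside the SBOM it refers to.
cat <<'EOF' > vex.json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "vulnerabilities": [
    {
      "id": "CVE-2026-00000",
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable parsing path is only reachable via an API our application never calls."
      },
      "affects": [
        { "ref": "pkg:composer/vendor/library@2.4.1" }
      ]
    }
  ]
}
EOF
```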

A code quality audit often reveals that teams have dependency scanning configured but no triage policy attached to it — the scanner output goes to a dashboard, but the dashboard is not connected to a remediation workflow. Reconnecting those two things is a one-day task once the policy is agreed.

Provenance Attestation and the SLSA Framework

An SBOM tells you what is in a build. Provenance attestation tells you how that build was produced — which source commit, which build system, which pipeline, with what inputs. The SLSA (Supply-chain Levels for Software Artifacts) framework defines increasingly rigorous levels of build integrity: in the current v1.0 specification, Build L1 is a documented, scripted build, Build L2 adds signed provenance generated by a hosted build platform, and Build L3 requires a hardened, tamper-resistant build platform.

For a mid-size SaaS team, SLSA Build L2 is the practical target: a build triggered from version control, using a hosted build service (GitHub Actions, GitLab CI), with a generated and signed provenance document. This rules out the most common supply chain attack vectors — a developer's compromised laptop injecting malicious code into a build, or an unofficial build process that bypasses security checks — without requiring build infrastructure that only makes sense at hyperscaler scale.

The tooling is more accessible than the SLSA specification makes it sound. For GitHub Actions, the slsa-github-generator project provides reusable workflows that produce a provenance attestation, signed via Sigstore and recorded in the Rekor transparency log. For container images, cosign from the Sigstore project signs the image and attaches attestations that consumers can verify before pulling. Both tools are free, open-source, and add a few seconds to a typical CI run.
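On the signing side, a hedged sketch of the cosign calls a CI job might run; keyless signing needs an OIDC token (on GitHub Actions, `permissions: id-token: write`), and the image reference is a placeholder:

```bash
# Sign the pushed image by digest, keyless, via Sigstore.
IMAGE="registry.example.com/my-service@sha256:<digest>"   # placeholder reference
cosign sign --yes "${IMAGE}"

# Attach a SLSA provenance document (e.g. produced by slsa-github-generator)
# to the image as a verifiable attestation.
cosign attest --yes --type slsaprovenance --predicate provenance.json "${IMAGE}"
```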

The provenance document then travels with the artefact: a container image in your registry has an associated cosign signature and attestation. An enterprise customer who wants to verify that your published image actually came from your declared build pipeline and source commit can do so without trusting your word. This is increasingly what "supply chain assurance" means in NIS2-driven supplier questionnaires.
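Verification on the consumer side is a single command per artefact; the identity and issuer values below are illustrative:

```bash
# Confirm the image signature was produced by the expected CI identity.
cosign verify \
  --certificate-identity-regexp 'https://github.com/your-org/your-repo/.*' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  registry.example.com/my-service:1.4.2

# Confirm the attached SLSA provenance the same way.
cosign verify-attestation --type slsaprovenance \
  --certificate-identity-regexp 'https://github.com/your-org/your-repo/.*' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  registry.example.com/my-service:1.4.2
```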

When a Dependency Is Compromised: The First Ninety Minutes

Supply chain incidents have a different shape from infrastructure incidents. The typical scenario is: a CVE is published or a malicious release is confirmed for a library in your dependency graph, you do not immediately know which services are affected or which versions are deployed, and your enterprise customers start asking questions before you have the answers.

The first task is scope determination. If you have SBOMs stored for each deployed version (as described above), this is a straightforward query: which services include this library, and which version is running in each environment? If you do not have SBOMs, you are reconstructing this manually from deployment logs, package files pulled out of running containers, or rebuilt dependency trees for each deployed revision — a process that can take hours. This is the operational argument for generating SBOMs routinely rather than on demand: you want the inventory to exist before you need it.
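With the archive layout from the CI section, that query can be a short sweep over the stored documents; a sketch assuming jq is installed and with the package name as a placeholder:

```bash
# Pull the SBOM archive locally, then list every stored SBOM that
# contains the affected package, with the version it pins.
LIB="vulnerable-package"   # placeholder package name
aws s3 cp --recursive "s3://your-sbom-archive" ./sboms

find ./sboms -name 'sbom*.json' | while read -r f; do
  jq -r --arg lib "$LIB" \
    '.components[]? | select(.name == $lib) | "\(input_filename): \($lib)@\(.version)"' "$f"
done
```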

The second task is assessing exploitability. Not every vulnerability in a dependency translates to exploitability in your product. Check whether the affected code path is actually called in your usage, whether the library is a production dependency or a build/dev dependency, and whether any mitigating controls (input validation, restricted network access, WAF rules) reduce the effective exposure. Document this assessment even if the conclusion is "not exploitable in our configuration" — the documentation is what you send to customers, not just the conclusion.
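Two quick checks answer the production-versus-dev question directly; composer why is a built-in alias for composer depends, and npm explain is available from npm 7:

```bash
# PHP: show the dependency chain that pulls in the affected library.
composer why vendor/library

# Node: show every dependency path to the package in the installed tree.
npm explain vulnerable-package
```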

The third task is communication. Enterprise customers under NIS2 may have their own reporting obligations if a supplier incident affects their operations. Proactive communication — acknowledging you are aware of the CVE, stating your assessed exposure, and committing to a timeline for remediation or a final determination — is significantly better for the relationship than waiting to be asked. A communication template prepared in advance for "CVE in a dependency" is worth having. The application development teams we work with that have gone through one supply chain incident invariably produce a runbook after it; it is more useful to produce one before.

The fourth task, once scope and exploitability are established, is remediation: update the dependency, verify the fix, and redeploy. If the library has no fixed version yet, assess whether it can be temporarily removed, replaced with an alternative, or mitigated at the application layer while a fix is awaited. Document the decision and the timeline.

Under NIS2, if the incident meets the threshold of a "significant" incident — one causing or capable of causing severe operational disruption or affecting other entities — the 24-hour early warning to the national CSIRT applies. Having your supply chain incident integrated into your broader incident response runbook, with a clear trigger for the NIS2 reporting assessment, prevents the regulatory step from being missed under pressure.

Assembling the Evidence Pack

Enterprise customers doing NIS2-driven supplier due diligence increasingly ask for a standard set of artefacts. Having these ready — rather than regenerating them for each questionnaire — saves significant engineering time and shortens sales cycles. The core pack for a mid-size SaaS in 2026 is: a current production-scope SBOM (CycloneDX JSON preferred); a VEX document or equivalent suppression register for CVEs marked not exploitable; a scan report from your dependency scanner showing the current state and triage decisions; a short description of your SBOM generation process and where provenance attestations are published; and a summary of your response process for supply chain incidents.

This pack, updated on a regular schedule and ready to share on request, is what converts a procurement security questionnaire from a three-week process into a two-day response. It also forms the core of the supply chain security documentation that a NIS2 audit would expect to see.

Wolf-Tech works with Berlin-based and EU SaaS teams to design and implement supply chain security programmes — SBOM pipelines, triage policies, provenance attestation, and incident runbooks — that are proportionate to team size and defensible against NIS2 scrutiny. If your procurement conversations are already raising supply chain questions, or you want to get ahead of the audit rather than react to it, reach out at hello@wolf-tech.io or visit wolf-tech.io to set up a conversation.