The EU AI Act in Practice: A Compliance Playbook for SaaS Teams

#EU AI Act compliance
Sandor Farkas - Founder & Lead Developer at Wolf-Tech


A mid-size Berlin SaaS team shipped an AI summarization feature in early 2025 — a thin wrapper over GPT-4 that condenses customer support tickets. Useful, tightly scoped, not controversial. In spring 2026 their largest enterprise prospect sent a procurement questionnaire with a dedicated EU AI Act section. Twenty-three questions. The team answered maybe six confidently. The deal stalled for four months while legal, engineering, and product scrambled to build the documentation the regulation already required.

This is the pattern EU AI Act compliance is following in 2026. The regulation is not hypothetical anymore — the prohibitions took effect in February 2025, the general-purpose AI provisions became applicable in August 2025, the remaining obligations phase in through 2026 and 2027, and enterprise buyers are using AI Act readiness as a procurement filter. Teams that treated the Act as "EU bureaucracy for the big AI labs" are discovering that shipping a Claude or GPT integration in a B2B product puts them squarely inside its scope as a deployer — sometimes as a provider — with real obligations attached.

This post is a practical playbook for mid-size SaaS teams: what the Act actually requires of you, how risk classification works when you are integrating someone else's model, what documentation and transparency measures you need in place, and a compliance checklist you can run against your own product.

Who the Act Covers — and Why "We Just Call an API" Is Not a Defense

The most common misunderstanding is that the EU AI Act only applies to organisations training foundation models. It does not. The regulation defines several roles, and two of them apply to almost every SaaS team shipping AI features.

A provider is anyone who develops an AI system or has one developed under their name or trademark and places it on the EU market. If your product is marketed as an AI feature under your brand — even if under the hood it calls Anthropic, OpenAI, Mistral, or Google — you may be a provider of that AI system. The underlying model provider (Anthropic, OpenAI, etc.) is a separate GPAI-model provider with their own, different obligations.

A deployer is anyone using an AI system under their authority in the course of a professional activity. If you embed an AI feature inside your SaaS and your customers use it, you are a deployer. Your enterprise customers who use the AI feature in their workflows are often also deployers, which is why their procurement teams are asking you for transparency information they need to fulfil their own obligations.

The Act applies extraterritorially. A US-headquartered SaaS serving EU customers is within scope. A Swiss SaaS is within scope when selling into the EU. "We don't have an EU entity" is not an exemption.

You can also become the provider of a narrow, fine-tuned model built on top of a foundation model. If you fine-tune an open-source LLM on your customers' data and ship it as part of your product, you are the provider of that derived system, and the obligations shift accordingly.

The Risk Pyramid: Where Most SaaS AI Features Actually Land

EU AI Act compliance starts with classifying your AI system into one of four risk tiers. Getting this classification right is the single most load-bearing decision you make, because the obligations attached to each tier are dramatically different.

Unacceptable-risk systems are prohibited outright — social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplace and education contexts, and cognitive-behavioural manipulation that causes harm. If your product does any of this, the conversation is not about compliance, it is about redesigning the feature.

High-risk systems are allowed but heavily regulated. Annex III of the Act lists the categories: AI used in critical infrastructure, education admissions, employment decisions (including CV screening and performance evaluation), access to essential private and public services, law enforcement, migration, administration of justice, and democratic processes. A surprising number of B2B SaaS products touch these areas — any HR tech product doing candidate ranking, any EdTech product gating access to programmes, any FinTech product doing creditworthiness assessment. High-risk classification triggers the bulk of the Act's obligations: risk management systems, data governance, technical documentation, human oversight requirements, accuracy and cybersecurity measures, post-market monitoring, and registration in the EU database.

Limited-risk systems are the common middle tier, and this is where most general-purpose AI features in SaaS products land: chatbots, summarisers, content generators, meeting assistants, voice features. The primary obligation here is transparency — users must know they are interacting with an AI system, AI-generated content must be marked as such in machine-readable form, and deepfakes must be labelled.

Minimal-risk systems face no specific obligations under the Act, only voluntary codes of conduct. Spam filters, inventory optimisation, and similar classical ML applications typically sit here.

The error most teams make is reflexively classifying their product as minimal-risk because "we just added a chatbot." A chatbot is limited-risk by default. A chatbot that gates access to a benefits application is high-risk because it is making consequential decisions about access to services. The context of use, not the underlying technology, is what determines the classification.
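The context-of-use point is easy to operationalise: record the tier together with the reasoning, so the classification survives team turnover and procurement review. A minimal sketch — the feature names, tiers, and reasoning strings are illustrative examples, not a legal determination:

```python
from dataclasses import dataclass

# The Act's four risk tiers. The records below are a documentation aid,
# not legal advice; classification always needs human (and often counsel) review.
TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class FeatureClassification:
    feature: str
    tier: str
    reasoning: str  # the context of use, not the underlying technology

    def __post_init__(self) -> None:
        if self.tier not in TIERS:
            raise ValueError(f"unknown tier: {self.tier}")

# Same underlying technology (a chatbot), different tiers: context decides.
records = [
    FeatureClassification(
        "support-ticket summariser", "limited",
        "Generative output shown to agents; transparency obligations apply."),
    FeatureClassification(
        "benefits-application intake bot", "high",
        "Gates access to essential services (Annex III category)."),
]

for r in records:
    print(f"{r.feature}: {r.tier} -- {r.reasoning}")
```

Keeping these records in version control alongside the features gives you an answer ready when the procurement questionnaire arrives.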

Transparency Obligations: The Baseline for Almost Every SaaS

For the limited-risk category — where most SaaS AI features actually sit — the core requirement is transparency toward end users. In practice this means four concrete things.

First, users interacting with an AI system must be informed they are interacting with one, unless this is already obvious from the context. A chat widget clearly labelled "AI Assistant" satisfies this. A humanlike voice agent that passes itself off as a human does not.

Second, AI-generated or AI-manipulated content — text, images, audio, or video — must be marked in a machine-readable format as artificially generated. This is the provision most teams miss. It is not satisfied by a visible disclaimer on the UI; it requires metadata, watermarks, or provenance signals that downstream systems can detect. C2PA and similar content provenance standards are the emerging technical answer.
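While C2PA tooling matures, a stopgap is to attach a machine-readable marker to every artefact your system emits — as a sidecar record or embedded metadata. A stdlib-only sketch; the field names here are our own invention, not a recognised standard, and a production system should emit C2PA manifests or a comparable provenance format instead:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model: str) -> dict:
    """Build a machine-readable marker for an AI-generated artefact.

    Illustrative field names only; prefer C2PA or another recognised
    provenance format in production.
    """
    return {
        "ai_generated": True,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
    }

text = b"Summary: the customer reports intermittent login failures."
marker = provenance_record(text, model="gpt-4")

# Ship the marker as a sidecar file next to the artefact, or embed it
# where downstream systems can parse it, e.g. an HTML meta tag:
meta_tag = f"<meta name=\"ai-provenance\" content='{json.dumps(marker)}'>"
print(meta_tag)
```

The content hash ties the marker to a specific artefact, so the claim survives the artefact being copied out of your system.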

Third, deepfakes — AI-generated or manipulated media resembling real people, objects, or events — must be clearly labelled as such. There are narrow exceptions for artistic, satirical, or clearly creative works.

Fourth, emotion recognition and biometric categorisation systems must inform natural persons exposed to their operation.

For a typical B2B SaaS shipping a GPT-backed feature, the minimum-viable transparency set is: a visible "AI" indicator in the UI near generated output, metadata on any generated artefacts that leave your system (documents, images, emails), and a page in your documentation explaining what the AI feature does and which model it uses.

Documentation: The Provider's Obligations You Probably Inherit

Even if you classify your feature as limited-risk, once you position yourself in the market as providing an AI-powered product, enterprise buyers will ask for technical documentation that effectively mirrors the provider obligations from the Act. You will want this documentation anyway — it is what unblocks enterprise sales.

The documentation set every AI-enabled SaaS should have ready by the end of 2026:

A system card for each AI feature: what the feature does, which model underpins it, the intended use cases, known limitations and failure modes, the data the model is given at inference time, and the data it is not given. One page per feature is often sufficient. This is what customers attach to their own AI Act records when they deploy your product.
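A system card needs no special tooling. One way to keep it both human-readable and mechanically checkable is a plain data structure in the repository, validated in CI so the card cannot silently go stale. A hypothetical card — every value here is an illustrative example:

```python
import json

# Hypothetical system card for a ticket-summarisation feature.
# Keep the real card in version control next to the feature's code
# and review it on every model or prompt change.
SYSTEM_CARD = {
    "feature": "ticket-summariser",
    "version": "1.3.0",
    "model": {"provider": "OpenAI", "name": "gpt-4", "access": "enterprise API"},
    "intended_use": "Condense customer support tickets for human agents",
    "out_of_scope": ["legal advice", "automated replies sent to customers"],
    "known_limitations": ["may omit minority viewpoints in long threads"],
    "data_sent_at_inference": ["ticket body", "ticket subject"],
    "data_not_sent": ["customer account identifiers", "payment data"],
}

# A CI check can refuse to ship a feature whose card is incomplete.
REQUIRED = {"feature", "version", "model", "intended_use",
            "known_limitations", "data_sent_at_inference"}

missing = REQUIRED - SYSTEM_CARD.keys()
assert not missing, f"system card incomplete: {missing}"
print(json.dumps(SYSTEM_CARD, indent=2))
```

The same record can be rendered to the one-page document your customers attach to their own AI Act files.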

A data processing and retention statement for AI interactions: whether user inputs are sent to a third-party model provider, whether they are retained by that provider, whether they are used for training (usually not, under enterprise API contracts, but verify), and your retention period on your own servers.

A human oversight description for any feature where the AI output affects a decision: how users can challenge, override, or correct AI output, and where the human decision point is in the workflow.

A risk and incident log documenting known issues with the AI system, mitigations applied, and incidents reported by users. For most SaaS this starts as an internal document in the same register as security incidents and evolves into something more formal as the product matures.

A model and vendor chain document: which GPAI model is used, from which provider, under which contractual terms, and what you rely on from the provider's own AI Act compliance (their model card, safety documentation, and copyright policy).

This documentation is not an afterthought. It is what your customers' procurement and legal teams will actually read. Treat it as a product artefact: version it, keep it current, and assign ownership. A code quality audit of an AI-integrated codebase should surface whether this documentation reflects what the code actually does — the two drifting apart is one of the most common and most dangerous gaps we see.

GPAI Rules: What Changes If You Fine-Tune

The Act's rules for general-purpose AI models (GPAI) apply to organisations that train or substantially modify foundation models. For most SaaS teams this is out of scope — you consume a GPAI model from a provider who handles those obligations.

The line gets blurry if you fine-tune an open-source LLM on your own data and ship the resulting model as part of your product. You may then be a modifier with downstream GPAI obligations: publishing a summary of training data, documenting capabilities and limitations, and implementing a copyright compliance policy. If fine-tuning meets the Act's thresholds for systemic-risk models (a very high computational bar, unlikely for mid-size SaaS), additional obligations kick in.

Even if you are not a GPAI provider, you need to know the GPAI compliance posture of the model provider you depend on. Anthropic, OpenAI, and Google publish AI Act-oriented transparency documentation; your contract should reference it. This is what separates "we use AI" from "we use AI in a way that holds up to a compliance audit."

A Practical Compliance Checklist

A concrete checklist a mid-size SaaS team can run against its AI features in an afternoon:

Classify every AI feature against the risk pyramid, and document the classification and the reasoning. If any feature might be high-risk (HR, credit, education, health, access-to-services), engage EU counsel before the next release.

For every limited-risk feature, confirm there is a visible AI disclosure, machine-readable marking on generated content, and deepfake labelling where applicable.

Ensure every AI feature has a one-page system card kept in version control alongside the code.

Verify your data flow: what user data leaves your infrastructure, which third-party model provider receives it, under what contractual terms, and whether it is used for training. Update your DPA and privacy policy accordingly.

Document the human oversight mechanism for each AI feature. "There is no human oversight" is an answer — decide whether that is the right answer for the feature's actual impact.

Keep a running register of AI incidents and model-provider advisories. Treat a model provider's safety update the same way you treat a CVE in a critical dependency.

For EU customers, make your AI system cards and data handling statements available on request. Many enterprise buyers will simply ask for them rather than run a custom questionnaire.

Align your legacy code optimization and modernisation work with these documentation requirements. AI features retrofitted into older codebases without this discipline are precisely where compliance gaps compound.
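The checklist above can be run mechanically against a feature inventory. A sketch, assuming each feature is a record whose fields mirror the checklist items — the field names are our own and should be adapted to your actual records:

```python
# Hypothetical feature inventory; one record per AI feature.
FEATURES = [
    {"name": "ticket-summariser", "risk_tier": "limited",
     "ai_disclosure": True, "machine_readable_marking": True,
     "system_card": True, "oversight_documented": True},
    {"name": "cv-ranker", "risk_tier": "high",
     "ai_disclosure": True, "machine_readable_marking": False,
     "system_card": False, "oversight_documented": True},
]

CHECKS = ("ai_disclosure", "machine_readable_marking",
          "system_card", "oversight_documented")

def gaps(feature: dict) -> list[str]:
    """Return the checklist items a feature is missing."""
    found = [c for c in CHECKS if not feature.get(c)]
    if feature["risk_tier"] == "high":
        # High-risk features carry far heavier obligations than this script checks.
        found.append("engage EU counsel before next release")
    return found

for f in FEATURES:
    g = gaps(f)
    print(f["name"], "->", "OK" if not g else g)
```

Running this in CI turns the afternoon exercise into a standing guardrail: a feature cannot ship with a missing system card or unmarked generated content without the build saying so.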

Closing

EU AI Act compliance is not a one-off checklist exercise. The regulation is phased through 2026 and 2027, and enterprise buyer expectations are tightening faster than the legal deadlines. Teams that treat AI features as shipped products — with system cards, transparency measures, human oversight, and a documentation spine — satisfy the law and move faster in enterprise sales cycles. The failure mode is the opposite: shipping AI features as vibe-coded prototypes, then scrambling to retrofit compliance when a deal stalls or a regulator gets in touch.

Wolf-Tech audits AI-integrated SaaS products for EU AI Act readiness and helps Berlin and EU-based teams design documentation, oversight, and transparency layers before the next enterprise deal asks for them. Contact us at hello@wolf-tech.io or visit wolf-tech.io for a free consultation.