Services

Six
Practice
Areas.

We work exclusively in EU AI governance. Each engagement is practitioner-led, scoped to your specific regulatory exposure, and designed to produce documentation and frameworks that hold up under audit — not slide decks that gather dust.

01 / 06

EU AI Act
Readiness
Assessment

4–6 weeks

Know exactly where you stand before the regulator asks the same question.

Most organizations deploying AI in 2026 have no accurate picture of their regulatory exposure. They know they "use AI" — but they haven't mapped which systems trigger EU AI Act obligations, which fall under Annex III high-risk categories, and which are already prohibited under Article 5. This assessment closes that gap systematically, in writing, before enforcement begins.

Regulatory context

Full EU AI Act enforcement for high-risk systems begins August 2, 2026. Organizations that cannot demonstrate a documented risk classification process face penalties of up to €15M or 3% of global turnover for non-compliance with Annex III obligations — and up to €35M or 7% for prohibited practices.

AI System Inventory

Structured catalogue of every AI system your organization uses, deploys, or provides — including third-party tools, embedded models, and automated decision systems your teams may not recognize as AI.

Risk Classification

Classification of each system against the EU AI Act risk pyramid: prohibited (Article 5), high-risk (Annex III), limited-risk (transparency obligations), and minimal-risk. Includes GPAI model identification.

Obligations Mapping

For each high-risk system: a precise mapping of applicable obligations — technical documentation, conformity assessment, registration, human oversight, post-market monitoring, and incident reporting.

Gap Analysis Report

A frank written assessment of where you currently stand against each obligation. Gaps are prioritised by regulatory urgency, enforcement likelihood, and remediation effort.

GDPR / AI Act Intersection

Identification of where AI Act obligations overlap with existing GDPR requirements — particularly DPIAs, lawful basis, and automated decision-making under Article 22 — to avoid double compliance work.

Remediation Roadmap

A sequenced action plan with owner assignments, effort estimates, and regulatory deadlines. Designed to be handed directly to legal, technical, and compliance teams for execution.

1

Kick-off & scoping

Interviews with legal, IT, product, and operations leads to map the AI landscape, supported by an intake questionnaire designed to surface systems that standard IT audits miss — including shadow AI tools and vendor-embedded models.

2

Documentation review

Review of existing technical documentation, vendor contracts, DPIAs, privacy notices, and any prior compliance assessments. We identify what already exists and what is missing under the AI Act's documentation requirements.

3

Classification workshop

A facilitated session with your legal and technical teams to validate risk classifications, resolve ambiguous cases, and build internal ownership of the risk register.

4

Report & roadmap delivery

Written gap analysis report and remediation roadmap, presented to senior stakeholders. Includes an executive summary suitable for board or audit committee reporting.

Who this is for

  • Organizations deploying AI in HR, credit, biometrics, or critical infrastructure
  • Legal and compliance teams facing board-level AI governance questions
  • Technology providers selling AI-enabled products into the EU market
  • Companies preparing for ISO 42001 certification
  • Organizations that have received AEPD or CNIL enquiries

What you receive

  • Written AI system inventory and risk classification register
  • Obligations mapping per system (Annex III, Article 5, transparency)
  • Prioritised gap analysis with regulatory urgency scoring
  • Sequenced remediation roadmap with owner assignments
  • Executive summary for board / audit committee
  • 30-day post-delivery Q&A support

Start with a readiness assessment

Engagements typically run 4–6 weeks. We can begin within 2 weeks of contract signature.

02 / 06

ISO 42001
Implementation
& Audit Prep

8–12 weeks

The most defensible compliance anchor you can build before August 2026.

ISO/IEC 42001 is the international standard for AI Management Systems — a structured framework for governing how your organization develops, deploys, and monitors AI. Unlike one-time compliance assessments, an AIMS creates a living system that demonstrates ongoing governance to regulators, clients, and auditors. It is also the closest thing available to a safe harbor under the EU AI Act, which explicitly references management system approaches in its conformity assessment pathways.

Why ISO 42001 matters now

The EU AI Act does not prescribe a single compliance method for high-risk systems — but it requires organizations to demonstrate systematic risk management, quality management, and post-market monitoring. ISO 42001 maps directly to these requirements. Organizations that arrive at a regulatory audit with a certified AIMS are in a fundamentally stronger position than those with ad hoc documentation.

Context & Scope Definition

Clause 4 analysis: defining your organization's AI context, identifying internal and external issues, mapping stakeholder expectations, and establishing the precise scope of the AIMS — what is in, what is out, and why.

AI Policy & Objectives

Drafting the AI policy (Clause 5.2), establishing measurable AI objectives (Clause 6.2), and ensuring leadership commitment is documented in a form that satisfies both ISO auditors and regulatory examiners.

Risk & Impact Assessment

Clause 6.1 risk methodology design — including AI-specific risk identification, the AI system impact assessment covering effects on individuals and fundamental rights (Clause 6.1.4), and a documented treatment plan with residual risk acceptance criteria.

Annex A Controls Implementation

Gap assessment and implementation of the 38 controls in Annex A — covering responsible AI practices, data governance, system lifecycle, transparency, and human oversight. Includes Statement of Applicability.

AIMS Documentation Stack

Complete document set: AIMS manual, procedures, work instructions, forms, and records — structured to satisfy ISO 42001 Clause 7.5 and ready for external audit without remediation.

Internal Audit & Management Review

Design of the internal audit programme (Clause 9.2), audit checklists, nonconformity management process, and management review template (Clause 9.3) — the operational backbone of a functioning AIMS.

1

Gap assessment against ISO 42001

Clause-by-clause assessment of your current state against the standard. Produces a prioritised implementation plan with effort estimates for each requirement.

2

AIMS design & documentation

We draft all required policies, procedures, and records. Documents are written for your organization — not generic templates with your logo. Each document references the specific clause it satisfies.

3

Controls implementation support

Working with your technical and operational teams to implement Annex A controls in practice — not just on paper. Includes evidence collection guidance for each control.

4

Pre-certification audit simulation

A full internal audit simulating the Stage 2 certification audit. Identifies nonconformities before your external auditor does, with corrective action plans for each finding.

5

Certification body liaison

We support Stage 1 and Stage 2 audits alongside your team — available to respond to auditor queries, clarify documentation, and manage the corrective action process through to certification.

Who this is for

  • Organizations seeking ISO 42001 certification as a competitive differentiator
  • Companies preparing for EU AI Act conformity assessment
  • Organizations whose clients or procurement processes require AI governance certification
  • EcoVadis Platinum-rated companies extending their compliance posture to AI

What you receive

  • Complete AIMS documentation set (policy, procedures, records)
  • Statement of Applicability for all 38 Annex A controls
  • Risk register and impact assessment methodology
  • Internal audit programme and checklists
  • Pre-certification audit report and corrective action plan
  • Certification body liaison support

Discuss ISO 42001 implementation

We work with your preferred certification body or can recommend PECB, BSI, or TÜV auditors active in Spain and France.

03 / 06

High-Risk
AI Governance
Advisory

Retainer

Ongoing governance for the systems regulators are watching most closely.

Annex III of the EU AI Act identifies eight categories of high-risk AI — systems where the potential for harm to fundamental rights is significant enough to require full regulatory compliance. These include biometric identification, emotion recognition, HR and recruitment AI, credit scoring, critical infrastructure management, law enforcement tools, migration and border control, and AI used in education and the administration of justice. If your organization operates in any of these areas, a one-time assessment is not enough. You need continuous governance.

Annex III high-risk categories — are you exposed?

Remote biometric identification · Biometric categorisation · Emotion recognition · CV screening and candidate ranking · Creditworthiness assessment · Benefits entitlement decisions · Critical infrastructure management · Law enforcement risk assessment · Any of these in your stack triggers full Annex III obligations from August 2026.

Technical Documentation

Drafting and maintaining Article 11 technical documentation for each high-risk system — the detailed record regulators and notified bodies examine first. Includes intended purpose, performance metrics, training data description, and architecture documentation.

Human Oversight Design

Designing and documenting the human oversight measures required under Article 14 — identifying who has oversight responsibility, what they can see, what interventions they can make, and how override decisions are recorded.

DPIA Integration

Coordinating the AI Act's fundamental rights impact assessment with your existing GDPR DPIA process. Eliminates duplication and ensures both assessments satisfy their respective regulators.

Post-Market Monitoring

Design of the post-market monitoring system required under Article 72 — defining what data is collected, at what frequency, who reviews it, and what thresholds trigger escalation or incident reporting.

Incident Reporting Protocols

Procedures for identifying serious incidents and malfunctions under Article 73, documenting the reporting chain, and managing notifications to national market surveillance authorities within required timeframes.

Regulatory Change Monitoring

Monthly briefing on regulatory developments affecting your specific systems — AEPD and CNIL enforcement actions, EU AI Office guidance updates, EDPB opinions, and sector-specific delegated acts as they are published.

Typical retainer scope

  • Monthly governance review call (90 minutes)
  • Quarterly technical documentation update
  • Ad hoc regulatory query response (within 48 hours)
  • Annual conformity assessment support
  • Regulatory alert service for your specific system categories

Specialist areas

  • Facial recognition and biometric AI systems
  • Emotion AI and affective computing
  • HR, recruitment, and workforce analytics AI
  • Credit and insurance AI systems
  • AI in healthcare and clinical decision support

Discuss a governance retainer

Retainers are structured around your specific systems and risk profile. Minimum 6-month engagement.

04 / 06

AEPD &
CNIL
Navigation

Project-based

The same regulation. Two very different enforcement realities.

France and Spain are home to two of the EU's most active data protection and AI enforcement authorities. The CNIL issued €486M in fines in 2025 — a 9× increase year on year — and has formally positioned itself as France's AI Act market surveillance authority. Spain's AEPD fined La Liga €1M and Osasuna €200K in a single month for unauthorized biometric AI systems, and issued landmark guidance on agentic AI in early 2026. If you operate in either market, understanding how each authority actually behaves — not just what the regulation says — is a material operational advantage.

The enforcement gap you need to understand

CNIL has entered a high-deterrence enforcement phase — fewer cases, larger fines, targeting tech companies and high-volume data processors. AEPD enforcement is real and active, but authority is currently fragmented across AEPD, AESIA, the Bank of Spain, and sector regulators. Knowing which regulator applies to your specific system is as important as knowing what the rules say.

Jurisdiction Mapping

Identifying which authority has jurisdiction over each of your AI systems in Spain and France — including the fragmented AEPD/AESIA/sector regulator structure in Spain and CNIL's expanding AI Act remit in France.

Enforcement Posture Analysis

Translating each authority's documented enforcement priorities, inspection focus areas, and published sanction decisions into concrete operational requirements for your specific systems and sector.

AEPD-Specific Compliance

Spain requires a DPIA for any AI system processing personal data — more stringent than GDPR's baseline. We build AEPD-compliant documentation stacks including the AI inventory, DPIAs, and Spanish-specific data protection obligations under LOPDGDD.

CNIL Guidance Implementation

Translating CNIL's 12 AI guidance sheets and its 2025–2028 strategic plan into practical compliance measures — including legitimate interest assessments, data annotation standards, and model compliance documentation.

Regulatory Correspondence Support

Drafting responses to AEPD or CNIL information requests, preparing documentation packages for inspections, and coordinating with your legal counsel during active regulatory procedures.

Cross-Border Compliance Architecture

For organizations operating in both markets: a unified compliance architecture that satisfies both authorities without duplicating effort — covering documentation standards, oversight procedures, and incident reporting.

Who this is for

  • Organizations that have received AEPD or CNIL correspondence
  • Companies deploying biometric or emotion AI in Spain or France
  • Tech companies selling AI products into both markets
  • Organizations preparing for proactive regulatory engagement

Languages of delivery

  • All documentation delivered in English
  • Spanish-language versions for AEPD submissions
  • French-language versions for CNIL submissions
  • Trilingual regulatory correspondence support

Discuss your regulatory situation

Engagements are scoped to your specific jurisdiction exposure. We respond to urgent regulatory queries within 48 hours.

05 / 06

AI Ethics
Board &
Governance

Workshop format

Internal governance that functions — not a committee that meets once a year.

The EU AI Act requires organizations deploying high-risk AI to implement human oversight measures, risk management systems, and quality management processes. In practice, these obligations require someone inside your organization to own AI governance — to make decisions about system deployment, monitor for issues, and escalate when something goes wrong. An AI ethics board or governance committee, properly designed, is the structural answer to that requirement. Improperly designed, it is a liability that creates the appearance of governance without the substance.

What the EU AI Act actually requires

Article 9 mandates a risk management system that is continuous, iterative, and documented. Article 14 requires human oversight measures that enable designated persons to monitor, intervene, and override. Article 17 requires a quality management system covering roles, responsibilities, and procedures. A properly constituted AI governance board satisfies all three obligations simultaneously.

Governance Architecture Design

Designing the right structure for your organization — standalone AI ethics committee, sub-committee of the audit board, or integrated into an existing risk governance framework — with clear rationale for regulatory defensibility.

Terms of Reference

Drafting the committee's founding document: mandate, scope, membership criteria, quorum rules, decision-making authority, escalation thresholds, and relationship to executive leadership and the board.

Role Design & Accountabilities

Defining specific roles within the governance structure — AI Officer, technical reviewer, legal/compliance representative, affected stakeholder voice — with clear accountability maps aligned to EU AI Act Article 26 obligations for deployers.

Decision Frameworks

Practical frameworks for the decisions the committee actually needs to make: AI system deployment approval, risk acceptance, incident escalation, override authorization, and third-party AI vendor assessment.

Meeting Cadence & Agenda Templates

Recommended meeting frequency, standing agenda structure, pre-read requirements, and minute-taking standards — designed to generate the documented evidence of governance activity that regulators look for.

First-Year Facilitation

Optional: facilitation of the committee's first three to four meetings, ensuring the governance process becomes operational rather than remaining a document. Includes post-meeting coaching for the chair.

1

Governance diagnostic (half day)

Assessment of existing governance structures, decision-making culture, and regulatory obligations specific to your organization. Identifies the right governance model before any design work begins.

2

Design workshop (full day)

Facilitated session with senior stakeholders to co-design the governance structure, resolve tensions between legal, technical, and business requirements, and build internal alignment before documentation begins.

3

Documentation delivery (2 weeks)

Terms of reference, role descriptions, decision frameworks, and all supporting documentation — drafted, reviewed with your team, and finalized for adoption.

Design your AI governance structure

Delivered in-person in Spain or France, or remotely. Typical engagement: 3–4 weeks from kick-off to documentation delivery.

06 / 06

AI Literacy
Training

In-person / Remote

Article 4 is not optional — and "awareness" is not the same as literacy.

Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure that their staff have sufficient AI literacy — the knowledge and skills to understand the capabilities and limitations of the AI systems they work with, and to use them responsibly. This obligation has been in force since February 2025. Most organizations have not yet documented how they are meeting it. We design and deliver training programmes that satisfy the legal requirement and generate the evidence of compliance that regulators will ask for.

Article 4 — what it actually requires

Organizations must take measures to ensure a "sufficient level of AI literacy" among staff who use or oversee AI systems — proportionate to the context, the risk level of the systems, and the role of the individual. Generic awareness training does not meet this standard for staff operating Annex III high-risk systems. Documented, role-specific training with evidence of completion is the minimum defensible position.

AI Literacy Foundations

Half-day programme for non-technical staff: what AI systems are, how they make decisions, what can go wrong, and what EU AI Act obligations mean in practice for employees who use AI tools in their daily work.

High-Risk AI Operations

Full-day programme for staff operating or overseeing Annex III systems. Covers technical documentation obligations, human oversight responsibilities, incident identification, and escalation procedures — documented to satisfy Article 14 oversight requirements.

Legal & Compliance Track

Half-day programme for legal, compliance, and risk teams: EU AI Act structure, enforcement landscape, AEPD and CNIL postures, ISO 42001 overview, and how to build a defensible compliance programme.

Technical Teams Track

Full-day programme for ML engineers, data scientists, and product teams: AI Act obligations for providers, technical documentation requirements, conformity assessment pathways, and responsible AI development practices.

Board & Executive Briefing

Two-hour facilitated session for board members and C-suite: regulatory landscape, enforcement risk, governance obligations, and the questions every board should be asking about AI in their organization.

Evidence Package

For every training delivered: attendance records, learning outcomes documentation, assessment results (where applicable), and a compliance certificate — formatted as evidence for AEPD, CNIL, or ISO 42001 audit purposes.

Delivery formats

  • In-person at your premises (Spain or France)
  • Remote via video conference
  • Hybrid formats for distributed teams
  • English, French, or Spanish delivery
  • Materials localized for each jurisdiction

What every programme includes

  • Pre-training needs assessment
  • Customized content for your specific AI systems
  • Participant workbook and reference materials
  • Post-training assessment
  • Full compliance evidence package
  • 12-month refresh notification

Discuss your training requirements

We design programmes around your specific AI systems, team structure, and regulatory obligations — not off-the-shelf content.

Not sure
where to start?

Most engagements begin with a readiness assessment — a clear picture of your exposure before we discuss anything else. No commitment required beyond the initial conversation.

Request a free initial consultation