
The Compliance Reality of Deploying AI in Regulated Industries

Empirium Team · 10 min read

Your AI demo impressed the board. Then legal asked three questions: "Where does the data go? Can we explain the outputs? What happens when it is wrong?" The project stalled for six months.

Deploying AI in finance, healthcare, legal, and insurance is not a technical problem — it is a compliance problem. The model works. The regulations are what kill projects. Here is the compliance landscape and the architectural patterns that navigate it, based on our work at Empirium with clients in regulated sectors.

The Regulatory Landscape for AI

EU AI Act (In Force Since 2024; Most Obligations Apply from 2026)

The EU AI Act classifies AI systems by risk level and imposes requirements proportional to that risk. For B2B operators:

  • Prohibited: Social scoring, real-time biometric surveillance, manipulative AI
  • High-risk: AI used in employment decisions, credit scoring, insurance underwriting, legal proceedings, critical infrastructure
  • Limited risk: Chatbots (must disclose AI nature), emotion recognition, deepfakes
  • Minimal risk: Spam filters, recommendation engines, content tools — no specific requirements

High-risk systems require: conformity assessments, technical documentation, human oversight, accuracy and robustness testing, data governance, and post-market monitoring.
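The tiers above lend themselves to a simple lookup table for tagging internal use cases at intake. A minimal sketch, with our own shorthand category names (this is an illustrative mapping, not legal advice):

```typescript
// EU AI Act risk tiers as described above (illustrative, not legal advice)
type AiActTier = "prohibited" | "high" | "limited" | "minimal";

const USE_CASE_TIERS: Record<string, AiActTier> = {
  "social-scoring": "prohibited",
  "credit-scoring": "high",
  "insurance-underwriting": "high",
  "employment-screening": "high",
  "customer-chatbot": "limited", // must disclose AI nature
  "spam-filter": "minimal",
};

// Unknown use cases default to "high" so they get reviewed, not waved through.
function aiActTier(useCase: string): AiActTier {
  return USE_CASE_TIERS[useCase] ?? "high";
}
```

Defaulting unknown use cases to the strictest reviewed tier is the safe failure mode: an unclassified application should trigger a review, not slip through as minimal risk.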

Industry-Specific Regulations

| Industry | Key Regulations | AI-Relevant Requirements |
| --- | --- | --- |
| Finance | PCI-DSS, SOC2, MiFID II, Dodd-Frank | Data encryption, audit trails, explainability for credit decisions |
| Healthcare | HIPAA, GDPR (health data), MDR | Data anonymization, patient consent, clinical validation |
| Legal | Bar association rules, GDPR | Confidentiality, unauthorized practice of law concerns |
| Insurance | Solvency II, state insurance regulations | Actuarial soundness, non-discrimination, rate filing |
| Government | FedRAMP, FISMA, state procurement rules | Data sovereignty, security clearance, vendor vetting |

SOC2 and AI

SOC2 does not mention AI specifically, but its five Trust Services Criteria (security, availability, processing integrity, confidentiality, privacy) all apply to AI systems:

  • Security: How is the model API connection secured? Are API keys rotated? Is access logged?
  • Processing integrity: How do you verify the model's outputs are accurate? What is the error rate?
  • Confidentiality: Does customer data leave your infrastructure? Is it used for model training?
  • Privacy: How is PII handled in prompts? Is it redacted before sending to the model?
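Two of those questions — key rotation and access logging — translate directly into code. A minimal sketch, with the provider call stubbed out and a 90-day rotation window chosen purely for illustration:

```typescript
// Sketch: SOC2-relevant controls around a model API call.
interface AccessLogEntry { user: string; endpoint: string; at: Date }

const accessLog: AccessLogEntry[] = [];

const KEY_MAX_AGE_DAYS = 90; // rotation policy; pick per your audit requirements

function keyNeedsRotation(keyCreatedAt: Date, now = new Date()): boolean {
  const ageDays = (now.getTime() - keyCreatedAt.getTime()) / 86_400_000;
  return ageDays > KEY_MAX_AGE_DAYS;
}

async function callModel(user: string, prompt: string): Promise<string> {
  // Access logged before the call, so failed calls are auditable too.
  accessLog.push({ user, endpoint: "model-api", at: new Date() });
  // ... forward `prompt` to the provider over TLS, key pulled from a secrets manager
  return `response to: ${prompt}`; // stubbed provider response
}
```

In a real deployment the log would go to append-only storage and the key age would be checked against your secrets manager's metadata, not a local timestamp.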

Risk Classification

Not all AI deployments carry the same risk. A content suggestion tool is different from a credit scoring model.

High-Risk AI Applications

| Application | Risk Level | Why |
| --- | --- | --- |
| Credit scoring / lending decisions | Critical | Directly affects financial outcomes, discrimination risk |
| Insurance underwriting | Critical | Coverage denial, pricing discrimination |
| Medical diagnosis assistance | Critical | Patient safety, liability |
| Employment screening | High | Discrimination, regulatory scrutiny |
| Legal document analysis | High | Unauthorized practice of law risk |
| Customer support (financial) | Medium | May disclose account information, privacy risk |
| Content generation (marketing) | Low | Minimal regulatory exposure |

The Risk Assessment Framework

For each AI application, evaluate:

  1. Impact of wrong output: If the AI is wrong, what happens? Inconvenience? Financial loss? Physical harm?
  2. Reversibility: Can a wrong decision be easily reversed? A misclassified email is reversible. A denied loan application has lasting consequences.
  3. Protected class involvement: Does the decision affect people differently based on protected characteristics?
  4. Data sensitivity: What data does the AI process? Public information? PII? Health records? Financial data?
  5. Autonomy level: Does a human review the AI's output before it takes effect?

Score each factor 1-5. Total score determines the compliance investment needed:

  • 5-10: Minimal compliance requirements. Standard security practices.
  • 11-17: Moderate requirements. Audit trails, documentation, periodic review.
  • 18-25: High requirements. Full compliance framework, external audits, continuous monitoring.
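The scoring framework above is mechanical enough to encode directly. A sketch:

```typescript
// The five risk factors above, each scored 1-5.
interface RiskScores {
  impact: number;          // harm if the output is wrong
  reversibility: number;   // 5 = irreversible consequences
  protectedClass: number;  // degree of protected-class involvement
  dataSensitivity: number; // public data (1) through health/financial (5)
  autonomy: number;        // 5 = fully automated, no human review
}

type ComplianceTier = "minimal" | "moderate" | "high";

// Total score 5-25 maps onto the three tiers above.
function complianceTier(s: RiskScores): ComplianceTier {
  const total =
    s.impact + s.reversibility + s.protectedClass + s.dataSensitivity + s.autonomy;
  if (total <= 10) return "minimal";
  if (total <= 17) return "moderate";
  return "high";
}
```

For example, a marketing content tool might score ones and twos across the board and land in the minimal tier, while a credit-scoring model with no human review scores fours and fives and lands in the high tier.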

Compliance Requirements

Data Handling

The most common compliance killer: customer data sent to a third-party model provider.

The problem: When you send customer data to OpenAI or Anthropic's API, that data crosses your security boundary. Even with contractual guarantees that data is not used for training, the data traversal itself may violate data residency requirements.

Solutions by compliance level:

| Requirement | Solution | Cost Impact |
| --- | --- | --- |
| Basic (SOC2) | Use API with DPA, data not used for training | Minimal |
| Moderate (GDPR) | EU-hosted endpoints (Azure OpenAI EU, Anthropic EU) | +10-20% |
| High (HIPAA) | BAA with provider, PHI redaction before API calls | +20-40% |
| Maximum (FedRAMP) | Self-hosted models on authorized infrastructure | +200-500% |

PII Redaction

For moderate to high compliance requirements, redact PII before it reaches the model:

```typescript
function redactPII(text: string): { redacted: string; mapping: Map<string, string> } {
  const mapping = new Map<string, string>();
  let redacted = text;

  // Replace names, emails, phone numbers, SSNs, account numbers.
  // The naive two-capitalized-words pattern shown here is illustrative;
  // production systems use NER or dedicated PII-detection libraries.
  redacted = redacted.replace(/\b[A-Z][a-z]+ [A-Z][a-z]+\b/g, (match) => {
    const token = `[NAME_${mapping.size}]`;
    mapping.set(token, match);
    return token;
  });
  // ... additional PII patterns
  // e.g. redactPII("John Smith owes $400").redacted → "[NAME_0] owes $400"

  return { redacted, mapping };
}

// After the model responds, re-hydrate if needed
function rehydrate(response: string, mapping: Map<string, string>): string {
  let result = response;
  for (const [token, value] of mapping) {
    result = result.split(token).join(value); // replaces every occurrence, not just the first
  }
  return result;
}
```

The model never sees real PII. The mapping stays in your infrastructure.

Audit Trails

Every AI decision in a regulated context needs:

  • Input: What data was provided to the model (redacted version)
  • Output: What the model returned
  • Decision: What action was taken based on the output
  • Timestamp: When the decision was made
  • Model version: Which model and prompt version produced the output
  • Human review: Whether a human reviewed and approved the output

Store audit logs for the retention period required by your regulations (typically 5-7 years for financial, 6 years for HIPAA).
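Those six fields map onto one record per AI decision. A minimal sketch, using an in-memory array where production would use append-only (WORM) storage with your regulator's retention period:

```typescript
// One audit record per AI decision, covering the fields listed above.
interface AuditRecord {
  input: string;        // redacted version of what was sent to the model
  output: string;       // raw model response
  decision: string;     // action taken based on the output
  timestamp: string;    // ISO 8601
  modelVersion: string; // model plus prompt template version
  humanReviewed: boolean;
}

// In-memory sketch only; swap for append-only storage in production.
const auditTrail: AuditRecord[] = [];

function logDecision(r: Omit<AuditRecord, "timestamp">): AuditRecord {
  const record: AuditRecord = { ...r, timestamp: new Date().toISOString() };
  auditTrail.push(record);
  return record;
}
```

Stamping the timestamp inside the logger, rather than trusting the caller, keeps the trail consistent even when callers disagree about clocks or formats.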

Model Documentation (Model Cards)

High-risk AI systems require documentation covering:

  • Model purpose and intended use
  • Training data description (for fine-tuned models)
  • Known limitations and failure modes
  • Performance metrics on relevant evaluation datasets
  • Bias testing results
  • Update and maintenance schedule

Bias Testing

For AI systems that affect people (credit, hiring, insurance), you must test for discriminatory outcomes:

  • Run the model on test cases representing different demographic groups
  • Measure outcome differences across groups
  • Document any disparities and the mitigation steps taken
  • Re-test after every model update
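The "measure outcome differences" step is the quantitative core. A sketch of one common approach, comparing positive-outcome rates across groups (the 0.1 threshold in the comment is an illustrative choice, not a regulatory standard):

```typescript
// Sketch: compare positive-outcome rates across demographic groups.
interface TestCase { group: string; approved: boolean }

function approvalRates(cases: TestCase[]): Map<string, number> {
  const counts = new Map<string, { approved: number; total: number }>();
  for (const c of cases) {
    const g = counts.get(c.group) ?? { approved: 0, total: 0 };
    g.total += 1;
    if (c.approved) g.approved += 1;
    counts.set(c.group, g);
  }
  const rates = new Map<string, number>();
  for (const [group, g] of counts) rates.set(group, g.approved / g.total);
  return rates;
}

// Largest gap between any two groups' approval rates.
// Flag for investigation above a chosen threshold (e.g. 0.1).
function disparity(rates: Map<string, number>): number {
  const values = [...rates.values()];
  return Math.max(...values) - Math.min(...values);
}
```

A disparity above your threshold does not by itself prove discrimination, but it is the trigger for the documentation and mitigation steps listed above.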

Implementation Patterns

The Compliance-Ready Architecture

User Input → PII Redactor → Sanitized Input
    → Audit Logger (input logged)
        → Model API (via compliant endpoint)
            → Output Validator
                → Audit Logger (output logged)
                    → Human Review Queue (if high-risk)
                        → PII Rehydration → User Response

Every step is logged. PII never leaves your infrastructure. High-risk decisions require human approval. The entire chain is auditable.
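The chain above can be sketched as a simple stage pipeline. Every stage here is a stub you would replace with your real redactor, audit logger, and model client:

```typescript
// Minimal sketch of the compliance pipeline above; all stages are stubs.
type Stage = (text: string) => string;

// Naive SSN-shaped redaction for illustration only.
const redact: Stage = (t) => t.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]");

// Stand-ins for the audit logger and the model API call.
const logStep = (label: string): Stage => (t) => { console.log(label); return t; };
const model: Stage = (t) => `model output for: ${t}`;

function runPipeline(input: string, stages: Stage[]): string {
  return stages.reduce((text, stage) => stage(text), input);
}

const output = runPipeline("SSN 123-45-6789, please advise", [
  redact,                     // PII removed before anything else sees the text
  logStep("audit: input"),    // redacted input logged
  model,                      // compliant model endpoint
  logStep("audit: output"),   // output logged
]);
```

Ordering is the point: redaction runs first, so even the audit log never stores raw PII, and logging brackets the model call so both sides of the boundary are recorded.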

On-Premise Deployment

For maximum compliance, deploy models on your own infrastructure:

  • Hardware: NVIDIA A100/H100 GPUs ($2,000-$5,000/month for cloud instances)
  • Software: vLLM or TGI for inference, open-weight models (Llama 3.1, Mistral)
  • Trade-off: Full data control, but lower model quality than commercial APIs and significant operational overhead

Hybrid Approach

Use commercial APIs for non-sensitive tasks and self-hosted models for sensitive ones. A support agent might use Claude for general queries but route financial account questions to a self-hosted model that never sends data externally.
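The routing decision can be sketched as a small classifier. The keyword check here is a deliberately naive placeholder; in practice you would use an intent classifier or explicit tags on the conversation:

```typescript
// Sketch of hybrid routing: sensitive queries stay on the self-hosted model.
// Keyword matching is a placeholder for a real sensitivity classifier.
const SENSITIVE_PATTERNS = [/account/i, /balance/i, /\bssn\b/i];

type Backend = "self-hosted" | "commercial-api";

function routeQuery(query: string): Backend {
  return SENSITIVE_PATTERNS.some((p) => p.test(query))
    ? "self-hosted"
    : "commercial-api";
}
```

The failure modes are asymmetric: over-routing to the self-hosted model costs some answer quality, while under-routing leaks sensitive data externally, so a real classifier should be tuned to err toward the self-hosted side.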

FAQ

Do I need a compliance certification for AI? There is no "AI certification" yet. You need to comply with existing regulations that apply to your industry. The EU AI Act will require conformity assessments for high-risk systems starting in 2026. ISO 42001 (AI management systems) provides a voluntary framework.

Who is liable when AI makes a wrong decision? The deployer, not the model provider. OpenAI's terms of service explicitly disclaim liability for outputs. If your AI system denies a loan application incorrectly, your organization is liable, not the model provider. This is why human oversight is essential for high-risk decisions.

How much does compliance infrastructure add to AI project costs? For basic compliance (SOC2-level): 10-20% cost increase. For moderate compliance (HIPAA/GDPR): 30-50%. For maximum compliance (FedRAMP/on-premise): 200-500%. The cost scales with the sensitivity of the data and the autonomy of the AI system. See our AI cost analysis.

Can I use ChatGPT/Claude for regulated workflows? The consumer versions (ChatGPT, Claude.ai) — no. The API versions with appropriate DPAs and enterprise agreements — yes, for most moderate-compliance use cases. For high-compliance requirements, use dedicated endpoints (Azure OpenAI, AWS Bedrock) or self-hosted models.

Compliance is not optional, and it is not an afterthought. Build it into your AI architecture from day one. We help regulated businesses deploy AI compliantly.

Written by Empirium Team
