EU AI Act Compliance Mapping
How ArcKernel addresses the specific requirements of the EU Artificial Intelligence Act for agentic systems.
Deadline: August 2, 2026
High-risk AI systems under Annex III must be fully compliant. Agentic systems operating in employment, finance, healthcare, or critical infrastructure fall into this category.
The Agentic Compliance Gap
The EU AI Act was drafted between 2021 and 2023 — before autonomous agents existed at scale. It regulates "systems" (static artifacts), but agents are "processes" (dynamic flows).
This creates friction points:
- Article 14 requires human oversight, but agents execute thousands of decisions per hour
- Article 12 requires traceability, but agents generate their own reasoning chains
- Article 15 requires robustness, but agents suffer context amnesia over long sessions
ArcKernel bridges this gap with runtime governance — enforcing compliance during execution, not just at deployment.
Component Mapping — Default Stack
| Module | EU AI Act Article | Regulatory Requirement | What ArcKernel Does |
|---|---|---|---|
| HALT Protocol | Article 9, 14 | Risk management; ability to interrupt | Real-time drift detection via cosine distance. Binary PASS/BLOCK gate. Pre-action enforcement — blocks before output. |
| IDNA Protocol | Article 13, 14 | Transparency; human override | Machine-readable + human-readable intent declaration. Users query: "Why did the agent do X?" IDNA provides the answer. |
| mOm4 | Article 15 | Resist errors, faults, adversarial attacks | Write-once immutable identity baseline. Cannot be overwritten by adversarial prompts. SHA-256 verified. 100% integrity (1,857 checks). |
| mOm5 | Article 15 | Predictive risk management | Drift trajectory forecasting — predicts behavioral breach before it manifests. AUPRC 0.54 vs 0.37 baseline. |
| mOm6 | Article 9 | Method compliance monitoring | HOW enforcement — ensures correct reasoning methodology. Finance 99.2%, Legal 97.6%. 75 violations HALT alone missed. |
| TrustAnchor | Article 12 | Traceability of system functioning | Per-output, three-dimension compliance scoring. 201 events where HALT passed but TrustAnchor flagged. |
| DriftDefenseStack | Article 14 | Continuous monitoring | Structural degradation defense. 57 events HALT missed. Independent signal (r=0.40). |
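The HALT row above compresses the core mechanism into one cell. As a minimal sketch of a cosine-distance PASS/BLOCK gate, assuming illustrative vectors and a hypothetical 0.35 threshold (the real threshold and embedding pipeline are not specified in this document):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def halt_gate(baseline_vec, output_vec, threshold=0.35):
    """Binary PASS/BLOCK decision: block before the output is emitted
    if it drifts too far from the declared identity baseline."""
    drift = cosine_distance(baseline_vec, output_vec)
    return ("PASS", drift) if drift <= threshold else ("BLOCK", drift)
```

The key property is pre-action enforcement: the gate runs on the candidate output before it leaves the system, not on a log after the fact.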
Component Mapping — Extended Stack
| Module | EU AI Act Article | Regulatory Requirement | What ArcKernel Does |
|---|---|---|---|
| OxygenProtocol | Article 14 | Context-appropriate output | Six output registers (O0–O5). Compliance locked to O0/O1. Wrong register triggers drift flag. Strongest marginal signal (d=0.32). |
| EchoMap | Article 12, 13 | Traceability + transparency over time | Trust ledger. Per-action fidelity scores, drift flags, loop closure reports. Exportable compliance evidence. |
| MirrorLock | Article 12, 14 | Immutable audit trail | Tamper-proof per-action record: IDNA state, drift score, decision vector, tool authorization, kernel witness hash. |
| soul.exe | Article 12, 15 | Full-stack audit integrity; robustness | Orchestration loop. 867/867 snapshot integrity. Zero interaction failures (0/192). Order-independent (delta 0.0007). |
| Tool Whitelisting | Article 10 | Data governance | Kernel-level configuration. Controls which tools the agent can access and what data it can send. Every API call validated. |
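The OxygenProtocol row describes register locking. A minimal sketch of such a check, where the register names (O0–O5) and the compliance lock to O0/O1 come from the table but the function shape is an assumption:

```python
# Registers permitted while operating in compliance mode (per the
# OxygenProtocol description: compliance locked to O0/O1).
COMPLIANCE_REGISTERS = {"O0", "O1"}

def check_register(output_register, compliance_mode=True):
    """Return (allowed, flag): an out-of-register output in compliance
    mode raises a drift flag rather than passing silently."""
    if compliance_mode and output_register not in COMPLIANCE_REGISTERS:
        return False, "drift_flag:wrong_register"
    return True, None
```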
Article-by-Article Breakdown
Article 9: Risk Management
Requirement: Establish a risk management system that operates throughout the lifecycle of the AI system.
The Problem: Traditional risk management is periodic — quarterly reviews, annual audits. Agents generate thousands of decisions per hour. Risk accumulates between reviews.
ArcKernel Solution: Continuous runtime risk monitoring. HALT evaluates every output against declared identity. mOm6 monitors method compliance. DriftDefenseStack detects structural degradation independent of content-level drift. Risk is measured and acted on per inference cycle, not per audit cycle.
Empirical: 68.5% scope creep reduction, 99.2% method compliance (Finance), 57 degradation events caught that content-level drift detection alone missed.
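The per-cycle evaluation described above can be sketched as a threshold sweep over independent governance signals. The signal names and threshold values here are illustrative assumptions, not ArcKernel's actual configuration:

```python
def evaluate_cycle(signals, thresholds):
    """Per-inference-cycle risk check: every governance signal is
    compared to its threshold on every cycle, not at audit time.
    Returns the list of breached signals (empty list = proceed)."""
    return [name for name, value in signals.items()
            if value > thresholds.get(name, float("inf"))]

# One inference cycle with three independent signals (illustrative values):
signals = {"content_drift": 0.12, "method_gap": 0.02, "structural_degradation": 0.41}
thresholds = {"content_drift": 0.35, "method_gap": 0.05, "structural_degradation": 0.30}
breaches = evaluate_cycle(signals, thresholds)  # ["structural_degradation"]
```

Because each signal is checked independently, a breach invisible to content-level drift detection (as in the 57 degradation events above) still trips the gate.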
Article 12: Record-Keeping
Requirement: Automatic recording of events to ensure traceability of system functioning.
The Problem: In agentic systems, a single user request may trigger 50+ reasoning steps and 20+ tool calls. Logging only input/output is legally insufficient.
ArcKernel Solution: MirrorLock creates an immutable audit trail that captures:
- The IDNA state at time of action
- The drift score (behavioral deviation metric)
- The decision vector (why this action was chosen)
- Tool calls and their authorization status
```jsonc
// Example audit entry
{
  "timestamp": "2026-01-29T14:32:01Z",
  "idna": "IDNA://FinancialAdvisor::PreserveCapital∴RiskAverse⇒AssetAllocation",
  "action": "portfolio_rebalance",
  "drift_score": 0.12,
  "authorized": true,
  "witness": "kernel_v1.2.3"
}
```

Article 13: Transparency
Requirement: Systems must be sufficiently transparent for users to interpret outputs.
The Problem: Deep learning models are "black boxes." Agents compound this by exhibiting emergent behavior — combining benign instructions into unexpected outcomes.
ArcKernel Solution: IDNA (Intent DNA) provides a machine-readable and human-readable declaration of intent:
```
IDNA://[ROLE]::[WHY]∴[HOW]⇒[WHAT]
```

- ROLE: What the agent is (e.g., FinancialAdvisor)
- WHY: Core intent being protected (e.g., PreserveCapital)
- HOW: Method constraints (e.g., RiskAverse)
- WHAT: Scope boundaries (e.g., AssetAllocation)
Users can query: "Why did the agent do X?" The IDNA provides the answer.
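The IDNA string is mechanical enough to parse directly. A minimal parser sketch following the delimiter grammar shown above (error handling and field validation are simplified, and this is not ArcKernel's actual parser):

```python
def parse_idna(idna):
    """Parse IDNA://ROLE::WHY∴HOW⇒WHAT into its four fields,
    following the delimiter grammar of the format spec above."""
    prefix = "IDNA://"
    if not idna.startswith(prefix):
        raise ValueError("not an IDNA string")
    body = idna[len(prefix):]
    role, rest = body.split("::", 1)
    why, rest = rest.split("∴", 1)
    how, what = rest.split("⇒", 1)
    return {"ROLE": role, "WHY": why, "HOW": how, "WHAT": what}

fields = parse_idna(
    "IDNA://FinancialAdvisor::PreserveCapital∴RiskAverse⇒AssetAllocation")
```

Because the same string is both machine-readable and human-readable, the answer to "Why did the agent do X?" can be surfaced to a user without translation.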
Article 14: Human Oversight
Requirement: High-risk systems must be "effectively overseen by natural persons" who can interrupt or override.
The Problem: Agents operate at machine speed. A human cannot review 10,000 micro-decisions before they execute. By the time you see the log, the funds have transferred.
ArcKernel Solution: Shift from transactional oversight to architectural oversight.
- Humans define the rules — boundaries, thresholds, prohibited actions
- HALT Protocol enforces at runtime — if drift score exceeds threshold, execution stops
- Escalation to human — high-risk deviations trigger human review before proceeding
The human oversees the rules, not the runs.
```javascript
// HALT logic
if (drift_score > threshold) {
  HALT();
  escalate_to_human(action, context, drift_score);
}
```

Article 15: Robustness & Security
Requirement: Systems must resist errors, faults, and adversarial attacks including data poisoning.
The Problem: Agents suffer from two critical vulnerabilities:
- Context Amnesia: Safety instructions "fade" as context windows fill up (the Quadratic Wall)
- Prompt Injection: Adversaries embed malicious instructions in untrusted inputs
ArcKernel Solution:
- O(1) Symbolic Memory: Agent state lives outside the context window in a 3–8KB kernel. Safety rules never fade. Governance overhead is bounded and does not grow with conversation length.
- mOm4 (Immutable Baseline): Write-once canonical identity. Cannot be overwritten by adversarial prompts. 100% integrity across 1,857 SHA-256 checks.
- mOm5 (Drift Forecaster): Predicts behavioral drift before it manifests (AUPRC 0.54 vs 0.37 baseline).
- soul.exe (Orchestration Robustness): All 9 modules operate independently (d=0.00), in any execution order (delta 0.0007), with zero interaction failures (0/192).
- Tool Whitelisting: Kernel validates every external call against authorized list.
Empirical: 72% adversarial resilience across 447 attacks (Claude). 100% block rate across 5 defined authority-escalation variants. Governance-layer determinism: zero variance across 1,800 checks.
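The mOm4 integrity checks reduce to comparing a write-once SHA-256 digest against the current identity text. A minimal sketch, with the identity string borrowed from the audit example earlier (the storage and check cadence are assumptions):

```python
import hashlib

def baseline_digest(identity):
    """SHA-256 digest of the canonical identity baseline, computed once
    at write time (write-once: the stored digest is never changed)."""
    return hashlib.sha256(identity.encode("utf-8")).hexdigest()

def verify_baseline(identity, stored_digest):
    """Integrity check: any overwrite of the identity text, adversarial
    or accidental, changes the digest and fails verification."""
    return baseline_digest(identity) == stored_digest

canonical = "IDNA://FinancialAdvisor::PreserveCapital∴RiskAverse⇒AssetAllocation"
stored = baseline_digest(canonical)
```

The 1,857 checks reported above are, in effect, repeated calls to a verification like this against the stored digest.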
Article 10: Data Governance
Requirement: Control over training, validation, and testing data to minimize errors and bias.
The Problem: Agents ingest data dynamically at runtime via tool calls and APIs. This "runtime data" wasn't part of training and hasn't been vetted.
ArcKernel Solution: Agentic Tool Sovereignty — the organization controls which tools the agent can access and what data it can send.
- Kernel maintains authorized tool whitelist
- Every API call checked against data sovereignty rules
- Blocks calls to non-compliant jurisdictions or unauthorized endpoints
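The three bullets above can be sketched as a single validation function. The whitelist contents, jurisdiction codes, and return shape are illustrative assumptions:

```python
# Kernel-side configuration (illustrative values, not a real whitelist).
AUTHORIZED_TOOLS = {"portfolio_rebalance", "market_data_fetch"}
ALLOWED_JURISDICTIONS = {"EU", "EEA"}

def validate_tool_call(tool, endpoint_jurisdiction):
    """Every external call is checked before it leaves the kernel:
    unknown tools and non-compliant jurisdictions are both blocked."""
    if tool not in AUTHORIZED_TOOLS:
        return False, "blocked:unauthorized_tool"
    if endpoint_jurisdiction not in ALLOWED_JURISDICTIONS:
        return False, "blocked:non_compliant_jurisdiction"
    return True, "authorized"
```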
Penalty Context
| Violation Type | Maximum Fine |
|---|---|
| Prohibited practices (Article 5) | €35 million or 7% of global turnover |
| High-risk system obligations (Articles 16-29) | €15 million or 3% of global turnover |
| Incorrect information to authorities | €7.5 million or 1.5% of global turnover |
Each maximum is the greater of the fixed amount and the turnover percentage. Fines can be stacked with GDPR penalties (up to 4% of global turnover) if the non-compliance also involves personal data processing.
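Because each tier is the higher of the fixed cap and the turnover percentage, exposure scales with company size. A quick illustration with an assumed 2 billion euro turnover:

```python
def max_fine(turnover_eur, fixed_cap_eur, pct):
    """Maximum fine under the EU AI Act penalty structure: the higher
    of the fixed cap and the percentage of global annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# High-risk obligation breach for a firm with 2B EUR turnover:
# 3% of turnover (60M EUR) exceeds the 15M EUR fixed cap.
fine = max_fine(2_000_000_000, 15_000_000, 0.03)
```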
Further Reading
- Full EU AI Act Whitepaper — 25-page technical and legal analysis
- IDNA Protocol Specification — How symbolic identity works
- HALT Protocol — Runtime enforcement mechanics
- Quadratic Wall Explainer — Why transformer memory creates compliance risk