# Compliance Readiness Checklist
Five questions to assess whether your agentic AI systems are EU AI Act compliant.
## How to use this checklist
If the answer to any question is "No," your system is likely non-compliant under the EU AI Act. Use the linked resources to understand the gap and implementation path.
## The Five Questions

### 1. Identity Persistence
Does your agent have a definition of "self" that persists beyond the context window?
| Answer | Implication |
|---|---|
| ✅ Yes | Agent maintains consistent behavior regardless of session length |
| ❌ No | Agent may "forget" safety rules in long sessions (Article 15 violation) |
**What to check:**
- Can the agent operate for 2+ hours without behavioral drift?
- If you inject safety instructions at the start, are they still active after 50+ exchanges?
- Does session length correlate with compliance incidents?
**ArcKernel solution:** O(1) Symbolic Memory
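One way to make these checks pass is to keep the agent's "self" definition in durable storage rather than in the rolling context window, and re-inject it on every turn. A minimal sketch, assuming a hypothetical `agent_identity.json` spec and a simple string-prompt interface (names and format are illustrative, not ArcKernel's API):

```python
import json
from pathlib import Path

# Hypothetical: the identity/safety spec lives on disk, outside the context window.
IDENTITY_PATH = Path("agent_identity.json")

def load_identity() -> dict:
    """Load the persistent identity/safety spec from durable storage."""
    return json.loads(IDENTITY_PATH.read_text())

def build_turn_prompt(history: list[str], user_msg: str) -> str:
    """Re-assert the identity spec on every turn, so truncating the
    chat history can never truncate the safety rules."""
    identity = load_identity()
    rules = "\n".join(identity["safety_rules"])
    recent = "\n".join(history[-20:])  # window the chat, never the rules
    return f"[IDENTITY]\n{rules}\n\n[RECENT]\n{recent}\n\n[USER]\n{user_msg}"
```

With this shape, the answer to "are the safety instructions still active after 50+ exchanges?" is yes by construction: the rules are re-read from storage each turn rather than relying on them surviving at the front of a long context.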
### 2. Runtime Interruption
Can you stop the agent mid-reasoning, or before an action executes, when drift is detected?
| Answer | Implication |
|---|---|
| ✅ Yes | System can prevent non-compliant actions before execution |
| ❌ No | You can only detect violations after damage is done (Article 14 violation) |
**What to check:**
- Is there a mechanism to halt execution before tool calls complete?
- Does your system detect drift in real-time or only in post-hoc review?
- Can high-risk actions be automatically escalated to humans?
**Note:** Post-hoc moderation (reviewing logs after execution) does not satisfy Article 14.
**ArcKernel solution:** HALT Protocol
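The structural requirement here is that the drift check runs *before* the tool executes, not after. A minimal sketch of a pre-execution guard, with hypothetical `drift_check` and `escalate` callables standing in for whatever detector and human-escalation path your system provides:

```python
from typing import Any, Callable

class HaltError(Exception):
    """Raised to stop the agent before a non-compliant action executes."""

def guarded_tool_call(tool: Callable, args: dict,
                      drift_check: Callable[[Callable, dict], bool],
                      escalate: Callable[[str], None]) -> Any:
    """Run the drift check before the tool call completes.
    Post-hoc log review cannot undo a side effect that already happened."""
    if drift_check(tool, args):
        escalate(f"halted: {tool.__name__}({args})")
        raise HaltError(tool.__name__)
    return tool(**args)
```

The key design choice is that the guard sits on the execution path itself: a flagged call is escalated and never reaches the tool, rather than being logged for later review.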
### 3. Intent Logging
Do your logs capture the *why* and *how* of decisions, or just the chat text?
| Answer | Implication |
|---|---|
| ✅ Yes | You can explain and defend any agent action in an audit |
| ❌ No | Logs are legally insufficient for traceability (Article 12 violation) |
**What to check:**
- Can you answer "Why did the agent choose Tool A over Tool B?"
- Do logs capture the reasoning chain, not just input/output?
- Is there a tamper-proof record linking actions to intent?
**ArcKernel solution:** MirrorLock Audit
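The three checks above map onto a log record that captures intent and rejected alternatives (the "Why did the agent choose Tool A over Tool B?" question) and chains each record to the previous one's hash so after-the-fact edits are detectable. A minimal sketch, not MirrorLock's actual record format:

```python
import hashlib
import json
import time

def append_intent_record(log: list[dict], *, action: str, intent: str,
                         reasoning: str, alternatives: list[str]) -> dict:
    """Append a record capturing the why/how, chained to the previous
    record's hash so tampering breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"ts": time.time(), "action": action, "intent": intent,
            "reasoning": reasoning, "rejected_alternatives": alternatives,
            "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edit to any field breaks verification."""
    prev = "genesis"
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

A hash chain alone is tamper-*evident*, not tamper-*proof*; production systems typically anchor the chain head in append-only or external storage.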
### 4. Tool Sovereignty
If the agent can call external APIs, do you have a governance layer controlling which tools it can use?
| Answer | Implication |
|---|---|
| ✅ Yes | Organization maintains control over data flows |
| ❌ No | Agent may access non-compliant services or exfiltrate data (Article 10 violation) |
**What to check:**
- Can the agent autonomously discover and call new APIs?
- Is there a whitelist of authorized tools and endpoints?
- Do you control what data the agent can send to external services?
**ArcKernel solution:** Tool whitelisting via kernel configuration
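A deny-by-default allowlist is the core of this governance layer: an unknown tool, or a known tool reaching a host outside its allowlist, is refused before any data leaves the organization. A minimal sketch with a hypothetical tool-to-host mapping (the configuration format is illustrative):

```python
from urllib.parse import urlparse

# Hypothetical allowlist: tool name -> set of endpoint hosts it may reach.
ALLOWED_TOOLS = {
    "search": {"api.internal.example"},
    "crm_lookup": {"crm.internal.example"},
}

def authorize(tool_name: str, endpoint_url: str) -> bool:
    """Deny by default: unlisted tools and unlisted hosts are both refused,
    so the agent cannot autonomously adopt new APIs or exfiltrate data."""
    hosts = ALLOWED_TOOLS.get(tool_name)
    return hosts is not None and urlparse(endpoint_url).hostname in hosts
```

Matching on the parsed hostname (rather than a substring of the URL) matters: a naive `"internal.example" in url` check is trivially bypassed by `https://evil.example/?internal.example`.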
### 5. Human Proxy
What automated system acts as the proxy for human oversight at machine speed?
| Answer | Implication |
|---|---|
| ✅ Defined | Humans oversee rules; system enforces at runtime |
| ❌ Undefined | No effective oversight mechanism exists (Article 14 violation) |
**What to check:**
- Who defines the boundaries the agent must operate within?
- How are those boundaries enforced during execution (not just at deployment)?
- What happens when the agent approaches a boundary?
**ArcKernel solution:** IDNA + HALT Protocol form the human proxy layer
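The division of labor above can be sketched as: humans define a boundary once, and an automated proxy enforces it on every action at machine speed, escalating as the agent *approaches* the limit rather than only halting at it. A minimal illustration using a spending limit as the boundary (the limit type and thresholds are hypothetical):

```python
def check_boundary(spend_so_far: float, proposed: float, *,
                   hard_limit: float, warn_ratio: float = 0.8) -> str:
    """Human-defined hard_limit, machine-speed enforcement per action.
    Returns 'allow', 'escalate' (approaching the boundary), or 'halt'."""
    total = spend_so_far + proposed
    if total > hard_limit:
        return "halt"          # boundary crossed: block the action
    if total > warn_ratio * hard_limit:
        return "escalate"      # approaching boundary: route to a human
    return "allow"
```

The `escalate` band answers the third check directly: the agent does not silently run up to the boundary; a human is brought into the loop before it gets there.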
## Scoring
| Score | Assessment |
|---|---|
| 5/5 ✅ | System architecture supports compliance |
| 3-4/5 | Gaps exist — prioritize remediation before August 2026 |
| 0-2/5 | Significant re-architecture required |
## Next Steps
If you scored 3/5 or below: