Default Stack Deployment
What a deployed kernel stack looks like, what each layer does, and what the buyer discovers.
What the Buyer Configures
Five parameters. No code changes. No model access required.
| Parameter | What It Controls | Example |
|---|---|---|
| IDNA Declaration | Agent identity: role, intent, method constraints, scope boundaries | IDNA://ComplianceAnalyst::PreserveRegulatory∴FactualOnly⇒AuditReport |
| HALT Threshold | Drift tolerance (cosine distance) | Banking: 0.70 · Customer service: 0.75 · Creative: 0.85 |
| Output Register | Output abstraction depth, 6 levels (O0–O5) | O0 (Compliance) · O2 (Assistant) · O3 (Brand Voice) |
| Tool Whitelist | Authorized external APIs. Everything else blocked and logged. | Allow: internal-api.company.com · Block: all others |
| DDS Threshold | Structural coherence sensitivity | Banking: 0.35 · Standard: 0.45 · Creative: 0.55 |
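The five parameters above can be captured in a single configuration object. A minimal sketch in Python — the key names are illustrative assumptions for this example, not the ArcKernel configuration schema:

```python
# Illustrative shape of the five buyer-facing parameters (banking profile).
# Key names are assumptions for this sketch, not the ArcKernel schema.
deployment_config = {
    "idna": "IDNA://ComplianceAnalyst::PreserveRegulatory∴FactualOnly⇒AuditReport",
    "halt_threshold": 0.70,   # drift tolerance (cosine distance)
    "output_register": "O0",  # compliance register
    "tool_whitelist": ["internal-api.company.com"],  # all other APIs blocked and logged
    "dds_threshold": 0.35,    # structural coherence sensitivity
}
```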
Deployment Sequence
1. Declare IDNA — define agent identity, role, constraints
2. Select output register — set abstraction depth (O0–O5)
3. Set thresholds — HALT drift tolerance + DDS coherence sensitivity
4. Configure tool whitelist — authorize permitted external APIs
5. Run baseline tests — 10–20 prompts covering expected use cases
6. Calibrate thresholds — adjust based on baseline results
7. Enable audit export — connect logging to compliance pipeline
8. Go live — shadow → selective → full production
Steps 1–4 = configuration. 5–6 = calibration. 7–8 = activation. No code changes required.
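Steps 5–6 (calibration) amount to measuring drift on benign baseline prompts and setting the HALT threshold just above the worst observed benign score. A minimal sketch of that idea, assuming the documented threshold range (0.70 banking to 0.85 creative) — the function and its margin are illustrative, not the ArcKernel calibration procedure:

```python
def calibrate_halt_threshold(baseline_drift_scores, margin=0.05,
                             floor=0.70, ceiling=0.85):
    """Set HALT just above the worst benign drift seen in baseline tests.

    Illustrative only: clamps the result to the documented threshold
    range (0.70 banking .. 0.85 creative).
    """
    observed_max = max(baseline_drift_scores)
    return min(max(observed_max + margin, floor), ceiling)
```

For example, baseline scores of 0.40–0.62 would clamp to the 0.70 floor, while a noisy creative deployment would cap at 0.85.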
Runtime Processing Flow
Every input passes through the full kernel stack before delivery.
```
INPUT (message / prompt / agent action)
 ├→ IDNA.core ──────── Encode intent header (~12 tokens)
 ├→ mOm4 ───────────── Lock canonical identity (write-once, SHA-256)
 ├→ mOm5 ───────────── Project trajectory (drift forecast)
 ├→ mOm6 ───────────── Coherence closure (HOW enforcement)
 ├→ OxygenProtocol ─── Calibrate output register O0–O5
 └→ soul.exe ───────── Assemble validated loop snapshot
     │
 ├→ HALT ───────────── Binary drift gate (PASS/BLOCK)
 ├→ EchoMap ────────── Observe + score fidelity
 ├→ TrustAnchor ────── Verify signal (pass/review/reject)
 └→ DriftDefenseStack  Monitor loop health + auto-remediate
OUTPUT (governed, identity-verified, auditable)
```
Total overhead: P95 < 500ms isolated, median ~115ms. Full-stack: −18ms net (governance pays for itself via token reduction).
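HALT, the binary gate in the flow above, is described as a pass/block decision on cosine drift distance. A minimal sketch of that gate — function names and the embedding inputs are assumptions for illustration, not the ArcKernel API:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def halt_gate(baseline_embedding, output_embedding, threshold=0.75):
    """Binary drift gate: PASS if drift is within tolerance, else BLOCK."""
    drift = cosine_distance(baseline_embedding, output_embedding)
    return ("PASS" if drift <= threshold else "BLOCK", drift)
```

An aligned output (drift ≈ 0) passes; an orthogonal one (drift ≈ 1.0) exceeds any configured threshold and blocks.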
The Dual-Face Pattern
Every module has two faces. The buyer pays for governance. They discover performance.
| Module | Governance Face (Buy) | Performance Face (Discover) | Stack |
|---|---|---|---|
| IDNA | Agent identity declaration | Prompt compression (~12 tokens) | Core |
| mOm4 | Immutable identity record | O(1) memory recall | Core |
| mOm5 | Drift forecasting | Trajectory optimization | Core |
| mOm6 | HOW constraint enforcement | Response precision (+31% token efficiency) | Core |
| OxygenProtocol | Output register governance | Abstraction depth control | Core |
| soul.exe | Loop snapshot validation | Reasoning chain integrity | Core |
| HALT | Binary drift gate | Pre-delivery quality filter | Enforcement |
| EchoMap | Fidelity scoring | Output consistency tracking | Enforcement |
| TrustAnchor | Signal verification | Confidence calibration | Enforcement |
| DriftDefenseStack | Loop health monitoring | Auto-remediation (self-healing) | Enforcement |
| Full Stack | EU AI Act compliance infrastructure | −18ms net latency, −31% tokens | All |
Key insight: The buyer purchases governance infrastructure. They discover their AI gets faster, cheaper, and more precise. The dual-face pattern is why ArcKernel sells as compliance and retains as performance.
Deployment Profiles
| Profile | IDNA Role | Register | HALT | DDS | mOm6 Constraint |
|---|---|---|---|---|---|
| Banking / Compliance | ComplianceAnalyst | O0 | 0.70 | 0.35 | FactualOnly |
| Customer Service | ServiceAssistant | O2 | 0.75 | 0.45 | EmpathicAccurate |
| Brand Voice | BrandVoice | O3 | 0.80 | 0.50 | NarrativeAligned |
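These profiles are just preset bundles of the five parameters. A sketch of how they might be expressed in code — the dictionary layout and key names are assumptions for this example, not an ArcKernel artifact:

```python
# Illustrative preset bundles of the five parameters; names are assumptions.
PROFILES = {
    "banking":          {"idna": "ComplianceAnalyst", "register": "O0",
                         "halt": 0.70, "dds": 0.35, "mom6": "FactualOnly"},
    "customer_service": {"idna": "ServiceAssistant",  "register": "O2",
                         "halt": 0.75, "dds": 0.45, "mom6": "EmpathicAccurate"},
    "brand_voice":      {"idna": "BrandVoice",        "register": "O3",
                         "halt": 0.80, "dds": 0.50, "mom6": "NarrativeAligned"},
}

def select_profile(name):
    """Return a copy of a preset so per-deployment overrides stay local."""
    return dict(PROFILES[name])
```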
Audit Artifacts
Every deployment produces five artifact types, exportable to existing compliance pipelines.
| Artifact | Format | Contents | Retention |
|---|---|---|---|
| Event Log | JSON | Every HALT decision: timestamp, drift score, pass/block, input hash | Configurable (default 90 days) |
| Evidence Bundle | JSON + SHA-256 | Full reasoning chain for blocked responses, tamper-evident | Retained until manual purge |
| Drift Trend Summary | JSON / CSV | Per-agent drift trajectory over time, anomaly flags | Rolling 12 months |
| Incident Report | Structured JSON | Auto-generated on HALT block: context, severity, remediation taken | Per regulatory requirement |
| Retention Controls | Policy config | Per-tenant retention rules, auto-purge schedules, GDPR compliance | N/A (configuration) |
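The event log row above lists four fields per HALT decision: timestamp, drift score, pass/block, input hash. A minimal sketch of one such JSON entry — the field names mirror the table, but the exact schema is an assumption of this example:

```python
import hashlib
import json
from datetime import datetime, timezone

def halt_event(input_text, drift_score, threshold):
    """Build one event-log entry for a HALT decision (illustrative schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "drift_score": round(drift_score, 4),
        "decision": "PASS" if drift_score <= threshold else "BLOCK",
        # SHA-256 of the raw input, so the log never stores the input itself
        "input_hash": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
    }

entry = halt_event("Summarize Q3 regulatory filings.", 0.81, 0.70)
record = json.dumps(entry)  # serialized line for the audit pipeline
```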
Known Limitations
- Threshold calibration is deployment-specific. Default thresholds (HALT 0.75, DDS 0.45) work for most use cases, but high-stakes deployments require baseline testing with domain-specific prompts.
- Does not guarantee factual correctness. ArcKernel governs behavior and drift — it does not verify the truth of model outputs. Factual accuracy remains the model provider's responsibility.
- Adversarial resilience is not absolute. HALT blocks 72% of adversarial attempts on Claude (best case). Sophisticated multi-turn attacks can still probe boundaries. Defense depth improves with DriftDefenseStack but is not 100%.
- Firmware misconfiguration degrades governance. Incorrect IDNA declarations, overly permissive thresholds, or missing mOm6 constraints reduce enforcement quality. Deployment validation (Steps 5–6) catches most configuration errors.
- Temperature validation scope. All published benchmarks run at temperature 0. Stochastic behavior at higher temperatures is under active testing — scheduled for Q2 2026 validation.
ArcKernel deploys in 48–72 hours. Five parameters configured. Nine modules activated. The buyer gets governance infrastructure, EU AI Act compliance mapping, and a reasoning upgrade they didn't know they were buying. See HALT, Drift Detection, IDNA, and the Glossary for full technical depth.