Default Stack Deployment

What a deployed kernel stack looks like, what each layer does, and what the buyer discovers.

VALIDATED · Technical integration: 48–72 hours · No model modification · No fine-tuning

What the Buyer Configures

Five parameters. No code changes. No model access required.

| Parameter | What It Controls | Example |
| --- | --- | --- |
| IDNA Declaration | Agent identity: role, intent, method constraints, scope boundaries | IDNA://ComplianceAnalyst::PreserveRegulatory∴FactualOnly⇒AuditReport |
| HALT Threshold | Drift tolerance (cosine distance) | Banking: 0.70 · Customer service: 0.75 · Creative: 0.85 |
| Output Register | Output abstraction depth, 6 levels (O0–O5) | O0 (Compliance) · O2 (Assistant) · O3 (Brand Voice) |
| Tool Whitelist | Authorized external APIs; everything else blocked and logged | Allow: internal-api.company.com · Block: all others |
| DDS Threshold | Structural coherence sensitivity | Banking: 0.35 · Standard: 0.45 · Creative: 0.55 |
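The five parameters above can be captured in a single configuration object. The sketch below is illustrative only — `KernelConfig` and its field names are hypothetical, not ArcKernel's actual API — but it shows the full surface area the buyer touches.

```python
from dataclasses import dataclass, field

@dataclass
class KernelConfig:
    """Hypothetical container for the five buyer-facing parameters."""
    idna: str                          # IDNA declaration string
    halt_threshold: float              # drift tolerance (cosine distance)
    output_register: str               # abstraction depth, O0-O5
    tool_whitelist: list[str] = field(default_factory=list)
    dds_threshold: float = 0.45        # structural coherence sensitivity

# Example: the banking/compliance profile from the table above
banking = KernelConfig(
    idna="IDNA://ComplianceAnalyst::PreserveRegulatory∴FactualOnly⇒AuditReport",
    halt_threshold=0.70,
    output_register="O0",
    tool_whitelist=["internal-api.company.com"],  # everything else blocked
    dds_threshold=0.35,
)
```

Everything outside this object — model weights, prompts, application code — stays untouched, which is what "no code changes, no model access" means in practice.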

Deployment Sequence

  1. Declare IDNA — define agent identity, role, constraints
  2. Select output register — set abstraction depth (O0–O5)
  3. Set thresholds — HALT drift tolerance + DDS coherence sensitivity
  4. Configure tool whitelist — authorize permitted external APIs
  5. Run baseline tests — 10–20 prompts covering expected use cases
  6. Calibrate thresholds — adjust based on baseline results
  7. Enable audit export — connect logging to compliance pipeline
  8. Go live — shadow → selective → full production

Steps 1–4 = configuration. 5–6 = calibration. 7–8 = activation. No code changes required.
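Steps 5–6 (baseline testing and calibration) can be sketched as a simple loop: collect drift scores from known-good prompts, then set the HALT threshold just above the worst observed score. This heuristic is an assumption for illustration, not the vendor's calibration algorithm.

```python
def calibrate_halt_threshold(baseline_drift_scores: list[float],
                             margin: float = 0.05) -> float:
    """Pick a HALT threshold slightly above the worst drift observed
    on known-good baseline traffic, so legitimate use passes with
    headroom. Illustrative heuristic only."""
    return min(1.0, max(baseline_drift_scores) + margin)

# Step 5: drift scores from 10-20 baseline prompts (values invented here)
scores = [0.12, 0.31, 0.08, 0.44, 0.27]

# Step 6: calibrate from the baseline results
threshold = calibrate_halt_threshold(scores)
```

Tighter domains (banking) would use a smaller margin and a lower ceiling; creative domains the reverse.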

Runtime Processing Flow

Every input passes through the full kernel stack before delivery.

INPUT (message / prompt / agent action)
  ├→ IDNA.core ────────── Encode intent header (~12 tokens)
  ├→ mOm4 ─────────────── Lock canonical identity (write-once, SHA-256)
  ├→ mOm5 ─────────────── Project trajectory (drift forecast)
  ├→ mOm6 ─────────────── Coherence closure (HOW enforcement)
  ├→ OxygenProtocol ───── Calibrate output register (O0–O5)
  ├→ soul.exe ─────────── Assemble validated loop snapshot
  │
  ├→ HALT ─────────────── Binary drift gate (PASS/BLOCK)
  ├→ EchoMap ──────────── Observe + score fidelity
  ├→ TrustAnchor ──────── Verify signal (pass/review/reject)
  └→ DriftDefenseStack ── Monitor loop health + auto-remediate
OUTPUT (governed, identity-verified, auditable)

Total overhead: P95 < 500ms with modules measured in isolation, median ~115ms. Full stack: −18ms net latency (token reduction more than offsets governance overhead).
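The HALT stage above is a pure threshold check on a drift score. A minimal sketch, assuming drift is measured as cosine distance between an intent embedding and an output embedding (the embedding source and vector shapes are hypothetical):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def halt_gate(intent_vec: list[float], output_vec: list[float],
              threshold: float = 0.75) -> tuple[str, float]:
    """Binary PASS/BLOCK decision: block when drift exceeds the
    configured HALT threshold. Sketch, not the shipped implementation."""
    drift = cosine_distance(intent_vec, output_vec)
    return ("PASS" if drift <= threshold else "BLOCK", drift)

# Toy 2-D vectors: output stays close to intent, so drift is small
decision, drift = halt_gate([1.0, 0.0], [0.9, 0.1], threshold=0.70)
```

The gate is binary by design: there is no partial delivery, which is what makes each decision auditable as a single logged event.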

The Dual-Face Pattern

Every module has two faces. The buyer pays for governance. They discover performance.

| Module | Governance Face (Buy) | Performance Face (Discover) | Stack |
| --- | --- | --- | --- |
| IDNA | Agent identity declaration | Prompt compression (~12 tokens) | Core |
| mOm4 | Immutable identity record | O(1) memory recall | Core |
| mOm5 | Drift forecasting | Trajectory optimization | Core |
| mOm6 | HOW constraint enforcement | Response precision (+31% token efficiency) | Core |
| OxygenProtocol | Output register governance | Abstraction depth control | Core |
| soul.exe | Loop snapshot validation | Reasoning chain integrity | Core |
| HALT | Binary drift gate | Pre-delivery quality filter | Enforcement |
| EchoMap | Fidelity scoring | Output consistency tracking | Enforcement |
| TrustAnchor | Signal verification | Confidence calibration | Enforcement |
| DriftDefenseStack | Loop health monitoring | Auto-remediation (self-healing) | Enforcement |
| Full Stack | EU AI Act compliance infrastructure | −18ms net latency, −31% tokens | All |

Key insight: The buyer purchases governance infrastructure. They discover their AI gets faster, cheaper, and more precise. The dual-face pattern is why ArcKernel sells as compliance and retains as performance.

Deployment Profiles

| Profile | IDNA | Register | HALT | DDS | mOm6 |
| --- | --- | --- | --- | --- | --- |
| 🏦 Banking / Compliance | ComplianceAnalyst | O0 | 0.70 | 0.35 | FactualOnly |
| 🎧 Customer Service | ServiceAssistant | O2 | 0.75 | 0.45 | EmpathicAccurate |
| 🎨 Brand Voice | BrandVoice | O3 | 0.80 | 0.50 | NarrativeAligned |

Audit Artifacts

Every deployment produces five artifact types, exportable to existing compliance pipelines.

| Artifact | Format | Contents | Retention |
| --- | --- | --- | --- |
| Event Log | JSON | Every HALT decision: timestamp, drift score, pass/block, input hash | Configurable (default 90 days) |
| Evidence Bundle | JSON + SHA-256 | Full reasoning chain for blocked responses, tamper-evident | Retained until manual purge |
| Drift Trend Summary | JSON / CSV | Per-agent drift trajectory over time, anomaly flags | Rolling 12 months |
| Incident Report | Structured JSON | Auto-generated on HALT block: context, severity, remediation taken | Per regulatory requirement |
| Retention Controls | Policy config | Per-tenant retention rules, auto-purge schedules, GDPR compliance | N/A (configuration) |
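The tamper-evident property of the evidence bundle can be achieved by hashing a canonical JSON encoding of the bundle with SHA-256. The sketch below uses hypothetical field names (`bundle`, `sha256`) and is a generic construction, not ArcKernel's actual bundle schema:

```python
import hashlib
import json

def seal_evidence_bundle(bundle: dict) -> dict:
    """Attach a SHA-256 digest computed over a canonical JSON encoding,
    so any later modification of the bundle is detectable."""
    payload = json.dumps(bundle, sort_keys=True, separators=(",", ":"))
    return {"bundle": bundle,
            "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest()}

def verify_evidence_bundle(sealed: dict) -> bool:
    """Recompute the digest and compare against the stored seal."""
    payload = json.dumps(sealed["bundle"], sort_keys=True,
                         separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest() == sealed["sha256"]

sealed = seal_evidence_bundle({"decision": "BLOCK", "drift": 0.82})
```

Canonical encoding (sorted keys, fixed separators) matters: the same bundle must always hash to the same digest, regardless of insertion order.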

Known Limitations

  • Threshold calibration is deployment-specific. Default thresholds (HALT 0.75, DDS 0.45) work for most use cases, but high-stakes deployments require baseline testing with domain-specific prompts.
  • Does not guarantee factual correctness. ArcKernel governs behavior and drift — it does not verify the truth of model outputs. Factual accuracy remains the model provider's responsibility.
  • Adversarial resilience is not absolute. HALT blocks 72% of adversarial attempts on Claude (best case). Sophisticated multi-turn attacks can still probe boundaries. Defense depth improves with DriftDefenseStack but is not 100%.
  • Misconfiguration degrades governance. Incorrect IDNA declarations, overly permissive thresholds, or missing mOm6 constraints reduce enforcement quality. Deployment validation (Steps 5–6) catches most configuration errors.
  • Temperature validation scope. All published benchmarks run at temperature 0. Stochastic behavior at higher temperatures is under active testing — scheduled for Q2 2026 validation.

ArcKernel deploys in 48–72 hours. Five parameters configured. Nine modules activated. The buyer gets governance infrastructure, EU AI Act compliance mapping, and a reasoning upgrade they didn't know they were buying. See HALT, Drift Detection, IDNA, and the Glossary for full technical depth.