The Transition

AI inference is moving from experimental systems to regulated infrastructure. This transition demands a shift from “trust me” narratives to cryptographic proof.

Then: Non-Deterministic Systems

Traditional inference runtimes treat AI as experimental: inputs produce outputs, but the internal state remains opaque.

This approach works for research demonstrations. It fails in production environments where reliability and accountability are mandated.

Now: Deterministic Runtimes

Deterministic inference replaces probabilistic behavior with strict guarantees: identical inputs must yield identical, verifiable outputs.

The Engineering Shift

Determinism requires runtime-level enforcement, not just model-level claims:

Canonical Serialization

Inputs must serialize to identical byte sequences. No whitespace variance. No key-order ambiguity.
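A minimal sketch of canonical serialization in Python, assuming JSON-shaped inputs (the note does not specify the actual encoding AdapterOS uses):

```python
import json

def canonical_bytes(payload: dict) -> bytes:
    """Serialize to a single canonical byte sequence:
    sorted keys, no whitespace, escaped non-ASCII."""
    return json.dumps(
        payload,
        sort_keys=True,          # removes key-order ambiguity
        separators=(",", ":"),   # removes whitespace variance
        ensure_ascii=True,       # one escape form for non-ASCII text
    ).encode("utf-8")

# Two dicts with different key order serialize to identical bytes.
a = canonical_bytes({"model": "m1", "prompt": "hi"})
b = canonical_bytes({"prompt": "hi", "model": "m1"})
assert a == b
```

Any serialization with the same properties (stable field order, fixed whitespace, fixed escaping) works; the point is that hashing downstream operates on one unambiguous byte sequence.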

Fixed-Point Arithmetic

Floating-point drift breaks reproducibility across compilers and hardware. Q15-style fixed-point arithmetic keeps results bit-identical wherever integer arithmetic is exact.
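A sketch of Q15 arithmetic (signed 16-bit values with 15 fractional bits); the quantization and multiply helpers below are illustrative, not the runtime's actual kernels:

```python
Q15_ONE = 1 << 15  # 1.0 in Q15: 15 fractional bits

def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to a Q15 integer, saturating at the range edges."""
    return max(-Q15_ONE, min(Q15_ONE - 1, round(x * Q15_ONE)))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values; the raw product has 30 fractional
    bits, so shift right by 15 to return to Q15."""
    return (a * b) >> 15

# 0.5 * 0.5 == 0.25, exactly representable in Q15.
half = to_q15(0.5)
assert q15_mul(half, half) == to_q15(0.25)
```

Because every operation is integer arithmetic with a fixed shift, the same inputs produce the same bits on any platform, which is the reproducibility property the runtime needs.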

Seed Derivation

HKDF-SHA256 derives execution seeds from input digests. Same input → same seed → same output.
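The derivation can be sketched with a stdlib-only HKDF-SHA256 (RFC 5869 extract-then-expand); the `"exec-seed"` info label is a hypothetical placeholder, not a documented AdapterOS constant:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"", length: int = 32) -> bytes:
    """RFC 5869 HKDF with SHA-256: extract a pseudorandom key, then expand it."""
    salt = salt or b"\x00" * 32
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()       # extract
    okm, block = b"", b""
    for i in range(-(-length // 32)):                        # ceil(length / 32) blocks
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Same input digest -> same execution seed.
digest = hashlib.sha256(b'{"prompt":"hi"}').digest()
seed = hkdf_sha256(digest, info=b"exec-seed")
assert seed == hkdf_sha256(digest, info=b"exec-seed")
assert len(seed) == 32
```

Feeding the seed into a deterministic PRNG (or any seeded sampler) completes the chain: same input → same digest → same seed → same output.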

Receipt Generation

BLAKE3 hashes bind inputs, outputs, and routing decisions into tamper-evident execution receipts.
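The binding can be sketched as a single keyed-free hash over length-prefixed fields. BLAKE3 is not in the Python standard library, so this sketch substitutes BLAKE2b; the field set and layout are illustrative, not the runtime's actual receipt format:

```python
import hashlib

def receipt(input_bytes: bytes, output_bytes: bytes, route: str) -> bytes:
    """Bind input, output, and routing decision into one tamper-evident digest.
    Length-prefixing each field prevents field-boundary ambiguity."""
    h = hashlib.blake2b(digest_size=32)
    for field in (input_bytes, output_bytes, route.encode("utf-8")):
        h.update(len(field).to_bytes(8, "big"))
        h.update(field)
    return h.digest()

r1 = receipt(b"in", b"out", "gpu-0")
# Recomputing over the same triple reproduces the receipt;
# tampering with any field changes it.
assert r1 == receipt(b"in", b"out", "gpu-0")
assert r1 != receipt(b"in", b"OUT", "gpu-0")
```

An auditor holding the input, output, and routing record can recompute the digest and compare; a mismatch reveals tampering with any of the three.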

The Evidence Requirement

Regulated deployments demand proof, not promises: reproducible outputs and auditable execution records.

Deterministic runtimes provide this evidence. Non-deterministic systems do not.

AdapterOS

AdapterOS implements these principles as a production runtime.

A patent application has been filed and is under review; it is not an issued patent.

This is a canonical research note. For an interactive visualization, see ai.jkca.me/then-now.