Temporal infrastructure for causal AI

Traditional AI stacks bolt together 8–12 tools and hope embeddings cluster. TNT provides a single temporal substrate where agents, data, decisions, and causality coexist natively.

Sophisticated ingestion · Simple interrogation · Deterministic replay

The foundation everything else builds on.

Four epistemological layers

Each layer has a distinct epistemological status. Layers 0–2 are deterministic and auditable. Layer 3 is creative and interpretive. The architecture enforces this separation, so you always know whether you're looking at fact or hypothesis.

Layer 0 · Temporal

Temporal Substrate

Time as a foundational axis, not metadata. Point-in-time state queries are primitive operations. Everything else is a function of time, not a spatial graph with timestamps attached.

Studios · Channels · Realities · TimeSeries · QuantumFoam · Branching
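
None of TNT's kernel internals appear in this overview, so the sketch below is only an illustration of the principle, in Python with invented names: every write becomes an immutable timestamped event, and a point-in-time read is a primitive binary-search lookup rather than a scan over mutable state.

```python
import bisect
from collections import defaultdict

class TemporalStore:
    """Illustrative only: time as the primary axis of storage."""

    def __init__(self):
        self._times = defaultdict(list)   # key -> sorted write timestamps
        self._values = defaultdict(list)  # key -> values aligned with _times

    def write(self, key, t, value):
        # Writes are append-style events; nothing is ever overwritten.
        i = bisect.bisect_right(self._times[key], t)
        self._times[key].insert(i, t)
        self._values[key].insert(i, value)

    def as_of(self, key, t):
        # "What was the state at time t?" is a single binary search.
        i = bisect.bisect_right(self._times[key], t)
        return self._values[key][i - 1] if i else None

store = TemporalStore()
store.write("invoice:42/status", t=10, value="received")
store.write("invoice:42/status", t=25, value="approved")
print(store.as_of("invoice:42/status", 12))  # received
print(store.as_of("invoice:42/status", 30))  # approved
```

Because history is never destroyed, the higher layers (causal tracing, branching, replay) can be built as functions over the same event log.
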
Layer 1 · Data

Objective Data Substrate

What exists, when it arrived, and to whom it was addressed. Documents, messages, records — placed on the timeline as they arrived, classified against 42 Dewey taxonomies at ingestion time.

Dewey Mesh · Document Ingestion · Entity-Bounded Context · 4,565+ Classifications
Layer 2 · Causal

Objective Causal Substrate

How things connect. What caused what, what references what, what flows where. Follow the money, trace the decision chain, map dependencies. All deterministic, all auditable.

Causal Tracing · Financial Plumbing · Dependency Mapping · Payload Adapters
Layer 3 · Experiential

Experiential Substrate

What it was like. Affect, judgment, narrative significance. Built on Layers 0–2 but clearly labelled as interpretive. Enables characterful simulation, training scenarios, and perspective reconstruction.

Signal Adapters · Persona Definitions · Sentiment Channels · LLM Use-Case Routing

Every RAG pattern gets the architecture backwards.

Story RAG: the inversion

The industry adds complexity at query time to compensate for naive ingestion. TNT inverts this: sophisticated ingestion enables simple interrogation. We call this Story RAG.

01

Naive RAG

Query → Vector Search → LLM → Response
Embed and pray. No temporal awareness. (Layer 1)
02

Hybrid Search + Reranking

Query → BM25 + Semantic → Reranker
Better retrieval, still no context of who or when. (Layer 1)
03

Sub-querying + Routing

Query → Decompose → Multi-Source
Smarter queries, still dumb data. (Layer 1)
04

Graph RAG

Query → KG + Vectors → LLM
Closest to TNT. But graphs are inferred, not declared. (Layers 1–2, partial)
05

Corrective RAG

Query → Retrieve → Evaluate → Re-retrieve
Patches bad retrieval with retry loops. (Layer 1)
06

Multi-Agent RAG

Query → Planner → Agents → Judge
Maximum query-side complexity. No structured substrate. (Layer 1)
Pattern 07 · Story RAG · Temporal Causal Architecture

Sophisticated Ingestion

80%

Complexity lives here. Documents understood, classified, and connected before any query.

📄 Entity-bounded interpretation
🏷️ Dewey mesh classification
⏱️ Temporal indexing
🔗 Causal connection tracing
📊 Epistemological layering

Structured Substrate


What traditional RAG doesn't have: a persistent, queryable, temporally organised reality.

L0 Temporal primitives
L1 Data substrate
L2 Causal substrate
L3 Experiential substrate
🔀 Multiversal branching

Simple Interrogation

20%

Queries are cheap because the hard work was done at ingestion.

"What did X know at time T?"
"How did we get here?"
"What if Y had happened?"
"Show me the money trail"
"Replay the decision"
Traditional RAG: simple ingestion → complex queries
TNT Story RAG: complex ingestion → simple queries

Framework soup versus unified temporal infrastructure.

Traditional agentic stack vs TNT

Traditional agentic systems bolt together 8–12 independent tools with custom glue code. TNT collapses that stack into one temporal substrate where agents, data, decisions, and causality coexist natively.

Traditional: 8–12 tools, glued together

Every layer is a different vendor, different API, different data model. You own the integration debt.

Orchestration
Agent Framework
Manages agent loops, tool calls, retries. No memory between runs.
LangChain · CrewAI · AutoGen
· · · glue · · ·
LLM Layer
Model Provider
Stateless API calls. Vendor lock-in per model.
OpenAI · Anthropic · Gemini
· · · glue · · ·
Retrieval
Vector Database
Embeddings in, similarity out. No temporal indexing.
Pinecone · Chroma · Weaviate
· · · glue · · ·
Memory
Key-Value / Chat History
Conversation buffer. No structured state.
Redis · Mem0 · Zep
· · · glue · · ·
Knowledge
Graph Database
Manually constructed. Inferred relationships.
Neo4j · GraphRAG
· · · glue · · ·
State / Workflow
Task Queue + Database
No temporal branching. No replay.
Celery · Temporal.io · Postgres
· · · glue · · ·
Observability
Logging + Tracing
After-the-fact reconstruction only.
LangSmith · Datadog

TNT: one substrate, four layers

Every capability emerges from the same temporal primitives. No glue. Configuration, not code.

Application
TOML Presets
Entire applications defined in configuration. No framework code.
Vultures' Vault · Visa Modeller · Org Simulator
Layer 3 · Experiential
Signal Adapters + Personas
Interpretive inference. Affect modelling. Clearly labelled as hypothesis.
LLM routing · Personas · Sentiment
Layer 2 · Causal
Payload Adapters + Decision Chains
Deterministic cause and effect. Audit-grade causal graphs.
Causal tracing · Financial plumbing
Layer 1 · Data
Ingestion + Dewey Classification
42 taxonomies, 4,565+ entries. Entity-bounded. Classified at ingestion.
Dewey mesh · Doc ingestion
Layer 0 · Temporal Kernel
Studios, Channels, Realities, QuantumFoam
Time as foundational axis. Point-in-time state. Branching realities. Deterministic replay.
Studios · Channels · Realities · TSQL
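
The preset schema itself is not shown on this page, so the fragment below is purely hypothetical (every key name is invented) and only illustrates what "configuration, not code" means in practice: an application declared as studios, channels, and personas over the four layers.

```toml
# Hypothetical preset. Key names are illustrative, not TNT's actual schema.
[studio]
name    = "invoice-forensics"
reality = "main"                 # the timeline this application runs in

[[channel]]
source = "erp.invoices"
layer  = 1                       # L1: data substrate
dewey  = ["finance.payables"]    # classified at ingestion, not at query time

[[persona]]
role  = "auditor"
layer = 3                        # L3: interpretive, labelled as hypothesis
```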

Why traditional stacks break

🔌
Traditional
Integration Debt
Every tool speaks a different language. Glue code rots. Upgrades cascade breakage.
🧠
Traditional
Amnesia by Design
Agents forget everything between runs. No agent can answer "what did I know at time T?"
🎲
Traditional
No Replay, No Audit
Can't reproduce a past decision. Observability is after-the-fact log scraping.
👤
Traditional
Agents ≠ Employees
No role identity, no organisational position, no typed relationships with other agents or humans.
TNT
Zero Integration
One substrate. Studios, Channels, Realities are kernel primitives. Nothing to glue.
🕐
TNT
Total Temporal Memory
Every entity accumulates state across time. "What did agent X know at 3pm?" is trivial.
🔁
TNT
Deterministic Replay
Same inputs, provably same outputs. Every decision traceable. Audit-grade by default.
🏢
TNT
Uniform Employee Model
Humans and AI share identical abstractions. Roles are first-class entities.
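
The replay claim above reduces to a discipline most stacks never enforce: every decision is a pure function of a recorded event log, so re-running the log reproduces the run exactly. A minimal Python illustration of that principle (the budget rule and all names here are invented for the example):

```python
import hashlib
import json

def decide(event, state):
    """A pure decision function: output depends only on its inputs.
    Here: approve an invoice iff the running total stays under budget."""
    total = state["spent"] + event["amount"]
    approved = total <= state["budget"]
    new_state = {**state, "spent": total if approved else state["spent"]}
    return approved, new_state

def replay(events, state):
    """Re-running the same event log from the same initial state
    reproduces every decision; a digest over the run proves it."""
    decisions = []
    for e in events:
        approved, state = decide(e, state)
        decisions.append((e["id"], approved))
    digest = hashlib.sha256(json.dumps(decisions).encode()).hexdigest()
    return decisions, digest

log = [{"id": 1, "amount": 400}, {"id": 2, "amount": 500}, {"id": 3, "amount": 200}]
run1 = replay(log, {"budget": 1000, "spent": 0})
run2 = replay(log, {"budget": 1000, "spent": 0})
assert run1 == run2  # same inputs, same decisions, same digest
```

Decisions that reach outside the log (wall-clock time, network calls, mutable globals) break this guarantee, which is why a substrate that records inputs as events is a precondition for audit-grade replay.
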

You don't have to rip out your existing stack.

Shadow mode deployment

Shadow mode runs alongside your current systems, ingests the same data, and builds an auditable evidence trail of what TNT would have done differently. Zero risk. SQL-queryable proof.

Day 1
Shadow Feed
Ingest data from your existing systems. No changes to production.
CREATE SHADOW FEED invoices FROM EXTERNAL 'erp.invoices';
Day 2–90
Parallel Run
TNT processes everything your legacy system sees. Compares every decision.
SELECT * FROM SHADOW_COMPARE('invoices', 'legacy_approvals', 'approval');
Day 91
Evidence Report
SQL-queryable proof of every divergence, with full causal explanation.
SELECT divergence, tnt_reasoning FROM SHADOW invoices WHERE tnt_correct = true;
When Ready
Promote
Gradual cutover. Promote feeds as confidence builds.
PROMOTE SHADOW FEED invoices TO PRODUCTION WITH VALIDATION (min_match := 0.95);

Head-to-head

Capability | Traditional Agentic Stack | TNT Temporal Stack
Temporal queries | Not possible — data is atemporal | Native — "what did X know at time T?"
Causal tracing | Not tracked — log scraping after the fact | Layer 2 — every connection explicit and auditable
Deterministic replay | Not possible — stateless by design | Native — same inputs, provably same outputs
Entity-bounded context | All context is generic | Documents interpreted per role/entity
Fact vs hypothesis | Undifferentiated outputs | Epistemological layers — auditable separation
Counterfactuals | Not possible — single timeline | Branching Realities — fork, compare, merge
Human/AI parity | Completely separate systems | Uniform employee model — identical abstractions
Configuration | Python/JS code for every integration | TOML presets — applications without code
Enterprise deployment | Big-bang migration or nothing | Shadow mode — 90 days of evidence, gradual cutover
Scaling cost | Cost grows with query volume | Cost amortised across all future queries

Whether you want forensics, infrastructure, or both.

Let's talk

Technical deep-dives, partnership conversations, or a specific question you need answered.

[email protected]