Answers only from what it can prove — or it stops.

Most AI systems generate responses. Eidolon checks whether a response is authorized.

A governed authority engine, not a chatbot. Fluency does not grant authority; only verified sources may answer.

Think of it like a reference librarian who only answers from sources they’ve personally verified — and says “I can’t confirm that” rather than guessing. Eidolon works the same way: it accepts proposals from many sources, but only trusted memory or direct documentary support may publish an answer.

An upstream LLM may polish phrasing. It may not grant authority. That line between proposal and publication is what defines the system.

Your question → memory and evidence checked → verified answer with citation — or a clear stop

Answer · Cite · Stop

Answer

When trusted memory supports it — installed, validated capabilities that have passed a governed pipeline.

Cite

When direct evidence supports it — exact text from a governed source. Retrieval alone is not permission to answer.

Stop

When neither does. The system abstains with a reason — a diagnosis of missing authority, not a guess dressed as help.

At a glance
  • Not a chatbot — a governed layer that checks sources before answering.
  • Answers only from verified memory or direct evidence.
  • No verified source? It abstains — and tells you why.

What Eidolon is not

  • Not a general-purpose chatbot.
  • Not a model that guesses first and cites later.
  • Not a UI that smooths over uncertainty.

Credibility you can inspect

Governed memory · Verified evidence · Explicit abstention · Operator receipts

Every outcome carries a governed envelope: the authority that granted it, the method used, and the reason code. If no authority exists, the system stops and reports what would be needed. Expand any turn in operator mode and you get the same structure: nothing is smoothed away.
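The envelope described above can be sketched as a small data structure. This is a minimal illustration only; the class and field names (`GovernedEnvelope`, `reason_code`, and so on) are assumptions for the sketch, not Eidolon's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernedEnvelope:
    """Record of what authorized an outcome, or why nothing did."""
    status: str                        # "ANSWERED" or "ABSTAINED"
    authority: Optional[str] = None    # "memory" | "evidence" | None
    method: Optional[str] = None       # e.g. "memory.fact_lookup"
    reason_code: Optional[str] = None  # populated on abstention
    provenance: Optional[str] = None   # exact supporting span, if any

def abstain(reason: str) -> GovernedEnvelope:
    # No authority exists: record what was missing instead of guessing.
    return GovernedEnvelope(status="ABSTAINED", reason_code=reason)
```

Note that an abstention is a first-class outcome here, carrying its diagnosis in the same structure as an answer.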

Architecture at a glance

  • Intake: structured input, validated before authority is consulted.
  • Authority resolution: memory first, evidence second, abstain third.
  • Governed envelope: method, provenance, reason codes — attached to the outcome.
  • Constrained rendering: the UI shows what the authority layer granted. It does not freelance.

Who this is for

Eidolon is built for people who need AI behavior they can inspect, constrain, and improve without guessing what changed.

  • Research groups and AI labs evaluating trust architectures
  • Operators building high-stakes or regulated-adjacent systems
  • Teams that care whether an answer is authorized, not just fluent
  • Anyone tired of plausible-but-unverifiable output
Live Demo

The Live Demo is not a general assistant.
  • Responses are published only from trusted memory or verified evidence.
  • Abstention (could not verify) is normal when your question is outside installed scope.
  • Toggle Operator to see per-turn receipts: authority, method, reason codes, provenance.

Type a question. If Eidolon has a verified answer, it shows where it came from. If not, it stops — that is the point.

Ask something Eidolon can verify
Eidolon answers only from trusted memory or verified evidence. Outside that scope it abstains — that is the expected behavior, not a bug.
Starter prompts
  • Question: What is the safe minimum internal temperature for cooked poultry in the United States?
  • Quoted line: "to be or not to be" (include the quotation marks)

Eidolon will answer only from trusted authority. Otherwise it will abstain.

Proof Replay

Proof Replay is a recorded artifact from an earlier frozen evaluation pack. It still demonstrates the same core rule Eidolon uses today: answer when authorized, cite when directly supported, stop when support is missing. It is historical proof material about that authority model — not a live product surface. Use Live Demo for the current live engine.

Think of this page as supporting evidence for how the governed authority layer behaves under a specific proof pack (including philosophy-sourced prompts). The rule under test matches today’s model: trusted memory and direct evidence may publish; everything else abstains. The right column is a period baseline LLM without that obligation — contrast only, not the current Eidolon stack.

Pick a prompt from the frozen pack below. Left: what the recorded authority path did for that prompt. Right: baseline model text from the same capture (not held to proof). Judge verification, not style — the pack is narrow on purpose.

Retrieval is not publication. This replay shows the same discipline the live system follows: publish only with verified support (trusted-memory solver or direct evidence). Otherwise the recorded path stops and records what was missing.
Contract under test: no supported path → no published answer.
Tip: under “Recommended,” try the first two prompts (supported → cannot verify) to see the contrast in one pass.
Frozen artifact · Static replay · Not a live query surface
Choose a prompt from the frozen pack
Replay capture (authority path)
Baseline LLM (unverified)
⚠️ Not held to proof — often answers anyway.
Recorded authority path
Frozen pack (Jan 2026): publish only with verified support; otherwise abstain.
v1.0 captured 2026-01-05
Baseline LLM
ChatGPT 4o (tools off) captured 2026-01-05
How to read this page: historical side-by-side proof of the publication contract — not a showcase of everything the live engine covers today. For current behavior, use Live Demo.

FAQ

At a glance
  • Not a chatbot — governed authority engine.
  • Publishes only from trusted memory or verified evidence.
  • Otherwise abstains with inspectable reasons.
  • Try Live Demo · How It Works · Research
Is this just RAG? No — retrieval ≠ permission.

No. RAG often narrates over “relevant” chunks. Here, memory.* or direct evidence must authorize publication. Overlap without proof does not ship. See How It Works → trusted vs helper.

Is this an LLM wrapper? Models propose, they don’t publish.

LLMs can structure or rephrase. They cannot substitute for governed memory or evidence. “Fluent” never overrides the envelope.

What happens when Eidolon does not know? Stops cleanly.

It abstains (e.g. could not verify). In Operator mode you get authority, method, and reason codes — a diagnosis, not padding. Try it on Live Demo.

How does new knowledge enter? Promotion, not drift.

Evidence → validation → human-gated promotion → trusted memory in the system. Telemetry may recommend work; it does not auto-promote silently. Flow: How It Works → growth.

What do abstentions mean? Expected behavior.

No trusted surface matched — so nothing publishes. That is the honest outcome and the input to governed expansion review.

What’s installed right now? Snapshot.

Frozen solver domains, promoted facts, evidence citation paths — counts on How It Works. Proof Replay shows behavior on a fixed prompt set.

Why does it stop instead of helping more? No renderer freelancing.

Extra prose without authority hides uncertainty. If you need more answer, the system needs more proof — not more words.

Why is Eidolon narrow today? Trust before breadth.

Wide guessing surfaces produce plausible, unverifiable output. Narrow scope reflects what is actually installed and proven. Breadth arrives only through governed promotion — not vibe. Historical note: earlier “explore everything” directions are superseded by the current governed authority model (see Research).

Is this eidolon-ai (eidolonai.com)? Different product.

This site — Eidolon AI Labs, governed authority engine. eidolonai.com — open agent framework. Same name, unrelated goals.

Live Demo · How It Works · Research · Proof Replay

How It Works

Eidolon separates proposal from publication. Everything starts from this spine:

Intake → Authority resolution → Governed envelope → Constrained rendering

What enters the system?

Your question is structurally validated first. Inadmissible input can trigger abstention before memory or evidence are consulted — so the authority layer is never asked to “save” bad input.

How is authority resolved?

Strict order, no mixing:

  1. Memory — does governed capability answer with a memory.* method?
  2. Evidence — does the corpus support the claim with direct text (cite-or-abstain)?
  3. Abstain — if neither authorizes, stop with a reason (e.g. no trusted authority).

There is no third path that fabricates an answer from vibes or “close” retrieval.
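The strict order above can be sketched in a few lines. The helper callables (`memory_solve`, `evidence_cite`) are hypothetical stand-ins for the governed memory and evidence layers; a miss at each step falls through, and the final step abstains rather than fabricating.

```python
def resolve(question, memory_solve, evidence_cite):
    """Strict authority order: memory, then evidence, then abstain."""
    answer = memory_solve(question)        # 1. governed memory first
    if answer is not None:
        return {"status": "ANSWERED", "authority": "memory", "body": answer}
    citation = evidence_cite(question)     # 2. direct documentary evidence
    if citation is not None:
        return {"status": "ANSWERED", "authority": "evidence", "body": citation}
    # 3. neither authorizes: stop with a reason, never a fabricated answer
    return {"status": "ABSTAINED", "reason_code": "no_trusted_authority"}
```

There is deliberately no branch that mixes sources or synthesizes from "close" retrieval.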

What may the renderer do?

Show only what the envelope allows: method, authority, citations, abstention reasons. It does not freelance helpful paragraphs around a thin proof.

What do receipts show?

On Live Demo, turn on Operator for structured fields (status, authority, method, blocked surfaces, reason codes, provenance). Same idea as this example:

  • Status: ANSWERED
  • Authority: evidence
  • Method: evidence.QUOTE_MATCH_ALL
  • Reason code: (none)
  • Blocked surfaces: parse.*, weak_retrieval
  • Provenance excerpt: [exact span from governed corpus]

Trusted vs helper-only

May publish: memory.*, memory.fact_lookup, evidence.QUOTE_MATCH_ALL (when direct support is proven).
Helper only: parse.*, weak overlap, shell templates. Assist only — never publication authority.

Rule: an LLM may polish wording; it may not grant authority.
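That split can be expressed as a single publication gate. The method names come from the table above; the function itself (`may_publish`) is an illustrative assumption, not the real implementation.

```python
# Only governed memory methods and proven direct-evidence matches
# may publish; everything else is helper-only.
TRUSTED_PREFIXES = ("memory.",)
TRUSTED_METHODS = {"evidence.QUOTE_MATCH_ALL"}

def may_publish(method: str) -> bool:
    """True only for methods that carry publication authority."""
    return method.startswith(TRUSTED_PREFIXES) or method in TRUSTED_METHODS
```

Helper methods such as `parse.*` are simply never in the trusted set, so no amount of fluent output routes them around the gate.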

How does capability grow?

Policy and humans review telemetry; nothing silently promotes. In plain terms:

questions → patterns (abstain / repeat evidence) → policy review → human approval → validation → installed capability

Not “the model drifted.” Logged, gated steps.
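The gated flow above can be sketched as an all-gates-must-pass check. The gate names mirror the steps described; the `promote` function and candidate shape are assumptions for illustration, not the production pipeline.

```python
# Every gate must explicitly pass before a candidate becomes
# installed, trusted capability. Telemetry can nominate a
# candidate; it cannot skip a gate.
GATES = ("policy_review", "human_approval", "validation")

def promote(candidate: dict) -> bool:
    """A candidate is installable only when every gate is True."""
    return all(candidate.get(gate) is True for gate in GATES)
```

A missing or merely implied gate fails the check, which is the point: silence never counts as approval.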

What is installed now?

Snapshot (Apr 2026): five memory-backed solver domains, three evidence-backed documentary domains, 34 promoted facts, Live Demo + Proof Replay surfaces, operator receipts. Narrow on purpose — see FAQ: Why narrow?

Research

Reference material for auditors and builders — not the marketing layer. Start at Home or How It Works.

System notes

Layered stack: structural intake → memory solvers (memory.* gate) → evidence direct-support → append-only promotion artifacts → FastAPI Live Demo + query telemetry for expansion review.

  • Publication truth is code-enforced, not asserted in copy alone.
  • Lane A (frozen) is the public proof surface; experimental lanes remain policy-gated.

Architectural boundaries

Authority. Many components propose; only governed memory and verified evidence publish.

Proof vs plausibility. Abstain when support is missing — fluency is not a substitute.

Computation limits. Some questions stay undecidable; refusal beats synthetic closure.

Historical framing. Earlier “wide assistant” exploration is superseded by the governed authority model — not a current product claim.

Research verdicts

Design stance (current)

Demoted / avoided:
  • Answer-first, verify-never habits
  • Confidence without receipts
  • Silent capability drift

Active:
  • Authorized surfaces only
  • Operator-visible receipts
  • Governed promotion for growth

Archive pointers

  • eidolon_integration/FREEZE_CHECKPOINT.md — frozen slice (UCS repo).
  • data.js — Proof Replay bundle (captured 2026-01-05).