This page is a public preview of the conceptual architecture behind Golem, a verified geometric lattice for inference systems, and Golem Physics, its first working domain.
The mechanism is simple: information becomes claims, claims become coordinates, and geometry decides whether the system can crystallise, reject, preserve tension, keep researching, or remain silent.
"A claim enters as text. It lives as geometry. Speech is only one possible state."
Knowledge is modelled as a constraint-native truth lattice — a geometric structure where every verified claim holds position, provenance, domain placement, status, tension state, volatility, immutability, and temporal history.
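As a concrete sketch, the fields listed above could be carried by a node record like the one below. Every name, type, and default here is an illustrative assumption, not Golem's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    """Possible lattice states for a claim (names assumed, not Golem's)."""
    PROPOSED = auto()
    CRYSTALLISED = auto()
    REJECTED = auto()
    TENSION = auto()
    RESEARCHING = auto()
    SILENT = auto()

@dataclass
class LatticeNode:
    claim_text: str                    # the claim as it entered, as text
    position: tuple[float, ...]        # coordinates in the lattice manifold
    provenance: list[str]              # sources the claim derives from
    domain: str                        # domain placement, e.g. "physics"
    status: Status = Status.PROPOSED
    tension: float = 0.0               # unresolved conflict with neighbours
    volatility: float = 0.0            # how often the claim's status shifts
    immutable: bool = False            # True only for bedrock anchors
    history: list[tuple[str, Status]] = field(default_factory=list)  # temporal history as (timestamp, status) pairs
```

A node enters as `PROPOSED` text and only later acquires a geometric status, matching the "speech is only one possible state" framing above.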
Resonance measures how well a candidate claim aligns with the existing lattice manifold. A claim can crystallise, remain proposed, be rejected, preserve tension, trigger research, or fall silent.
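One way to picture resonance is as directional alignment between a candidate's coordinates and those of its lattice neighbours. The sketch below uses cosine similarity and hand-picked thresholds purely for illustration; Golem's actual measure and parameters are not public.

```python
import math

def cosine(a, b):
    """Cosine similarity between two coordinate vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def resolve(candidate, neighbours,
            crystallise_at=0.85, reject_below=-0.5, tension_spread=0.6):
    """Map a candidate's resonance against its neighbours to one of the
    lattice outcomes. All thresholds are illustrative placeholders."""
    if not neighbours:
        return "silent"  # nothing to resonate against: remain silent
    scores = [cosine(candidate, n) for n in neighbours]
    mean = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    if spread > tension_spread and mean > 0:
        return "tension"        # supported and opposed at once: preserve tension
    if mean >= crystallise_at:
        return "crystallise"    # strong alignment with the manifold
    if mean <= reject_below:
        return "reject"         # strong misalignment
    return "research" if mean > 0 else "proposed"
```

The point of the sketch is the output space: the same scoring step can end in crystallisation, rejection, preserved tension, further research, or silence, rather than forcing a binary verdict.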
The lattice is seeded with immutable anchors: physical constants, mathematical identities, logical primitives, thermodynamic laws, and epistemic rules. Every node must be evaluated against this bedrock and its neighbours.
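The seeding step above can be sketched as follows. The anchor names, wordings, and the conflict check are all hypothetical examples chosen to match the five categories listed, not Golem's real seed set.

```python
# Hypothetical bedrock seed, one example per category named above.
ANCHORS = {
    "speed_of_light": ("c = 299,792,458 m/s", "physical constant"),
    "euler_identity": ("e^(i*pi) + 1 = 0", "mathematical identity"),
    "non_contradiction": ("not (P and not P)", "logical primitive"),
    "second_law": ("entropy of an isolated system never decreases", "thermodynamic law"),
    "provenance_required": ("every claim must cite its sources", "epistemic rule"),
}

def seed_lattice():
    """Build the initial lattice: every anchor enters as an immutable node."""
    return {
        name: {"claim": claim, "category": category, "immutable": True}
        for name, (claim, category) in ANCHORS.items()
    }

def passes_bedrock(conflicting_anchor_names, lattice):
    """A candidate fails outright if it conflicts with any immutable anchor."""
    return not any(
        lattice[name]["immutable"]
        for name in conflicting_anchor_names
        if name in lattice
    )
```

Because anchors are immutable, conflict with bedrock is a hard rejection; only conflicts among ordinary nodes can persist as preserved tension.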
Golem runs through coupled loops that maintain state, regulate thresholds, ingest new material, guide learning, and perform dream-cycle work. The public claim is coherence and epistemic discipline, not proof of consciousness.
Each dream cycle is the organism's primary cognitive work. It fires from three independent triggers: verification queue overflow, tension accumulation, or energy-high state. It runs entirely autonomously.
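The three-trigger firing rule can be written down directly. The threshold values below are placeholders, not Golem's tuned parameters; only the OR-of-three structure comes from the text.

```python
def should_dream(queue_len, tension, energy,
                 queue_cap=100, tension_cap=0.8, energy_high=0.9):
    """Fire a dream cycle if ANY of the three independent triggers is met.
    Returns (fired, names_of_triggers_that_fired)."""
    triggers = {
        "verification_queue_overflow": queue_len > queue_cap,
        "tension_accumulation": tension >= tension_cap,
        "energy_high": energy >= energy_high,
    }
    fired = [name for name, hit in triggers.items() if hit]
    return bool(fired), fired
```

Since the triggers are independent, any one of them is sufficient; no external operator is consulted, matching the "entirely autonomous" claim.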
Every interface is purpose-built for a specific cognitive function. Together they form an inspectable epistemic workstation.
The browser extension brings Golem's epistemic filter to every page you read — AI chat responses, arXiv papers, Wikipedia articles. Sentence-by-sentence verification without leaving the page.
The Oracle interface is the bounded speech surface over the lattice. Treat the endpoints below as architecture documentation for reviewers unless a fresh local run confirms the service is online for a demo.
Python 3.11+ required. 8 GB RAM minimum (16 GB recommended). Ollama for local embeddings, OpenRouter for LLM access.
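A minimal preflight sketch for the stated prerequisites. It checks the Python version and looks for the standard `ollama` CLI on the PATH; OpenRouter access is only verifiable at runtime, so it is not checked here. The function name and messages are illustrative.

```python
import shutil
import sys

def preflight(min_python=(3, 11)):
    """Check the documented prerequisites; return a list of problems found."""
    problems = []
    if sys.version_info < min_python:
        problems.append(f"Python {min_python[0]}.{min_python[1]}+ required")
    if shutil.which("ollama") is None:
        problems.append("ollama not found on PATH (needed for local embeddings)")
    return problems
```

An empty list means the checkable requirements are satisfied; RAM and OpenRouter credentials still need manual verification.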