Founder, Logarchéon

William Chuang

Architect of interpretable AI systems · Researcher in secure computation & symbolic dynamics

Logarchéon — geometry-native recursive agent lab

Logarchéon Inc. is a one-human C-Corporation structured as an IP-first research lab. I build and direct recursive teams of AI agents that conduct secure, geometry-native computation—executing proofs, simulations, refinements, and technical writing. Direction, priors, and standards come from decades of human study across physics, mathematics, symbolic systems, and secure design.

This is a research stack, not a product company. It is recursive, local, interpretable, and founder-shaped. The agents do the work; I determine what work matters. Same models ≠ same results—because the judgment layer is not downloadable.

Recursive agent stack
  • Human priors → agent swarm. Geometry/AI/symbolic logic shape the hypothesis space; agents explore it in parallel—each task framed, bounded, and testable.
  • Agents talk to agents. Demos, derivations, and results are passed, critiqued, and improved across agent layers—until convergence or rejection.
  • Recursive loops, not pipelines. Each AI worker can instantiate others, forming self-refining trees of code, reasoning, and simulation. I remain the source of taste and direction.

Why it matters: This isn’t “agent-first.” It’s recursive, founder-calibrated intelligence. The same agents in other hands won’t produce the same frontier results.

Guiding commitments

Two mottos I try to live
  • tuitio fidei — the defense and cultivation of the faith through study, prayer, and witness.
  • obsequium pauperum — service to the poor and the sick through concrete, steady works of mercy.

Orientation: These principles shape how I design systems: rigor as fidelity; security as protection of human dignity; learning as formation under constraint.

Institutional model — recursive IP lab, C-Corp

Logarchéon Inc. is a C-Corporation operated by a single human founder with a recursive stack of autonomous AI agents—each trained, directed, and coordinated to execute tightly scoped research and engineering tasks. Wherever this site says “we” or “our”, it means the founder + agents working in unison to produce original, defensible intellectual property.

Summary: One founder + recursive AI agents generating proof-backed IP in geometry, symbolic reasoning, and secure computation. Partners bring scale; Logarchéon provides the design.

One-liner

I build interpretable, certifiable transformer systems that train faster, edit safely, and can run inside secure, geometry-native cryptographic wrappers.

Goal: rigor + practice. Each idea comes with proofs or certificates, plus runnable prototypes (see GitHub) and small-scale demos.

What I work on

CEAS

a control-theoretic attention scaling method (β as inverse temperature) to cut training steps and improve stability.

Finite-Machine Lift (DFA/PDN)

decompose trained transformers into cycles and transients for traceable, symbolic reasoning and safe edit-without-retrain.

MIA (GRAIL-derived)

metric-invariant architecture built on invariants of groups (not equivariance). Inputs, machine state, and outputs move together (diagonal action), so behavior is preserved while internals vary—enabling twin deployments.

Is new hardware required? Usually not. Software blocks compute invariants. If acceleration is desired later, FPGAs and mature-node chips implement these features efficiently.

GRAIL

algebraic/geometry-native secure execution for cryptomorphic twins and invariant-first computation; pairs naturally with MIA at the system/ISA layer.

Λ-Stack

end-to-end architecture combining CEAS + finite-lift + MIA/GRAIL for interpretable, hardened models and systems.

Overview

I design at the intersection of geometry, control, and security. Recent work treats attention temperature β as a controllable parameter; lifts trained models into finite operators for certified edit-without-retrain; and develops MIA (GRAIL-derived), a metric-invariant architecture where invariants of groups drive computation. Inputs, machine state, and outputs transform together (diagonal action), preserving behavior while enabling cryptomorphic twin deployments. Adoption starts in AI (LLMs/transformers/NNs) and extends to the ISA/system layer without new hardware; software shims compute invariants today, with optional FPGA or older-node acceleration later.

Three pillars

1) Efficiency (CEAS)

Treat attention scaling as β. Early, budgeted search; clamp in a corridor; monitor entropy/energy. Faster steps-to-target and improved numerical behavior.

2) Interpretability (Finite-Machine Lift)

Lift closed-loop behavior to P = D + N (cycles + transients). Trace reasoning; detect degeneracy; certify edit-without-retrain.

3) Security & Substrate Portability (GRAIL → MIA)

Invariant-first compute with diagonal transport yields twin deployments and drop-in adoption on existing machines. Most use-cases do not require sub-100 nm fabs; FPGAs and 28–180 nm nodes suffice for acceleration.

Featured frameworks

CEAS — Critical Entropy Attention System

  • What: A β-thermostat for attention with a target entropy corridor and a guarded distance from the pseudo-critical point.
  • Why: The textbook 1/√d_k is only an initialization convenience; heads, layers, and datasets drift, so fixed scaling becomes brittle.
  • How: Initialize with \(\displaystyle \beta^\star=\frac{1}{\sigma_{qk}}\sqrt{2\ln N_{\mathrm{eff}}}\), then apply a one-step (Newton/PI) update on attention entropy toward a target band, enforce a small gap \(u=\tfrac{|\beta-\beta_c|}{\beta_c}\ge u_{\min}\) to avoid critical slowing down, and use a lightweight gate \(T=\beta\,\sigma_{qk}\sqrt{2\ln N_{\mathrm{eff}}}\) to skip low-information tokens/heads with safety floors/back-off.
  • Artifacts: β initializer + controller code, entropy/variance monitors, gating masks, proofs/derivations, and small-task demos.
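As a rough illustration, the initializer and one-step entropy update might look like the following NumPy toy. This is my simplified stand-in, not the released CEAS controller: the proportional gain `lr`, the entropy target, and the clamping policy are assumed placeholders.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def beta_init(sigma_qk, n_eff):
    """Closed-form initializer: beta* = sqrt(2 ln N_eff) / sigma_qk."""
    return np.sqrt(2.0 * np.log(n_eff)) / sigma_qk

def attention_entropy(scores, beta):
    """Shannon entropy of the attention distribution at inverse temperature beta."""
    p = softmax(beta * scores)
    return -np.sum(p * np.log(p + 1e-12))

def beta_step(scores, beta, h_target, lr=0.5, u_min=0.05, beta_c=None):
    """One proportional update of beta toward a target attention entropy,
    optionally keeping a relative gap u >= u_min from a pseudo-critical beta_c."""
    h = attention_entropy(scores, beta)
    # raising beta sharpens attention (lower entropy), so push beta up
    # when measured entropy sits above the target
    beta_new = beta * (1.0 + lr * (h - h_target))
    if beta_c is not None:
        gap = abs(beta_new - beta_c) / beta_c   # u = |beta - beta_c| / beta_c
        if gap < u_min:
            beta_new = beta_c * (1 + u_min) if beta_new >= beta_c else beta_c * (1 - u_min)
    return beta_new
```

A PI variant would add an accumulated-error term to the update; the one-step form above keeps the corridor logic visible.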

DFA / Finite-Lift — Decompose for traceability

  • What: Determinize rollout → lift to P → split P = D + N → build cycle projectors.
  • Why: Produce symbolic traces and stable subspaces for safe, local edits.
  • How: Runtime sentinels (O(1) membership), systole/holonomy monitors, and certificates reporting coverage/time/memory.
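A minimal sketch of the P = D + N split, assuming the lifted operator is the 0/1 transition matrix of a deterministic state map (the sentinel, monitor, and certificate machinery is omitted):

```python
import numpy as np

def find_recurrent(f, n):
    """States of a deterministic map f: {0..n-1} -> {0..n-1} that lie on cycles."""
    recurrent = set()
    for s in range(n):
        seen = {}
        x = s
        for t in range(n + 1):          # pigeonhole: a repeat occurs within n+1 steps
            if x in seen:
                orbit = list(seen)      # states in visit order
                recurrent.update(orbit[seen[x]:])  # from first visit of x onward = cycle
                break
            seen[x] = t
            x = f[x]
    return recurrent

def lift_split(f):
    """Lift f to its 0/1 transition matrix P and split P = D + N:
    D acts as a permutation on the cycles, N carries the transients."""
    n = len(f)
    P = np.zeros((n, n), dtype=int)
    for i, j in enumerate(f):
        P[i, j] = 1
    rec = find_recurrent(f, n)
    D = np.zeros_like(P)
    for i in rec:
        D[i, f[i]] = 1
    N = P - D                            # nilpotent: transient tails are finite
    return P, D, N
```

For example, `f = [1, 2, 0, 0]` has a 3-cycle on states 0–2 and one transient state 3; `N` then has a single entry and `N @ N == 0`, which is the kind of property a certificate can report.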

GRAIL — geometry-native secure execution (research)

  • What: Algebraic/geometry-native encodings intended to keep parameters/activations opaque to untrusted CPUs.
  • Why: Explore trustless execution paths outside TEEs and without FHE-class slowdowns.
  • Status: Theory + prototypes; seeking independent benchmarking and review. Performance claims are workload-dependent and provided with caveats.
  • Twin models: Generate cryptomorphic variants (weight-distinct, function-equivalent) for per-user instances and anti-fingerprinting.

Λ-Stack — The integrated system

CEAS (β-control) + finite lift (trace) + GRAIL + optional dual-lock security (GRAIL plus MIA) = an interpretable, hardened transformer. Designed to be edit-ready and auditable.

MIA — Metric-Invariant Architecture

  • What: invariant-of-groups compute (functions of distances/ratios/Hamming/cross-ratios, etc.) with diagonal transport of inputs, machine, and outputs.
  • Why: twin deployments (function-identical, internally distinct), fewer recalibrations in control, and acceleration without bleeding-edge fabs.
  • How: software shim today; optional RISC-V micro-ops and FPGA/older-node DEUs (add/shift/XOR/popcount/LUT, small CORDIC) later.
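The diagonal-action idea can be shown with a toy: compute features only from distances, then move the inputs and the machine's internal reference points by the same random isometry; the output is unchanged. The anchor/feature names here are illustrative, not MIA's actual internals.

```python
import numpy as np

def invariant_features(points, anchors):
    """Feature map built only from pairwise distances (a metric invariant)."""
    return np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)

def random_isometry(dim, rng):
    """A random group element: rotation/reflection plus translation of R^dim."""
    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # orthogonal matrix
    t = rng.normal(size=dim)
    return lambda X: X @ Q.T + t

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))      # inputs
anc = rng.normal(size=(2, 3))      # "machine state" (internal anchors)
g = random_isometry(3, rng)

# diagonal action: inputs AND machine state move together
f1 = invariant_features(pts, anc)
f2 = invariant_features(g(pts), g(anc))
assert np.allclose(f1, f2)         # behavior preserved, internals differ
```

Two deployments using different isometries are "twins" in this toy sense: their stored coordinates differ, but every distance-based output agrees.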

Evidence & claims — how to read

Twin encoders & message-mode security

Degenerate twin design (non-unique encoders & decoders)

The encoder–decoder is not a single fixed pair. It lives inside an infinite family of mathematically equivalent pairs (“twins”) that all produce the same outputs for the same inputs while looking different inside. If an adversary copies one encoder, the system is still not uniquely identifiable: many decoders are compatible in principle, but only the paired twin will successfully decrypt. Observing encoded outputs does not reveal which twin is in use, and rotating to a new twin can be done at runtime without changing external behavior.

Plain English: Even if someone steals one encoder, they still don’t have the “matching decoder.” There are infinitely many look-alike versions. You can switch to a fresh twin on the fly, frustrating reverse engineering and chosen-plaintext probes.
  • Obfuscation by design: Non-uniqueness makes parameter recovery and architecture inference unstable from samples alone.
  • Operational agility: Rotate twins proactively or after any suspected exposure; external I/O remains unchanged.
  • Works with standard crypto: Complements TLS/VPN and storage encryption; not a replacement for basic hygiene.
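A toy linear model of twin non-uniqueness (the real encoders are nonlinear and geometry-native; invertible matrices stand in here purely to show the pairing property):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))          # base encoder (toy: invertible linear map)

def make_twin(M, rng):
    """A 'twin' pair: encoder G@M with decoder inv(M)@inv(G).
    Every twin round-trips identically from the outside, yet each
    encoder looks different and only its own partner decodes it."""
    G = rng.normal(size=M.shape)     # random invertible mixing layer
    enc = G @ M
    dec = np.linalg.inv(M) @ np.linalg.inv(G)
    return enc, dec

x = rng.normal(size=n)
enc1, dec1 = make_twin(M, rng)
enc2, dec2 = make_twin(M, rng)

assert np.allclose(dec1 @ (enc1 @ x), x)       # each pair round-trips
assert np.allclose(dec2 @ (enc2 @ x), x)
assert not np.allclose(enc1, enc2)             # encoders differ internally
assert not np.allclose(dec2 @ (enc1 @ x), x)   # a mismatched pair fails
```

Rotating to a fresh twin is just drawing a new `G`: external I/O is unchanged, while a captured `enc1` no longer matches anything in service.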

Encoder–decoder security (message mode)

In secure message-passing mode, the encoder maps a message into a curved-space representation built from hyperbolic distances and then filters it through automorphic (Maass-type) kernels assembled via truncated Poincaré series. The result is a nonlinear composition that does not admit a straightforward inverse. Even if the encoder parameters are captured, turning encoded messages back into plaintext without the correct paired decoder requires solving a difficult inverse spectral problem on a curved space.

Plain English: The message is scrambled using curved-geometry distances and special waveforms. Without the exact partner decoder, “unscrambling” it is a dead-end computation. Capturing the encoder alone isn’t enough.
  • Resilience after compromise: Because infinitely many twins exist, a compromised encoder can be retired and replaced instantly; the replacement remains functionally identical to the outside world.
  • Deployment hygiene matters: On air-gapped or equivalently isolated machines, external side-channels are strongly reduced. In connected environments, pair this with standard hardening (audit logs, rate limits, constant-time paths where feasible).
  • Independent scrutiny welcome: Security depends on the hardness of the underlying geometric inverse problem and on sound implementation. External cryptanalysis and red-teaming are invited.
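To give a flavor of the curved-space step only, here is a toy that emits hyperbolic distances to secret anchor points in the Poincaré upper half-plane. The Maass-kernel filtering and truncated Poincaré-series assembly of the real scheme are omitted, and the bit-to-point embedding is an arbitrary placeholder.

```python
import numpy as np

def hyperbolic_distance(z1, z2):
    """Geodesic distance in the Poincare upper half-plane
    (points as complex numbers with positive imaginary part)."""
    num = abs(z1 - z2) ** 2
    return np.arccosh(1.0 + num / (2.0 * z1.imag * z2.imag))

def encode(message_bits, anchors):
    """Toy sketch: embed bits as half-plane points, then emit only the
    hyperbolic distances to secret anchors (the 'curved-space representation')."""
    pts = [complex(b + 0.5, 1.0 + 0.25 * i) for i, b in enumerate(message_bits)]
    return np.array([[hyperbolic_distance(p, a) for a in anchors] for p in pts])
```

Even in this stripped-down form the map is many-to-one in its parameters: distances to unknown anchors do not identify the anchors, which is the intuition behind the inverse-problem hardness claimed above (the full claim, of course, rests on the spectral construction, not on this toy).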

Selected results (concise)

Product brief — Λ-Stack

Why now: training/retraining costs, auditability requirements, and model protection pressures are rising together.

What you get:

  • Faster training: CEAS reduces tokens-to-target via informed β control.
  • Safer updates: symbolic traces + wrappers enable hour-scale fixes without a full retrain.
  • Security by design: optional geometry-native masking and per-tenant cryptomorphic twins.

Who benefits: critical infrastructure, national missions, regulated enterprises, and labs seeking explainable and hardened LLMs.

Product brief — MIA: what changes today

Drop-in, no rip-and-replace

Adopt as a software shim: compute invariants (distances, ratios, Hamming counts, cross-ratios) and feed existing controllers or models with those numbers instead of raw coordinates/IDs.

Is new hardware required? Usually not. Software blocks compute invariants. If hardware acceleration is desired later, FPGAs and even older-node chips compute these features efficiently.

From AI → whole stack

Started inside LLMs/transformers and neural nets; now extended to the machine/ISA layer so that CPUs/PLCs can run with the same invariant-first semantics. When the platform runs in this style, everything on it inherits the twin property (cryptomorphic, function-identical deployments).

Tolerance & uptime for robotics/PLC
  • Drift-proof control: use distances/angles/ratios and order-free aggregates—not absolute X/Y/Z or sensor IDs.
  • Graceful dropouts via redundant invariants: one probe fails, the control signal persists.
  • Looser fixtures: larger mechanical tolerances without re-teach; fewer stops for recalibration.
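One way to realize "graceful dropouts via redundant invariants" is to fuse redundant probes with an order-free, outlier-tolerant aggregate. A minimal sketch, assuming failed probes report NaN:

```python
import numpy as np

def robust_range(probe_distances):
    """Fuse redundant distance probes into one control signal.
    The aggregate is order-free (a median), so probe identity does not
    matter, and dropouts (NaN) are simply ignored."""
    d = np.asarray(probe_distances, dtype=float)
    valid = d[~np.isnan(d)]
    if valid.size == 0:
        raise RuntimeError("all probes failed")
    return float(np.median(valid))

# three redundant probes measure the same gap; probe 2 drops out,
# yet the fused signal persists
signal = robust_range([10.2, float("nan"), 10.4])
```

The same pattern applies to angles and ratios; the key is that the controller consumes the fused invariant, never a specific sensor ID.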

Compute without bleeding-edge fabs

Invariant evaluation prefers add/shift/XOR/popcount and small LUTs or CORDIC; giant multiplier arrays are optional. Most use-cases do not demand sub-100 nm nodes—28–180 nm, FPGAs, or mixed-signal blocks are viable.

Security by construction (twins)

Inputs, machine state, and outputs transform together under group action (diagonal move). Behavior is preserved while internal encodings differ—per-device/site twins for anti-cloning and licensing, compatible with GRAIL-style deployments.

Less memory/interconnect pain

Many “reorders” compile to address math (orbit-in-place) rather than physical data shuffles—lower traffic, lower energy, simpler scaling.
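A small illustration of the idea: the reorder lives in the index map rather than the data, and stacked reorders fuse into one composed map, so no element is physically moved.

```python
import numpy as np

n = 8
a = np.arange(n) * 10
perm = np.array([0, 4, 2, 6, 1, 5, 3, 7])    # the desired reorder

def read_reordered(a, perm, i):
    """Orbit-in-place: the reorder is pure address math applied at read time,
    instead of a materialized shuffle that moves every element."""
    return a[perm[i]]

# two stacked reorders fuse into one composed index map: still no data motion
perm2 = np.array([1, 0, 3, 2, 5, 4, 7, 6])
fused = perm[perm2]
assert np.array_equal(a[perm][perm2], a[fused])
```

The traffic and energy savings come from never building the intermediate shuffled arrays; only the final reads touch memory.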

Exactness is testable
  • Format-true: reproduce standard int/fp results bit-for-bit (including IEEE-754 flags).
  • Approx + tiny fix-up: cheap core + ±1 ulp correction LUT → still bit-identical on all cases.

Adopt this week
  1. Pick a multiply-heavy kernel (or a drift-sensitive control loop).
  2. Swap raw measurements for invariants (L1/Hamming/cross-ratio/ratios).
  3. Wrap the ABI once (encode/decode), keep apps/tests unchanged.
  4. Optional: add “orbit-in-place” address transforms for common reorders.
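Steps 2 and 3 above might look like the following Python shim. The invariant choice and the `wrap_abi` boundary are illustrative assumptions, not a prescribed API; the point is that the wrapped controller's code and tests stay unchanged.

```python
import numpy as np

def to_invariants(raw_xyz, ref_points):
    """Step 2: replace raw coordinates with invariants, here distances
    to reference points plus one distance ratio."""
    d = np.linalg.norm(raw_xyz[None, :] - ref_points, axis=1)
    return np.concatenate([d, [d[0] / (d[1] + 1e-12)]])

def wrap_abi(controller, ref_points):
    """Step 3: wrap the boundary once; the controller receives invariant
    features instead of raw coordinates, with no change to its own code."""
    return lambda raw_xyz: controller(to_invariants(raw_xyz, ref_points))

# toy controller: any existing function of its feature vector
controller = lambda feats: float(feats.sum())
refs = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
wrapped = wrap_abi(controller, refs)
out = wrapped(np.array([0.0, 3.0, 4.0]))
```

Swapping in Hamming counts or cross-ratios changes only `to_invariants`; everything downstream of the wrapped boundary is untouched.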

RISC-V / PLC: Works as a micro-op overlay (DISTMUL, DISTCMP, KERNELACC) or as PLC function blocks (DISTCMP, RATIO, REDUCE, INVPID).

Research notes & essays (selected)

Full list: papers, slides, lecture notes, and code are indexed on the Research page.

Market & strategy (opinions)

I discuss AI efficiency shocks and strategic implications for semiconductor competitiveness. These sections are analysis/opinion; where numbers are cited, I aim to provide sources or reproduce calculations. I welcome independent critique.

Bio

I work at the seam of geometry, control, and security. Formation in math/physics/CS; influenced by monastic stillness and service traditions. I view security as preservation of dignity under uncertainty and learning as formation under constraint.

Ethos & scope

This site presents personal research and vocation-aligned work. It does not use or trade on any religious order’s name, logo, or endorsement, and it offers no goods or services under such names. Views and materials here are my own.

Contact