Founder, Logarchéon

William Chuang

Architect of interpretable AI systems · Researcher in secure computation & symbolic dynamics

I build self-correcting, interpretable transformer systems that train faster, edit safely, and can run inside geometry-native, twin-capable cryptographic wrappers.

Orientation

Vocation: geometry, control, and secure computation ordered to human dignity—so that precision serves peace and systems remain auditable under uncertainty.

Working model: proof-backed prototypes and non-enabling public summaries; deeper technical materials are available through NDA and export-control-compliant channels for licensing or joint development.

One-liner

I build interpretable, certifiable transformer systems that train faster, edit safely, and can run inside secure, geometry-native cryptographic wrappers.

Goal: Rigor + practice. Each idea comes with proofs or certificates plus runnable prototypes and small-scale demos.

Evidence & claims — how to read

  • Performance (GRAIL). GRAIL performs the same math as a plaintext model and therefore runs at essentially plaintext speed. Any comparison to fully homomorphic encryption (FHE) or multi-party computation (MPC) reflects the overhead added by those schemes, not by GRAIL. Exact slowdowns for FHE/MPC depend on scheme, precision, hardware, and task.
  • Scope of protection. Parameters, activations, and latent states are protected via a group-action / symbolic-geometry transform; both the model and the data remain encrypted in use. Security is deployment-dependent: once the system is operated inside a SCIF or an equivalently isolated posture (air-gapped, shielded/TEMPEST-class enclosure, no external I/O), external side-channels (network beacons, vendor telemetry, remote DMA, RF/EM egress) are greatly reduced for that system. Before such isolation, standard side-channel and operational risks apply, and routine hardening (I/O policy, audit logging, rate-limits, constant-time paths where feasible) remains required.
  • Stability & edit safety. Each version includes entropy-corridor bounds, deviation limits, and edit-stability certificates (time, memory, parameter coverage) showing that controlled local edits do not trigger unbounded downstream drift.
  • Demonstrations & audits. I show real-time control of the inverse-temperature parameter β and verified edit-without-retrain behavior. Independent benchmarking, replication, and third-party cryptographic review are welcome under representative workloads.

Three pillars — what they are and what they provide

1) Efficiency (CEAS)
  • What it is. CEAS (Critical Entropy Attention System) is a control-theoretic attention scaling scheme that treats the scaling factor as an explicit inverse temperature β and keeps attention entropy inside a target corridor.
  • What it provides. Fewer steps-to-target and more stable training, especially on long-context or brittle setups. Instead of a fixed 1/√d_k guess, CEAS actively steers attention away from over-sharp and over-flat regimes and surfaces entropy/energy diagnostics you can monitor and audit.
  • How / when you use it. Drop it in where you would normally hard-code attention scaling: during pre-training, fine-tuning, or domain adaptation when training budget and numerical stability matter. It can run in a “search then clamp” mode (early β search, then fixed) or a lightweight online-control mode with entropy feedback; a minimal sketch of that feedback loop appears after this list.
2) Interpretability (Finite-Machine Lift)
  • What it is. Finite-Machine Lift is a procedure that takes the closed-loop behavior of a trained model and lifts it to a finite operator P = D + N, decomposed into cycles (D) and transients (N), with runtime sentinels and cycle projectors; a toy version of the D + N split is sketched after this list.
  • What it provides. Symbolic traces of how the system actually behaves, stable subspaces in which local edits can be made, and edit-without-retrain guarantees: you can patch logic or behavior while bounding deviation on a protected manifold instead of paying for a full retrain.
  • How / when you use it. Apply it to a model that is already trained and deployed (or about to be deployed) when you need explainability, post-hoc audits, or targeted fixes: safety-critical updates, policy changes, red-team findings, or compliance-driven edits.
3) Security & Substrate Portability (GRAIL → MIA)
  • What it is. GRAIL → MIA is a security and substrate-portability layer that centers computation on invariants of groups and diagonal transport: inputs, machine state, and outputs move together under group action, while internal encodings remain flexible and non-unique.
  • What it provides. Cryptomorphic twins (weight-distinct, function-identical deployments), hardware freedom (runs in software today, accelerates on FPGAs or mature nodes later), and a reduced attack surface: stealing one instance does not give a unique blueprint for all others.
  • How / when you use it. Bring it in at the deployment and systems layer when models or controllers must run in regulated, high-assurance, or IP-sensitive environments. Start with software-only shims; move to ISA/PLC overlays or custom blocks when you want higher throughput or per-device twin hardening.
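
As a concrete illustration of the CEAS idea, here is a minimal sketch, assuming a NumPy setting and made-up names (ceas_attention, the corridor bounds, the gain are illustrative, not the released API): attention scores are scaled by an explicit inverse temperature β, the Shannon entropy of the attention weights is measured, and β is nudged by a simple proportional update whenever entropy leaves the target corridor. It shows the control pattern, not the production controller.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_entropy(weights):
    """Mean Shannon entropy (nats) of the attention rows."""
    p = np.clip(weights, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

def ceas_attention(Q, K, V, beta, h_lo, h_hi, gain=0.05):
    """One attention call with an explicit inverse temperature `beta`.
    Returns the output, the updated beta, and the measured entropy.
    `h_lo`/`h_hi` bound the target entropy corridor (illustrative values)."""
    scores = beta * (Q @ K.T)           # beta replaces the fixed 1/sqrt(d_k)
    W = softmax(scores, axis=-1)
    H = attention_entropy(W)
    if H < h_lo:                        # too sharp -> soften (lower beta)
        beta *= (1.0 - gain)
    elif H > h_hi:                      # too flat -> sharpen (raise beta)
        beta *= (1.0 + gain)
    return W @ V, beta, H

# Toy usage: steer beta over repeated calls on random data.
rng = np.random.default_rng(0)
d_k, n = 64, 32
beta = 1.0 / np.sqrt(d_k)               # start from the usual heuristic
for step in range(50):
    Q, K, V = (rng.standard_normal((n, d_k)) for _ in range(3))
    out, beta, H = ceas_attention(Q, K, V, beta, h_lo=1.5, h_hi=3.0)
```

In the same spirit, the sketch below illustrates the P = D + N split on a toy deterministic machine (the state set and transition map are invented for the example; the actual lift works on the closed-loop behavior of a trained model): states that sit on cycles contribute to D, transient states contribute to the nilpotent part N, and P = D + N reproduces the transition operator.

```python
import numpy as np

def lift_finite_map(f, n):
    """Toy finite-machine lift for a deterministic map f on states {0..n-1}.
    Builds the 0/1 transition operator P and splits it as P = D + N, where D
    keeps transitions out of cyclic states (the cycle part) and N keeps
    transitions out of transient states (a nilpotent part)."""
    on_cycle = set()
    for s in range(n):
        x = s
        for _ in range(n):
            x = f(x)                     # after n steps we are inside a cycle
        start = x
        while True:                      # walk that cycle once and record it
            on_cycle.add(x)
            x = f(x)
            if x == start:
                break
    P = np.zeros((n, n), dtype=int)
    D = np.zeros((n, n), dtype=int)
    N = np.zeros((n, n), dtype=int)
    for s in range(n):
        P[f(s), s] = 1
        (D if s in on_cycle else N)[f(s), s] = 1
    return P, D, N, sorted(on_cycle)

# Example: six states, a 3-cycle {0,1,2} fed by transients 3 -> 4 -> 0 and 5 -> 1.
f = {0: 1, 1: 2, 2: 0, 3: 4, 4: 0, 5: 1}.__getitem__
P, D, N, cyc = lift_finite_map(f, 6)
assert (P == D + N).all()
assert not np.linalg.matrix_power(N, 6).any()    # transients die out: N is nilpotent
```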

Product brief — Λ-Stack

Why now. Training/retraining costs, auditability requirements, and model protection pressures are rising together.

What you get.

  • Faster training. CEAS reduces tokens-to-target via informed β control.
  • Safer updates. Symbolic traces + wrappers enable hour-scale fixes without a full retrain.
  • Security by design. Optional geometry-native masking and per-tenant cryptomorphic twins.

Who benefits. Critical infrastructure, national missions, regulated enterprises, and labs seeking explainable and hardened LLMs.

Logarchéon — geometry-native recursive agent lab

Logarchéon Inc. is a one-human C-Corporation structured as an IP-first research lab. I build and direct recursive teams of AI agents that conduct secure, geometry-native computation—executing proofs, simulations, refinements, and technical writing. Direction, priors, and standards come from years of human study in physics, mathematics, symbolic systems, and secure design.

This is a research stack, not a commodity product company. It is recursive, local, interpretable, and founder-shaped. The agents do the work; I determine what work matters. Same models ≠ same results—because the judgment layer is not downloadable.

Recursive agent stack
  • Human priors → agent swarm. Geometry/AI/symbolic logic shape the hypothesis space; agents explore it in parallel—each task framed, bounded, and testable.
  • Agents talk to agents. Demos, derivations, and results are passed, critiqued, and improved across agent layers—until convergence or rejection.
  • Recursive loops, not pipelines. Each AI worker can instantiate others, forming self-refining trees of code, reasoning, and simulation. I remain the source of taste and direction.

Why it matters: This is not “agent-first” hype. It is recursive, founder-calibrated intelligence. The same agents in other hands will not produce the same frontier results.

Institutional model — recursive IP lab, C-Corp

Logarchéon Inc. is a C-Corporation operated by a single human founder with a recursive stack of autonomous AI agents—each trained, directed, and coordinated to execute tightly scoped research and engineering tasks. Wherever this site says “we” or “our”, it means the founder + agents working in unison to produce original, defensible intellectual property.

Summary: One founder + recursive AI agents generating proof-backed IP in geometry, symbolic reasoning, and secure computation. Partners bring scale; Logarchéon provides the design.

Overview

I design at the intersection of geometry, control, and security. Recent work treats attention temperature β as a controllable parameter; lifts trained models into finite operators for certified edit-without-retrain; and develops MIA (GRAIL-derived), a metric-invariant architecture where invariants of groups drive computation. Inputs, machine state, and outputs transform together (diagonal action), preserving behavior while enabling cryptomorphic twin deployments. Adoption starts in AI and extends to the ISA/system layer without new hardware; software shims compute invariants today, with optional FPGA or older-node acceleration later.

MIA — what it is and what it provides

Summary — what / what for / how
  • What it is. MIA (Metric-Invariant Architecture) is an invariant-first compute style: decisions are made from group-invariant quantities (distances, ratios, Hamming counts, cross-ratios, etc.), and inputs, machine state, and outputs transform together (diagonal action) under group symmetries.
  • What it provides. Stable behavior across coordinate changes and reparameterizations; per-device or per-tenant cryptomorphic twins; better tolerance for noise, drift, and fixture slop; and an easier path to move the same logic across CPUs, PLCs, FPGAs, or mature-node ASICs without constant re-teaching.
  • How / when you use it. Start by inserting MIA as a software shim around one kernel or control loop: compute invariants, feed the existing controller/model, and compare tolerance, uptime, and attack surface. If the gains hold, migrate more subsystems or push the pattern down to the ISA/PLC level for higher throughput and stronger per-device twin guarantees.
Drop-in, no rip-and-replace

Adopt as a software shim: compute invariants (distances, ratios, Hamming counts, cross-ratios) and feed existing controllers or models with those numbers instead of raw coordinates/IDs.
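
A minimal sketch of such a shim, under illustrative assumptions (the function names, the particular invariants, and the downstream controller are placeholders, not a published interface): raw probe coordinates and bit-field readings are reduced to distances, a ratio, a Hamming count, and a cross-ratio before anything downstream sees them.

```python
import math

def invariant_features(points, code_a, code_b):
    """Reduce raw measurements to group-invariant numbers: pairwise
    distances, a distance ratio, a Hamming count, and a cross-ratio.
    Illustrative only; real deployments choose invariants per task."""
    (x1, y1), (x2, y2), (x3, y3) = points
    d12 = math.hypot(x2 - x1, y2 - y1)
    d13 = math.hypot(x3 - x1, y3 - y1)
    d23 = math.hypot(x3 - x2, y3 - y2)
    ratio = d12 / d13 if d13 else float("inf")
    hamming = bin(code_a ^ code_b).count("1")        # XOR + popcount
    # Cross-ratio of four scalars (a projective invariant); guarded against zero denominators.
    a, b, c, d = d12, d13, d23, d12 + d23
    cross = ((a - c) * (b - d)) / ((b - c) * (a - d)) if (b - c) and (a - d) else 1.0
    return {"d12": d12, "d13": d13, "d23": d23,
            "ratio": ratio, "hamming": hamming, "cross_ratio": cross}

def shimmed_step(raw_points, code_a, code_b, controller):
    """Feed the existing controller invariants instead of raw coordinates."""
    return controller(invariant_features(raw_points, code_a, code_b))

# Usage with a stand-in controller that reads only invariant fields.
out = shimmed_step([(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)], 0b1011, 0b0110,
                   controller=lambda inv: inv["d12"] - 0.5 * inv["d23"])
```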

Is new hardware required? Usually not. Software blocks compute invariants. If hardware acceleration is desired later, FPGAs and even older-node chips compute these features efficiently.
From AI → whole stack

The approach started inside LLMs/transformers and neural nets; it now extends to the machine/ISA layer so that CPUs/PLCs can run with the same invariant-first semantics. When the platform runs in this style, everything on it inherits the twin property (cryptomorphic, function-identical deployments).

Tolerance & uptime for robotics/PLC
  • Drift-proof control: use distances/angles/ratios and order-free aggregates—not absolute X/Y/Z or sensor IDs.
  • Graceful dropouts via redundant invariants: one probe fails, the control signal persists.
  • Looser fixtures: larger mechanical tolerances without re-teach; fewer stops for recalibration.
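
One way to read the graceful-dropout bullet, sketched under assumptions (the probe names and the median aggregate are illustrative): the control loop consumes a single gap estimate built from several redundant probe pairs, and a failed probe simply drops out of the order-free aggregate instead of breaking the signal.

```python
import math
import statistics

def robust_gap(probes, pairs):
    """Estimate one part-to-fixture gap from redundant probe pairs.
    `probes` maps probe name -> (x, y), or None if the probe dropped out.
    The median over surviving pairs is an order-free aggregate, so a
    single failed probe does not break the control signal."""
    estimates = []
    for a, b in pairs:
        pa, pb = probes.get(a), probes.get(b)
        if pa is None or pb is None:
            continue                       # graceful dropout: skip this pair
        estimates.append(math.dist(pa, pb))
    if not estimates:
        raise RuntimeError("all redundant probes lost")
    return statistics.median(estimates)

# Probe p2 has failed; the gap estimate persists from the remaining pairs.
probes = {"p1": (0.0, 0.0), "p2": None, "p3": (0.1, 9.9), "p4": (0.0, 10.1)}
gap = robust_gap(probes, pairs=[("p1", "p2"), ("p1", "p3"), ("p1", "p4")])
```
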
Compute without bleeding-edge fabs

Invariant evaluation prefers add/shift/XOR/popcount and small LUTs or CORDIC; giant multiplier arrays are optional. Most use-cases do not demand sub-100 nm nodes—28–180 nm, FPGAs, or mixed-signal blocks are viable.
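
To make the add/shift/XOR/popcount point concrete, a small sketch (illustrative arithmetic, not a hardware design): Hamming distance, power-of-two scaling, and L1 distance reduce to XOR, popcount, shifts, and adds, with no multiplier array required.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance via XOR + popcount (int.bit_count needs Python 3.10+)."""
    return (a ^ b).bit_count()

def scale_pow2(a: int, k: int) -> int:
    """Scale by 2**k with a shift instead of a multiply."""
    return a << k if k >= 0 else a >> -k

def l1_distance(xs, ys):
    """L1 distance with adds and subtracts only."""
    return sum(abs(x - y) for x, y in zip(xs, ys))

assert hamming(0b1011, 0b0110) == 3
assert scale_pow2(5, 3) == 40
assert l1_distance([1, 2, 3], [4, 0, 3]) == 5
```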

Security by construction (twins)

Inputs, machine state, and outputs transform together under group action (the diagonal action). Behavior is preserved while internal encodings differ: per-device/site twins for anti-cloning and licensing, compatible with GRAIL-style deployments.
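
A toy illustration of the twin idea, under strong simplifying assumptions (a linear map conjugated by an invertible matrix stands in for the real group action; this is not the GRAIL construction): two deployments carry visibly different weights, yet after the diagonal encode/decode they compute the same function.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
W = rng.standard_normal((n, n))                   # "plaintext" model: x -> W @ x

# Per-device secret: an invertible matrix G acting on inputs and outputs.
G = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, invertible
G_inv = np.linalg.inv(G)

# Twin weights differ from W, but behavior is identical after transport.
W_twin = G @ W @ G_inv

def plaintext(x):
    return W @ x

def twin(x):
    encoded = G @ x                       # inputs move with the group action
    hidden = W_twin @ encoded             # internal state stays in the twin frame
    return G_inv @ hidden                 # outputs move back (diagonal action)

x = rng.standard_normal(n)
assert np.allclose(plaintext(x), twin(x))         # same function, different weights
assert not np.allclose(W, W_twin)                 # the copied blueprint is not unique
```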

Less memory/interconnect pain

Many “reorders” compile to address math (orbit-in-place) rather than physical data shuffles—lower traffic, lower energy, simpler scaling.
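
A small sketch of orbit-in-place (illustrative; the real savings come from doing this in the address-generation path, not in Python): instead of materializing a physically transposed copy of a tile, reads go through an index transform, so the reorder costs address arithmetic rather than data movement.

```python
import numpy as np

rows, cols = 3, 4
flat = np.arange(rows * cols)                     # the data itself never moves

def transposed_addr(i, rows, cols):
    """Source address of the i-th element of the transposed tile (address math only)."""
    tr, tc = divmod(i, rows)                      # (row, col) position in the transposed tile
    return tc * cols + tr                         # where that value already lives in `flat`

# Physical shuffle (what we avoid): allocate and copy a transposed buffer.
shuffled = flat.reshape(rows, cols).T.copy().ravel()

# Orbit-in-place: the same values reached by remapping addresses only.
remapped = [flat[transposed_addr(i, rows, cols)] for i in range(rows * cols)]

assert list(shuffled) == remapped
```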

Exactness is testable
  • Format-true: reproduce standard int/fp results bit-for-bit (including IEEE-754 flags).
  • Approx + tiny fix-up: cheap core + ±1 ulp correction LUT → still bit-identical on all cases.
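
The format-true claim is, in principle, checkable with ordinary tooling. Here is a sketch of the kind of harness meant, under assumptions (the reference and candidate functions are placeholders, IEEE-754 flags are not modeled, and the fix-up table is shown empty): candidate outputs are compared to the standard binary64 result bit-for-bit, with a small keyed LUT standing in for the ±1 ulp correction.

```python
import struct

def bits(x: float) -> int:
    """IEEE-754 binary64 bit pattern of x."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def reference(a: float, b: float) -> float:
    return a * b                                  # the format-defined result

def candidate(a: float, b: float, fixup: dict) -> float:
    """Stand-in 'cheap core' plus a keyed fix-up LUT for rare off-by-one-ulp cases."""
    rough = a * b                                 # placeholder for the approximate core
    return fixup.get((bits(a), bits(b)), rough)

fixup = {}                                        # empty here: the stand-in core is already exact
for a, b in [(1.5, 2.25), (1e-308, 3.0), (-7.125, 0.1)]:
    assert bits(candidate(a, b, fixup)) == bits(reference(a, b))   # bit-for-bit
```
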
Adopt this week
  1. Pick a multiply-heavy kernel (or a drift-sensitive control loop).
  2. Swap raw measurements for invariants (L1/Hamming/cross-ratio/ratios).
  3. Wrap the ABI once (encode/decode), keep apps/tests unchanged.
  4. Optional: add “orbit-in-place” address transforms for common reorders.
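
Step 3 (wrap the ABI once) can be read as a thin encode/decode boundary around the existing call signature, as in this sketch under made-up names (the kernel and its signature are invented for illustration): callers and tests keep passing raw coordinates; only the wrapper knows about invariants.

```python
import math
from functools import wraps

def invariant_abi(kernel):
    """Wrap a kernel once: encode raw points to invariants before the call."""
    @wraps(kernel)
    def wrapper(p, q, r):
        d_pq, d_pr = math.dist(p, q), math.dist(p, r)
        inv = {"d_pq": d_pq, "d_pr": d_pr,
               "ratio": d_pq / max(d_pr, 1e-12)}
        return kernel(inv)                 # existing apps/tests still call wrapper(p, q, r)
    return wrapper

@invariant_abi
def alignment_score(inv):
    """Downstream logic now sees only invariant numbers."""
    return inv["ratio"] - 1.0 if inv["d_pq"] > 0 else 0.0

score = alignment_score((0, 0), (3, 4), (6, 8))    # apps and tests unchanged
```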

RISC-V / PLC: Works as a micro-op overlay (DISTMUL, DISTCMP, KERNELACC) or as PLC function blocks (DISTCMP, RATIO, REDUCE, INVPID).

Vocation & biography

Founder, Logarchéon · Architect of self-correcting, interpretable AI systems · Researcher in secure computation & symbolic dynamics

I design at the intersection of geometry, learning, and secure systems—where form reveals function and structure encodes meaning. My research seeks mathematically grounded architectures built on symmetry, topology, and spectral dynamics, oriented to the common good and the dignity of the human person. Core applications include interpretable machine learning, privacy-preserving compute, and humanitarian resilience.

Recent projects include transformers governed by Möbius flows and Lie symmetries; Langlands-dual attention layers for structured reasoning; and cryptographic primitives based on modular trace zeta functions and symbolic entropy compression. These are not mere technical novelties—they are durable frameworks intended to preserve coherence and interpretability in adversarial environments.

I treat mathematical rigor as an act of fidelity. Security is not merely defense; it is the protection of dignity under uncertainty. Learning is not only optimization; it is formation through symmetry and disciplined constraint. My work is shaped by physics and number theory and, no less, by a habit of interior stillness.

As the founder of Logarchéon (launching 2025), I develop decision-support frameworks for open-source analysis, cognitive modeling, and secure signal fusion in public-interest and humanitarian contexts. These systems are built so that precision serves peace and information upholds truth, with ethical safeguards consistent with human dignity and responsible stewardship.

My philosophical and spiritual formation is guided by the Cistercian practice of quiet, the Jesuit discipline of service through intellect, and the Order of Malta’s tuitio fidei et obsequium pauperum—the defense of the Faith and service to the poor and the sick. I pursue this work under spiritual direction and in fidelity to the Church.

That formation is grounded in family. My Catholic ancestors in Taiwan, over many generations, supported parish life by donating farmland, hosting open-air banquets, and dedicating our family home as a chapel. War and hardship humbled us, but service endured.

My grandfather and great-grandfather were Catholic, and in our family’s memory—and in local tradition—they and my great-great-grandfather helped offer one house from our family estate to serve as a church in our county, during a period when my great-grandfather was serving in local county leadership before World War II. The act was simple and practical: a place for worship, hospitality, and ordinary parish life.

My father, Prof. John B. Chuang (莊慶信), was baptized in infancy and received much of his formation within the life of the Church. After elementary school he entered a petit séminaire, later continued in major seminary formation, and went on to pursue advanced philosophical study. He earned a Ph.D. in Philosophy at Fu Jen Catholic University, and later served in Catholic higher education under the shared guidance of diocesan leadership and multiple religious communities.

His academic work focuses on religion and ecology, Catholic environmental ethics, religious ethics and bioethics, philosophy of religion, and the traditions of Chinese philosophy, culture, and religion.

In parish life, he also served at Beibin Parish (北濱堂) in Hualien City—a community widely regarded as among the earliest Catholic parishes on Taiwan’s east coast, with over seventy years of parish history. His period of service there (1982–1983) took place within a wider legacy of local devotion. Local accounts recall that the parish began in 1949, when the home of Mr. Chou Chang-yao (周長耀)—a retired ROC air-defense officer and later secretary to Cardinal Yu Bin—was offered for worship and community gatherings. These historical notes are based on publicly available sources (parish/community accounts and other open references).

Selected scholarly themes (John B. Chuang / 莊慶信): This body of work centers on religion and ecology, Catholic environmental ethics and eco-spirituality, religious ethics, philosophy of religion, and the history of Chinese philosophy and culture. Several publications engage Pope Francis’ Laudato Si’, including analyses of its distinctive themes, its relationship to contemporary environmental thought, and its implications for Taiwanese society. Earlier work includes studies on ecological concern in Catholic social thought, Catholic ethics of wealth, religion and bioethics, Taoist reflections on matter and spirit, and environmental justice and indigenous eco-wisdom in Taiwan. His academic publications are available on Google Scholar.

In a similar way, my own formation was shaped by Catholic educational communities over many years—first through St. Ignatius School and later through Jesuit university study—before I carried that discipline into technical work. In total, this accounts for more than a decade of continuous enrollment, not counting the earlier years when our family lived in the Fu Jen campus environment and Catholic university life quietly shaped the rhythms of home.

In that same spirit of quiet continuity, I would like to share a personal note about my name, which carries layers of familial and spiritual meaning. This is something I rarely explain, but here I offer it as part of the deeper witness behind my work.

My original Mandarin name was Tao-Mao, given in honor of St. Thomas Aquinas, as my birthday falls near his feast day. After moving to the United States, I quietly adopted the name Huanshan as part of a new beginning—both personal and spiritual.

The name was not chosen at random. I wished to express appreciation for my father, whose baptismal name is John, given to him in infancy by an American Catholic priest. I hoped to reflect this meaning gently and discreetly. Beginning from the English form “John’s son” (Johnson), I searched for a way to express that sense of connection across languages, without calling attention to it.

Our family name, 莊, is romanized as Chuang under the Wade-Giles system, which was widely used in Taiwan’s official and ecclesiastical records throughout the 20th century. Its pronunciation is nearly identical to Jhuang, a later spelling adopted under the Tongyong Pinyin system introduced in Taiwan in 2002. The “h” in both versions is not actually pronounced, and when softened, Chuang or Jhuang resembles Juan, the Spanish form of John. I believe this may have been how my father received his baptismal name.

And so, beginning with Juan—pronounced Huan in English—I gradually shaped the Mandarin name Huanshan, which quietly carries the sound and meaning of Johnson, or John’s son. It is not something I speak of often, but for me, it has become a personal way to remember the quiet grace of faith passed down through my family.

Both my grandfather and great-grandfather were Catholic, and I was told that my great-grandfather and his father once donated part of the family home to serve as a night chapel. They also offered simple meals and accommodations to fellow Catholics who came through the area. These small acts of hospitality were not widely known, but they remain deeply meaningful to me. In choosing the name Huanshan, I hoped to carry forward that quiet spirit of faith, service, and continuity.

My English name, William, was chosen for its simplicity. In time, it also became a quiet reminder to follow not my own will, but God’s.

These layers of naming—received, chosen, and remembered—form part of the deeper structure of my work: quiet fidelity, encoded meaning, and service made strong through clarity. As with systems, so with persons: what is named with care may be lived with greater intention.

I welcome collaborations where faith meets rigor—where work is not only excellent, but ordered to charity and truth for the good of neighbor.

E-mail: founder@logarcheon.com

Logarchéon AI Manifesto

Hazing says: “I did it, so you must.”
Logarchéon chooses stewardship: “I did it, so you don’t have to.”

1. From hazing to stewardship

Hazing ethics protects status by preserving pain: “I suffered under this system; your suffering is what validates me.” Stewardship ethics protects people: “If I have already paid a high cost to learn something hard, my duty is to lower that cost for others—not to keep them small so I feel big.” Logarchéon is built explicitly on stewardship, not hazing.

2. Equal dignity versus pyramids of privilege and luck

Every human person has equal dignity—whether one speaks in the language of human rights or of imago Dei. No one is given a higher-grade soul because of passport, accent, bloodline, or accumulated privilege and luck. When AI allows a truck driver, migrant, or single parent with a local model, basic coding skills, and a good workflow to match the work of an “elite,” that is not a moral failure of AI; it is a moral correction of a rigged game.

Vocation: If one takes seriously tuitio fidei et obsequium pauperum—the defense of the Faith and service of the poor and the sick—then the right use of AI is to amplify those at the bottom of the pyramid, not to preserve comfort for those already on top.

3. Outer cortex, not distant oracle

Advanced models are treated as an outer cortex: a silicon extension of thought and language, not an idol. The question is not whether such outer cortices will exist, but who they will serve. Logarchéon’s answer is simple: your outer cortex should live on your machines, under your keys, in service of your conscience, vocation, and community—not as a remote service reading you from above.

4. Tools are not cheating. Dishonesty is.

There are narrow, explicit contexts—exams, sacraments, sworn testimony, specific rituals—where “unaided work” is required. In those places, secretly delegating to a model is cheating because it breaks a clear promise. Outside those contexts, a blanket ban on AI is hazing, not ethics. The meaningful question becomes: not “Did you touch AI?” but “How did you design, steer, and check your stack—and do you stand behind the result?”

5. Coding and machine’s-eye literacy for everyone

If you cannot speak the machine’s language, the machine will be used on you, not for you. Basic coding and systems thinking are treated as a new civic literacy: understanding state and feedback, knowing how prompts and APIs shape behavior, and being able to script small agents and pipelines. This is how workers, migrants, parishes, and clinics rule their own machines, instead of being ruled by opaque platforms.

When a model is trained or fine-tuned on your corpus, runs on your hardware or trusted enclave, and is steered and reviewed by you, its output is not a foreign ghostwriter; it is your outer cortex. Using it is not plagiarism; it is thinking with your extended mind.

6. Commitments

  • Stewardship over hazing. “I did the hard work, so you can start higher and go further.”
  • Private, accountable outer cortex. Local or λ-secure models where feasible; clear logs and authorship metadata; no mind-mining.
  • Bias toward the bottom of the pyramid. Priority for workers, migrants, patients, small parishes, and solo founders. If the tools only help those already on top, the design has failed.

In an age of hybrid minds, no one should be shamed for using an outer cortex, and no one should be denied one because they were born on the wrong side of privilege and luck.

The complete manifesto: read the Logarchéon AI Manifesto (PDF).

Patrons & influences

Gratitude for the saints whose lives and writings shape my work and prayer:

  • St. John the Baptist (1st c. BC) — witness, repentance, and preparation; at the Visitation (Lk 1:39–45) he already rejoices before Christ hidden in Mary, the New Ark of the Covenant who carries the Word made flesh, the true Bread of Life, and the eternal High Priest. His leap in the womb echoes David dancing before the Ark (2 Sam 6), making him the first prophet to recognize and rejoice before the living Presence.
  • St. Matthew the Apostle (1st c. AD) — the Gospel of mercy, especially Matthew 25, grounding service to “our Lords the sick.”
  • Blessed Fra’ Gerard (11th c.) — humble care for the sick and poor; founder of the Jerusalem hospital that became the Order’s spiritual root.
  • St. Bernard of Clairvaux (1090) — stability, charity, and interior stillness; his Sermons on the Song of Songs, De Diligendo Deo (On Loving God), De laude novae militiae, and especially De Gradibus Humilitatis et Superbiae (On the Steps of Humility and Pride).
  • St. Thomas Aquinas (1225) — clarity of reason ordered to truth.
  • St. Ignatius of Loyola (1491) — discernment, disciplined service, and formation of conscience.
  • St. Teresa of Ávila (1515) — friendship with Christ in prayer and action.
  • St. John of the Cross (1542) — the purifying path to union with God.
  • St. John Bosco (1815) — forming the young through reason, faith, and patient kindness.
  • Blessed Michael McGivney (1852) — priestly charity and protection of families.
  • St. Josemaría Escrivá (1902) — sanctifying ordinary work and study.

Daily Prayer

Lord Jesus, thou hast seen fit to enlist me for thy service among the Knights and Dames of Saint John of Jerusalem. I humbly entreat thee, through the intercession of the Most Holy Virgin of Philermo, of Saint John the Baptist, Blessed Gerard, and all the Saints and blesseds of our Order, to keep me faithful to the traditions of our Order.

Be it mine to practice and defend the Catholic, the Apostolic, the Roman faith against the enemies of religion; be it mine to practice charity towards my neighbors, especially the poor and the sick.

Give me the strength I need to carry out this my resolve, forgetful of myself, learning ever from the Holy Gospel a spirit of deep and generous Christian devotion, striving ever to promote God’s glory, the world’s peace, and all that may benefit the Order of Saint John of Jerusalem. Amen.

Ethos & scope

This site presents personal research and vocation-aligned work. It does not use or trade on any religious order’s name, logo, or endorsement, and it offers no goods or services under such names. Views and materials here are my own.

Contact