YOUR HARDWARE.
ONE SELF-HEALING CLUSTER.
FortrOS is a self-organizing operating system that runs at the hypervisor layer. Bring your own guest OS — Windows, Linux, whatever your users need — and FortrOS handles gossip-coordinated clustering, zero-knowledge storage, hardware-rooted identity, and split-tolerant state across every machine you own.
Your OS stack stays.
We run the iron underneath.
FortrOS runs at the hypervisor layer — same architectural tier as Proxmox, ESXi, or AWS EC2. Your existing OS deployments (Windows, Ubuntu, your managed Linux) run as guests and keep their own update cadences, EDR agents, and compliance controls.
That layer split is also the regulatory posture. Consumer-OS duties — age assurance, on-device verification, platform-content obligations — live with the guest OS vendor, where they always have. You pick whichever guest your compliance team already clears. FortrOS isn't a consumer OS, so those duties never attach to it.
Hypervisor tier
Same procurement bucket as Proxmox, ESXi, EC2. Bring-your-own-hardware compute fabric, nothing above the VM boundary.
Your existing stack
Windows, Ubuntu, RHEL, managed distros — whatever your users already run, with their existing tooling and agents intact.
Stay with the guest
Age-verification, platform-duty frameworks, and consumer-OS content regs apply where they always have — at the guest OS vendor, not the infrastructure.
Five procurement lines.
One operating system.
The typical stack for zero-trust, self-healing, multi-site VM orchestration involves five to seven separate products from five to seven separate vendors — each with their own auth, billing, SLA, and compliance posture. FortrOS collapses them into one Rust workspace with no external dependencies.
Built for regulated environments.
FortrOS's primitives map cleanly onto the controls compliance reviewers actually ask about. Hardware-rooted identity at the boot layer. Dual-control at the destructive-operation boundary. An immutable audit chain rooted in the same key material that authorizes reads. Air-gap capable by construction, because the control plane is the org.
TPM NV + YubiKey / CAC / PIV
Permanent hardware identity unlocks the LUKS keyslot before any network touches the box. Smart-card auth for FIPS-shaped environments; YubiKey for the rest.
K-of-N node verification, topology-spread
Destructive operations are verified by K of N org nodes, with signatures required from K distinct branches of the topology tree. A compromised node and its partitioned provisioning children can't muster enough cross-branch signatures to push changes through.
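The cross-branch rule can be sketched in a few lines. This is an illustrative model, not FortrOS internals: `Approval`, `branch`, and `cross_branch_quorum` are hypothetical names, and the sketch reduces "topology spread" to the assumption that each signer is tagged with its top-level branch.

```rust
use std::collections::HashSet;

/// One signature over a destructive operation, tagged with the signer's
/// top-level branch in the org topology tree (hypothetical model).
#[allow(dead_code)]
struct Approval {
    node_id: String,
    branch: String,
}

/// K-of-N with a topology-spread constraint: approvals must come from at
/// least `k` distinct branches, so a compromised node plus the children it
/// provisioned (all in one branch) can never reach quorum on their own.
fn cross_branch_quorum(approvals: &[Approval], k: usize) -> bool {
    let branches: HashSet<&str> =
        approvals.iter().map(|a| a.branch.as_str()).collect();
    branches.len() >= k
}

fn main() {
    let approvals = vec![
        Approval { node_id: "n1".into(), branch: "dc-east".into() },
        Approval { node_id: "n2".into(), branch: "dc-east".into() }, // same branch as n1
        Approval { node_id: "n3".into(), branch: "dc-west".into() },
    ];
    // Three signatures, but only two distinct branches: K=3 fails, K=2 passes.
    assert!(!cross_branch_quorum(&approvals, 3));
    assert!(cross_branch_quorum(&approvals, 2));
    println!("cross-branch quorum checks passed");
}
```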
No external dependency
No phone-home, no SaaS in the hot path. The org is the control plane. Transport profiles for hostile networks: LAN, CDN-fronted, direct-origin, Tor.
Provisioning chain, recursive revoke
Every enrollment records which admin, via which invite. Revoking a compromised node walks the chain: every node it provisioned inherits the revocation.
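The chain walk itself is a plain transitive closure over provisioning edges. A minimal sketch, assuming a child-to-parent map (`Chain` and `revoke_recursive` are hypothetical names; FortrOS's real records also carry the admin and invite identifiers):

```rust
use std::collections::{HashMap, HashSet};

/// Provisioning chain as child -> parent edges (illustrative shape).
type Chain = HashMap<String, String>;

/// Revoke `root` and, transitively, every node it provisioned.
fn revoke_recursive(chain: &Chain, root: &str) -> HashSet<String> {
    let mut revoked = HashSet::new();
    let mut stack = vec![root.to_string()];
    while let Some(node) = stack.pop() {
        if revoked.insert(node.clone()) {
            // Children of this node inherit the revocation.
            for (child, parent) in chain {
                if *parent == node {
                    stack.push(child.clone());
                }
            }
        }
    }
    revoked
}

fn main() {
    let mut chain = Chain::new();
    chain.insert("b".into(), "a".into()); // a provisioned b
    chain.insert("c".into(), "b".into()); // b provisioned c
    chain.insert("d".into(), "x".into()); // unrelated branch
    let revoked = revoke_recursive(&chain, "a");
    assert!(revoked.contains("a") && revoked.contains("b") && revoked.contains("c"));
    assert!(!revoked.contains("d")); // untouched: not downstream of "a"
    println!("revoked {} nodes", revoked.len());
}
```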
Gossip + CRDTs, no quorum
A WAN cut between facilities doesn't halt either side. Both partitions keep operating; state merges on reconnect. No consensus-halt failure mode to write into the runbook.
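Why merge-on-reconnect needs no quorum: CRDT merges are commutative and idempotent, so each side can apply the other's state in any order and converge. A minimal sketch using a per-key last-writer-wins register, one common CRDT shape — the names and the LWW choice are illustrative assumptions, not FortrOS's actual replication format:

```rust
use std::collections::HashMap;

/// Per-key last-writer-wins map. The (timestamp, node_id) pair orders
/// writes totally, so merge is commutative and idempotent and both
/// partitions converge on reconnect without any quorum.
#[derive(Clone, Debug, PartialEq)]
struct LwwMap {
    entries: HashMap<String, (u64, String, String)>, // key -> (ts, node, value)
}

impl LwwMap {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Keep the write with the larger (timestamp, node) pair.
    fn set(&mut self, key: &str, ts: u64, node: &str, value: &str) {
        let candidate = (ts, node.to_string(), value.to_string());
        match self.entries.get(key) {
            Some(current) if (current.0, &current.1) >= (ts, &candidate.1) => {}
            _ => {
                self.entries.insert(key.to_string(), candidate);
            }
        }
    }

    fn merge(&mut self, other: &LwwMap) {
        for (key, (ts, node, value)) in &other.entries {
            self.set(key, *ts, node, value);
        }
    }
}

fn main() {
    // Two sites keep writing through a WAN cut...
    let mut east = LwwMap::new();
    let mut west = LwwMap::new();
    east.set("vm-42/state", 10, "east-1", "running");
    west.set("vm-42/state", 12, "west-3", "migrating");
    west.set("vm-7/state", 5, "west-3", "stopped");

    // ...then exchange state on reconnect. Merge direction doesn't matter.
    let mut a = east.clone();
    a.merge(&west);
    let mut b = west.clone();
    b.merge(&east);
    assert_eq!(a, b); // both sides converge on the same state
    println!("converged: {:?}", a.entries.get("vm-42/state"));
}
```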
Rust, a few thousand lines
No systemd, no containerd, no Python in the hot path. Each critical primitive is a few hundred lines of Rust in one workspace. A CISO audit is a tractable ask, not a multi-week OSS-pedigree review.
Three surfaces.
One design language.
From PXE first boot to admin reach-in, the operator sees a single, consistent visual system: IBM Plex set against deep navy, brass accents on active state, granite surfaces, corner-bracketed panels with catalog identifiers. Built like a piece of equipment, not a web app.
Bootstrapper
One-shot installer. PXE in, invite-signed, writes preboot.efi to ESP, reboots into the real boot path.
Preboot
Silent auto-boot when the org is reachable. Three-way recovery card when it isn't: connect, resume, or emergency boot.
Admin
One admin URL for every admin. Same login flow at the kiosk, on a laptop, or over ZTNA reach-in. YubiKey or CAC, session-certs, no ambient credentials.