One binary does not mean one deployment. The same 16 MB artifact deploys across isolated Kubernetes planes — giving you smaller blast radii than multi-service architectures.
Enterprise security teams often assume a single binary means everything fails together. The opposite is true.
The Key Insight
The binary is the constant. The deployment topology is the variable. A single binary deployed across four isolated node pools with distinct network policies gives you smaller blast radii than four different services with shared databases and porous network boundaries. Enterprise architects confuse artifact count with deployment topology — they are independent concerns.
“Everything in one binary means everything fails together.”
This conflates the build artifact with the runtime topology. Each ZenoAuth pod is deployed independently — a crash in the Auth Plane cannot propagate to the Admin or Break-Glass planes. They run on separate node pools, behind separate Envoy Gateway fleets, with separate Kubernetes namespaces and network policies.
“A vulnerability in auth means admin is compromised too.”
With interface splitting, an RCE in the auth flow gives the attacker access to one pod in the Auth Plane. NetworkPolicy blocks all egress except PostgreSQL. There is no network path to Admin or PIM planes — they are behind a separate Gateway that only accepts traffic from VPN CIDRs.
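A minimal sketch of such an egress policy, assuming the Auth Plane runs in a namespace named `auth-plane` and PostgreSQL listens on port 5432 in a `db` namespace (all names are illustrative, not ZenoAuth's actual manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: auth-plane-egress
  namespace: auth-plane          # illustrative namespace name
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution (needed to resolve the database hostname)
    - ports:
        - protocol: UDP
          port: 53
    # Allow PostgreSQL only — no other destination is reachable
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: db
      ports:
        - protocol: TCP
          port: 5432
```

Because no egress rule matches the Admin or PIM namespaces, traffic toward them is dropped at the CNI layer regardless of what a compromised pod attempts.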
The real risks are operational — and this topology eliminates them:

- Misconfigured network policies between microservices — eliminated: no east-west traffic.
- Version skew between identity services — eliminated: single artifact.
- Dependency failures such as Redis or Infinispan — eliminated: PostgreSQL only.
- Slow coordinated rollouts — eliminated: one Deployment per plane.
Every advantage flows from having one artifact, one build pipeline, one version.
The same container image, deployed four ways. Route exposure is controlled at the infrastructure layer — not by compiling different binaries.
Powered by Kubernetes Gateway API with Envoy Gateway as the data-plane implementation.
Every pod runs the identical binary. The infrastructure decides what each pod serves.
The highest-throughput plane. Handles every OAuth authorize, token, introspect, and userinfo request. With no inter-service east-west traffic, scaling is purely horizontal — capacity grows linearly with pod count.
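Linear scaling like this is typically wired up with a HorizontalPodAutoscaler; a sketch under assumed names (`zenoauth-auth` Deployment in an `auth-plane` namespace, bounds matching the scaling table below):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: zenoauth-auth
  namespace: auth-plane
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: zenoauth-auth          # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out before CPU saturates
```

Only the Auth Plane needs an HPA; the Admin, PIM, and Break-Glass planes run at fixed, small replica counts.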
Where operators manage users, rotate keys, configure RBAC, and review audit logs. Isolated behind a VPN-only Gateway — a compromise of the public Auth Plane does not grant access to administrative operations.
Handles just-in-time access elevation — temporary admin privileges with approval workflows and time-bounded SD-JWT credentials. Low-volume but high-sensitivity. Can run on hardened nodes with additional OS-level controls.
The emergency access path when normal authentication is unavailable. Operates independently on its own Gateway and node pool. Remains operational even if Auth, Admin, and PIM planes are all down.
The Kubernetes Gateway API's role-oriented model maps naturally to interface splitting.
Each team owns their layer. Separation of concerns is enforced by the API itself — not by convention or documentation.
Each Gateway provisions its own Envoy deployment. A crash or resource exhaustion in the public Envoy cannot affect the internal or break-glass listeners.
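With Envoy Gateway, each `Gateway` resource provisions a dedicated Envoy fleet. A sketch of a VPN-only Admin Plane Gateway, with illustrative names (`GatewayClass`, certificate, and namespace are assumptions, and the VPN CIDR restriction itself would be applied via a SecurityPolicy or load-balancer source ranges, not shown here):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: admin-gateway
  namespace: admin-plane
spec:
  gatewayClassName: envoy-gateway   # each Gateway gets its own Envoy deployment
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: admin-tls-cert    # illustrative Secret name
      allowedRoutes:
        namespaces:
          from: Same                # only routes in admin-plane may attach
```

`allowedRoutes.namespaces.from: Same` means a route created in another plane's namespace can never attach to this listener — the isolation is enforced by the API, not by review discipline.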
Physical Isolation Layers
Compromise of any single plane gives the attacker exactly this — and no more.
NetworkPolicy blocks all pod-to-pod traffic between namespaces
Each Gateway runs its own Envoy fleet — no shared proxy
Minimal ServiceAccount RBAC, no cluster-level permissions
readOnlyRootFilesystem: true enforced on all pods
allowPrivilegeEscalation: false, all capabilities dropped, non-root UID
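The hardening settings above can be expressed in a pod template's security contexts — a sketch, with the UID and image tag as illustrative values:

```yaml
# Pod-level settings: refuse to run as root
securityContext:
  runAsNonRoot: true
  runAsUser: 10001               # illustrative non-root UID
containers:
  - name: zenoauth
    image: zenoauth:1.0.0        # illustrative image tag
    securityContext:
      readOnlyRootFilesystem: true       # no writable filesystem
      allowPrivilegeEscalation: false    # no setuid/sudo escalation
      capabilities:
        drop: ["ALL"]                    # no Linux capabilities at all
```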
Multi-tenant isolation at the database layer. Queries are scoped to the requesting org.
sqlx verifies every query against the schema at compile time, and queries must use bound parameters — injection-prone string concatenation does not compile.
All database connections use sslmode=require. No plaintext wire protocol.
Signing keys stored with HKDF-derived encryption. Key material is never in plaintext.
Resource exhaustion on Auth Plane's pool cannot starve Break-Glass.
Rust's zero-cost abstractions — and the absence of a garbage-collected runtime — mean dramatically lower resource requirements.
| Plane | CPU Request | Memory Request | Memory Limit |
|---|---|---|---|
| Auth | 250m | 64 Mi | 256 Mi |
| Admin | 100m | 48 Mi | 128 Mi |
| PIM | 100m | 48 Mi | 128 Mi |
| Break-Glass | 100m | 48 Mi | 128 Mi |
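The Auth row of the table translates directly into a container `resources` block (a sketch of what such a spec might look like):

```yaml
# Auth Plane container resources, matching the table above
resources:
  requests:
    cpu: 250m        # guaranteed CPU share for scheduling
    memory: 64Mi     # well above the ~50 MB steady-state footprint
  limits:
    memory: 256Mi    # headroom for load spikes before OOM
```

Omitting a CPU limit while setting a memory limit is a common pattern: it avoids CPU throttling under bursts while still bounding memory per pod.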
A single pod uses ~50 MB at steady state. Compare to Keycloak's 512 MB minimum heap.
| Throughput | Auth Pods | Admin Pods | Total Memory |
|---|---|---|---|
| 100 req/s | 2 | 2 | ~0.5 GB |
| 1,000 req/s | 5 | 2 | ~1.3 GB |
| 10,000 req/s | 15 | 3 | ~3 GB |
| 50,000+ req/s | 20+ (HPA) | 3 | ~5 GB |
An equivalent Keycloak HA deployment at 1,000 req/s requires 12+ GB of memory.
How ZenoAuth's interface-split deployment compares to alternatives.
Start simple. Graduate to interface splitting when your security posture requires it.
Three identical pods behind a single Gateway. Zero-downtime rolling updates, automatic key rotation, PostgreSQL advisory locks for distributed jobs.
Same binary, same image tag, deployed across separate namespaces and node pools. Each plane gets its own Gateway, SecurityPolicy, and network boundary.
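A sketch of what one such per-plane Deployment might look like — the image tag is identical everywhere, and only the namespace and node-pool selector change (all names and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zenoauth-breakglass
  namespace: breakglass-plane      # illustrative namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zenoauth-breakglass
  template:
    metadata:
      labels:
        app: zenoauth-breakglass
    spec:
      nodeSelector:
        zenoauth.io/pool: breakglass   # dedicated node pool (illustrative label)
      containers:
        - name: zenoauth
          image: zenoauth:1.0.0        # same image tag as every other plane
```

Promoting from the single-plane to the split topology is then a matter of stamping out this manifest per plane — no rebuild, no new artifact.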
What changes (the binary does not)
Read the full deployment design document or explore the technical architecture.