The five layers at a glance
Layer 1 — Identity foundation
What you get: Every agent process automatically receives a cryptographic SPIFFE identity (X.509 + JWT). This is the trust root that makes everything else possible — no secrets, no API keys, just certificates.
Layer 2 — Observability
What you get: Full OpenTelemetry instrumentation for every agent operation — distributed traces, LLM token counts, tool call latency, and 42 pre-built Grafana dashboard panels — all with zero configuration.
Layer 3 — Platform services
What you get: Authenticated cloud access, agent-to-agent communication, secret storage, LLM prompt scanning, and sandboxed code execution — all available through the SDK, transparently secured by the identity layer.
Layer 4 — Developer experience
What you get: The Python SDK, CLI, and agent decorators you interact with daily.
@hexr_agent, hexr build, hexr push, and hexr deploy are your entry points to the whole platform.Each layer is independently scalable. Your agent code runs in isolated
tenant-{name} namespaces, while platform services run in the shared hexr-system namespace — your workloads never share infrastructure with other tenants.Layer 1: Identity foundation
Every protection in Hexr flows from this layer. It establishes cryptographic identity for every process, making impersonation and unauthorized access impossible without a valid certificate.What your agents get
- A unique SPIFFE ID per agent process (not just per pod)
- Short-lived X.509 certificates that rotate automatically every hour
- JWT tokens your agents use to exchange for cloud credentials — without any pre-shared secrets
SPIFFE ID format
Automatic registration
Any pod you deploy withhexr deploy is detected by the Auto-Registrar, which watches for pods with hexr.io/* labels and creates SPIRE entries automatically:
OIDC discovery
Hexr publishes a JWKS endpoint that AWS, GCP, and Azure trust natively. This is how your agents get real cloud credentials from JWT tokens — no long-lived keys stored anywhere.Layer 2: Observability
Every operation your agent performs emits OpenTelemetry data automatically. You don’t write any instrumentation code — the SDK handles it at decoration time.Telemetry sources
| Source | What it emits |
|---|---|
Python SDK (@hexr_agent, hexr_tool, hexr_llm) | Agent invocations, tool calls, LLM spans with token counts |
| Envoy proxies | mTLS connection metrics, TLS handshake latency |
| A2A sidecars | Task lifecycle events, message throughput |
What gets instrumented
| Decorator / call | Span name | Key attributes |
|---|---|---|
@hexr_agent | hexr.agent.invoke | duration, status, framework |
hexr_tool() | hexr.tool.invoke | service, region, cache tier hit |
hexr_llm() | hexr.llm.chat | model, tokens in/out, latency, cost |
| Credential cache | hexr.cache.lookup | L1/L2/L3 hit rates, latency |
A2AClient | hexr.a2a.send | target agent, task state, duration |
Layer 3: Platform services
These are the runtime services your agents call through the SDK. Envoy mTLS proxies protect all communication — there are no API keys between services.Service mesh
All traffic between your agent and platform services uses mutual TLS, authenticated by the SPIFFE certificates from Layer 1:| Your agent calls | Reaches |
|---|---|
hexr_tool("aws_s3") | Credential Injector — verifies identity, checks policy, calls AWS STS |
hexr.vault.get("my-secret") | Hexr Vault — AES-256-GCM encrypted, SPIFFE-scoped |
hexr.gateway.call("tool-name") | Hexr Gateway — MCP tool discovery and invocation |
hexr.sandbox.exec(code) | Sandbox — Firecracker microVM, hardware-level isolation |
SDK modules
| Module | Import | What it gives you |
|---|---|---|
| Core | from hexr import hexr_agent, hexr_tool, hexr_llm | Decorator, cloud tools, LLM proxy |
| Vault | import hexr.vault | SPIFFE-native secrets — no env vars |
| Gateway | import hexr.gateway | MCP tool discovery and invocation |
| Sandbox | import hexr.sandbox | Firecracker code execution |
| Browser | import hexr.browser | Headless Chromium in a microVM |
| Guard | import hexr.guard | LLM prompt and output scanning |
| A2A | from hexr.a2a import A2AClient | Agent-to-agent communication |
Layer 4: Developer experience
This is the layer you interact with. Three commands take your Python function from source to a fully secured, observable Kubernetes deployment:Layer 5: Management
The dashboard and REST API for operators and administrators.Dashboard pages
| Page | What you can do |
|---|---|
| Agents | View all deployed agents with live status, container health, and metrics |
| Identity graph | Explore all SPIFFE IDs and their trust relationships in a WebGL visualization |
| Traces | Browse distributed traces with full agent identity attribution |
| Policies | Create and update OPA authorization policies with progressive enforcement |
| Compliance | Track framework status — SOC 2, NIST, ISO, PCI, EU AI Act |
| Settings | Manage tenant configuration, API keys, and compute credits |