Hexr is an identity-first runtime platform for AI agents. Add @hexr_agent to any Python function, run three commands, and your agent is running on Kubernetes with mutual TLS networking, SPIFFE cryptographic identity, authenticated cloud credentials, and OpenTelemetry tracing — none of which require any configuration from you.

From decorator to production in three commands

Write your agent

Use any Python framework — CrewAI, LangChain, OpenAI, or plain Python. Add one decorator.
import openai

from hexr import hexr_agent, hexr_tool, hexr_llm

@hexr_agent(name="research-analyst", tenant="acme-corp")
def analyze(topic: str):
    s3 = hexr_tool("aws_s3")             # authenticated — no keys in code
    client = hexr_llm(openai.OpenAI())   # auto-traced with cost attribution
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Analyze {topic}"}]
    ).choices[0].message.content

Build and push

The CLI scans your source code via AST, discovers every @hexr_agent, hexr_tool(), and hexr_llm() call, generates a Dockerfile and Kubernetes manifests, and injects sidecar containers automatically.
hexr build   # AST discovery → Dockerfile + manifests + sidecar injection
hexr push    # Container image → registry

Deploy

One command schedules your agent as a Kubernetes Pod. Hexr creates a SPIFFE identity, provisions cloud access, establishes mTLS networking, and begins streaming telemetry — no further configuration required.
hexr deploy  # Kubernetes scheduling → SPIFFE registration → ready

What you get automatically

Every deployed agent receives all of the following with zero configuration:

Cryptographic identity

Every agent process gets a unique SPIFFE X.509 certificate — not just the container. Mutual TLS on every connection. Certificates rotate automatically.
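Under the hood, a SPIFFE identity is just a URI of the form spiffe://<trust-domain>/<workload-path>, embedded in the agent's X.509 certificate. A minimal sketch of parsing such an ID (the example ID is hypothetical, not necessarily the exact path scheme Hexr issues):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path)."""
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id!r}")
    return parts.netloc, parts.path

# Hypothetical ID for the agent from the example above.
domain, path = parse_spiffe_id("spiffe://acme-corp/agent/research-analyst")
print(domain, path)  # acme-corp /agent/research-analyst
```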

Zero-secret cloud access

hexr_tool("aws_s3") returns an authenticated boto3 client. Your AWS, GCP, or Azure credentials are exchanged via SPIFFE JWT-SVIDs — no secrets in code or environment variables.

LLM observability

hexr_llm() wraps any LLM client with OpenTelemetry spans. Per-agent cost attribution, token counts, and latency histograms — all correlated by SPIFFE identity.
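Per-agent cost attribution reduces to simple arithmetic over the token counts captured in each span. A sketch with made-up per-million-token prices (real prices vary by model and provider):

```python
# Hypothetical per-million-token prices — real prices vary by model/provider.
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def span_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one traced LLM call."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. one call with 1,200 prompt tokens and 350 completion tokens
cost = span_cost("gpt-4o", 1_200, 350)
print(f"${cost:.6f}")  # $0.006500
```

Summing these per-span costs, grouped by the SPIFFE identity on each span, gives the per-agent totals.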

Zero-trust networking

An Envoy sidecar handles all traffic. Every connection is mutual TLS. OPA enforces policy at every service boundary. No plaintext, ever.
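The sidecar handles all of this for you, but the mechanics of mutual TLS are worth seeing. In ordinary TLS only the client verifies the server; in mTLS the server also requires and verifies a client certificate. A sketch using Python's standard `ssl` module (the SVID file paths are placeholders):

```python
import ssl

# Client side: verifies the server certificate, as in ordinary TLS.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# A workload would also present its own SPIFFE X.509-SVID, e.g.:
# client_ctx.load_cert_chain("svid.pem", "svid-key.pem")  # placeholder paths

# Server side: additionally require and verify the client's certificate,
# which is the "mutual" in mutual TLS.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED
```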

Agent-to-agent communication

Built-in A2A sidecar with JSON-RPC 2.0. Agents discover each other by name, delegate tasks, and stream results — all authenticated over mTLS.
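JSON-RPC 2.0 messages are small JSON envelopes. A delegation request from one agent to another might look like the following sketch — the method name and params are illustrative, not Hexr's actual A2A schema:

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# Hypothetical delegation to the research-analyst agent from earlier.
msg = jsonrpc_request(
    "delegate_task",
    {"agent": "research-analyst", "topic": "Q3 earnings"},
    1,
)
print(msg)
```

On the wire, the A2A sidecar would carry this envelope over the same mTLS channel, so the receiving agent can tie the request to the caller's SPIFFE identity.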

Secure tool access

Import any OpenAPI spec as MCP tools. The Gateway injects credentials from Vault automatically. Code runs in isolated Firecracker microVMs.

Choose your deployment model

Hexr runs the same agent runtime everywhere — the only difference is who manages the infrastructure.

Hexr Cloud

Fully managed SaaS. Sign up, get an API key, and deploy agents in minutes. Pay-as-you-go with Hexr Compute Units (HCU).

Self-hosted

Deploy Hexr in your own infrastructure with Terraform and Helm. Air-gapped, on-premises, or private cloud. You own everything.

Hybrid Cloud (coming soon)

Your agents run in your infrastructure. Hexr manages the control plane. SPIFFE federation bridges the trust boundary.

Framework agnostic

Write agents with any Python framework. Hexr detects and adapts automatically.

CrewAI

Multi-agent crews with role-based agents.

LangChain

Chains, agents, and tools with LangGraph orchestration.

AutoGen

Multi-agent conversation patterns.

Strands Agents

AWS-native agent framework with tool decorators.

OpenAI Swarm

Lightweight multi-agent handoffs.

Pure Python

No framework needed. Just @hexr_agent and go.

Next steps

Architecture overview

Understand the 5-layer platform stack and how identity flows through every component.

Write your first agent

Step-by-step tutorial: from Python function to deployed, observable agent.