`researcher` and `writer` each get separate cloud permissions, separate cost attribution, and separate audit logs, all without any changes to your framework code.
## Supported frameworks

| Framework | Detection | Per-process identity |
|---|---|---|
| CrewAI | `from crewai import ...` | Each crew role gets a SPIFFE ID |
| LangChain | `from langchain import ...` | Agent chains get identity |
| AutoGen | `from autogen import ...` | Each AutoGen agent gets identity |
| Strands | `from strands import ...` | Agent strands get identity |
| OpenAI Swarm | `from swarm import ...` | Swarm agents get identity |
| Pure Python | No framework imports | Single identity per agent |
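The detection column hints at how import-based auto-detection can work. The sketch below is an illustration, not Hexr's actual implementation: it walks a source file's AST and maps the top-level imported module to a framework name. The module-to-framework mapping here is an assumption reconstructed from the table rows above.

```python
import ast

# Hypothetical mapping inferred from the table above; not Hexr's real table.
FRAMEWORK_MODULES = {
    "crewai": "CrewAI",
    "langchain": "LangChain",
    "autogen": "AutoGen",
    "strands": "Strands",
    "swarm": "OpenAI Swarm",
}

def detect_framework(source: str) -> str:
    """Return the first known framework imported by `source`, else 'Pure Python'."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules = [node.module]
        else:
            continue
        for mod in modules:
            top = mod.split(".")[0]  # "langchain.agents" -> "langchain"
            if top in FRAMEWORK_MODULES:
                return FRAMEWORK_MODULES[top]
    return "Pure Python"

print(detect_framework("from crewai import Agent, Crew"))  # CrewAI
print(detect_framework("import json"))                     # Pure Python
```

Scanning the AST rather than regexing the text means aliased imports (`import crewai as c`) and dotted imports are handled uniformly.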
## Steps

1. **Choose your framework and write your agent.** Use your preferred framework. Hexr wraps it with the `@hexr_agent` decorator; no other changes are needed.
2. **Build.** Run `hexr build` and it will detect the framework from your imports. If auto-detection picks the wrong framework, override it with `--framework`.
3. **Push and deploy.** Expected output:

   ```
   researcher → spiffe://hexr.cloud/agent/acme-corp/content-crew/researcher
   writer     → spiffe://hexr.cloud/agent/acme-corp/content-crew/writer
   ```
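The identity URIs above follow a fixed path layout. As an illustration (not Hexr's SDK), the sketch below parses a SPIFFE ID into trust domain, organization, deployment, and role, then mimics the per-role scoping that Hexr enforces with OPA: each role can only reach its own grants. The path layout and the grant table are assumptions made for this example.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> dict:
    """Split a Hexr-style SPIFFE ID into its components.

    Assumed layout: spiffe://<trust-domain>/agent/<org>/<deployment>/<role>
    """
    parsed = urlparse(spiffe_id)
    _kind, org, deployment, role = parsed.path.strip("/").split("/")
    return {"trust_domain": parsed.netloc, "org": org,
            "deployment": deployment, "role": role}

# Hypothetical per-role grants, mirroring the researcher/writer split above.
GRANTS = {
    "researcher": {"s3://research-corpus"},
    "writer": {"s3://draft-bucket"},
}

def can_access(spiffe_id: str, resource: str) -> bool:
    """Allow access only to resources granted to this identity's role."""
    role = parse_spiffe_id(spiffe_id)["role"]
    return resource in GRANTS.get(role, set())

writer_id = "spiffe://hexr.cloud/agent/acme-corp/content-crew/writer"
print(can_access(writer_id, "s3://draft-bucket"))     # True
print(can_access(writer_id, "s3://research-corpus"))  # False
```

Because the role is the last path segment of the identity, policy decisions can key on it directly without any shared configuration inside the agent code.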
## What just happened?

When you deployed the CrewAI crew, Hexr issued a unique SPIFFE identity to each role. Those identities control which cloud services each role can access, and every LLM call is attributed to the specific role that made it. You can view per-role token usage and costs in the Grafana dashboard.

OPA policies enforce role-level cloud access scoping: a `writer` role cannot access resources granted only to `researcher`, even within the same agent deployment.

## Next steps
- **Multi-cloud tools**: Grant each framework role different AWS, GCP, or Azure permissions.
- **LLM observability**: View per-role token usage, cost attribution, and latency in Grafana.
- **Agent-to-agent communication**: Connect your CrewAI crew to other agents for task delegation.
- **SDK reference**: Full API reference for the `@hexr_agent` decorator and framework options.