hexr_tool() returns an authenticated client for any supported cloud service. Instead of managing API keys or IAM credentials, it exchanges your agent's SPIFFE identity for short-lived cloud credentials through a three-tier cache. The returned client is identical to the one you'd get from the provider's own SDK: you use it exactly the same way, just without any credential setup.

Signature

hexr_tool(service_name: str, region: str | None = None, **kwargs) -> Any

Parameters

service_name
string
required
The cloud service to authenticate. Uses the format {provider}_{service}. Examples: "aws_s3", "gcp_bigquery", "azure_storage"
region
string
default: None
Override the default region for this service. Examples: "us-west-2", "europe-west1"

Returns

An authenticated client from the cloud provider’s SDK, ready to use:
Service        | Returns
aws_s3         | boto3.client('s3')
aws_ec2        | boto3.client('ec2')
aws_dynamodb   | boto3.resource('dynamodb')
aws_sqs        | boto3.client('sqs')
aws_lambda     | boto3.client('lambda')
aws_bedrock    | boto3.client('bedrock-runtime')
gcp_bigquery   | google.cloud.bigquery.Client()
gcp_storage    | google.cloud.storage.Client()
gcp_vertexai   | google.cloud.aiplatform client
gcp_pubsub     | google.cloud.pubsub_v1.PublisherClient()
azure_storage  | azure.storage.blob.BlobServiceClient()
azure_cosmosdb | azure.cosmos.CosmosClient()
azure_openai   | openai.AzureOpenAI()
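
The {provider}_{service} naming convention splits cleanly into a provider prefix and a service suffix. A minimal sketch of how a service-name registry might be keyed (the registry contents mirror the table above; the parsing helper is illustrative, not the hexr implementation):

```python
# Illustrative registry keyed by hexr service names.
# Entries mirror the Returns table; values are (module, factory, argument).
SERVICE_REGISTRY = {
    "aws_s3": ("boto3", "client", "s3"),
    "aws_dynamodb": ("boto3", "resource", "dynamodb"),
    "gcp_bigquery": ("google.cloud.bigquery", "Client", None),
    "azure_storage": ("azure.storage.blob", "BlobServiceClient", None),
}

def parse_service_name(service_name: str) -> tuple[str, str]:
    """Split '{provider}_{service}' into its two parts."""
    provider, _, service = service_name.partition("_")
    if not service:
        raise ValueError(f"expected '{{provider}}_{{service}}', got {service_name!r}")
    return provider, service
```

Note that only the first underscore separates provider from service, so multi-word services like "gcp_pubsub" keep their suffix intact.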

Basic usage

from hexr import hexr_agent, hexr_tool

@hexr_agent(name="data-pipeline", tenant="acme-corp")
def process():
    # Returns authenticated boto3 S3 client
    s3 = hexr_tool("aws_s3")
    
    # Use it exactly like normal boto3
    response = s3.list_buckets()
    for bucket in response['Buckets']:
        print(bucket['Name'])
    
    # Upload a file
    s3.upload_file('data.csv', 'my-bucket', 'data.csv')
Output:
my-data-bucket
my-logs-bucket
my-models-bucket

How it works

Calling hexr_tool() triggers a credential resolution chain:
1. Your code calls hexr_tool(): s3 = hexr_tool("aws_s3") starts the credential resolution chain.

2. L1 cache check (in-memory): the SDK checks the in-process L1 cache for existing credentials. On a miss, it proceeds to L2.

3. L2 cache check (Valkey): the SDK checks the cluster-wide Valkey L2 cache. On a miss, it proceeds to credential exchange.

4. Credential exchange: the SDK sends the agent's JWT-SVID to the Credential Injector, which calls AWS AssumeRoleWithWebIdentity. AWS STS returns {AccessKeyId, SecretAccessKey, SessionToken}.

5. Cache and return: the credentials are written to both L1 (in-memory) and L2 (Valkey, cluster-wide), and the SDK returns a boto3 S3 client configured with the temporary credentials.
Your code → hexr_tool() → L1 cache → L2 Valkey → Credential Injector → AWS STS → Temp creds → boto3 client

Multi-cloud example

@hexr_agent(
    name="multi-cloud-analyst",
    tenant="acme-corp",
    resources=["aws_s3", "gcp_bigquery", "azure_storage"]
)
def cross_cloud_analysis():
    # Each call targets a different cloud provider's STS
    s3 = hexr_tool("aws_s3")           # → AWS STS
    bq = hexr_tool("gcp_bigquery")     # → GCP Workload Identity Federation
    blob = hexr_tool("azure_storage")  # → Azure Federated Token
    
    # Query BigQuery; .result() returns a RowIterator (has total_rows)
    rows = bq.query("SELECT * FROM sales.transactions LIMIT 1000").result()
    
    # Store results in S3
    import json
    s3.put_object(
        Bucket="cross-cloud-results",
        Key="analysis.json",
        Body=json.dumps([dict(row) for row in rows])
    )
    
    return f"Processed {rows.total_rows} rows"

Region override

# Default region (from agent config or environment)
s3_default = hexr_tool("aws_s3")

# Specific region
s3_eu = hexr_tool("aws_s3", region="eu-west-1")
s3_ap = hexr_tool("aws_s3", region="ap-southeast-1")

OPA policy scoping

The resources parameter on @hexr_agent tells OPA which services this agent is allowed to access. Calling hexr_tool() for a service not in the list raises a CredentialError:
@hexr_agent(
    name="read-only-agent",
    tenant="acme-corp",
    resources=["aws_s3:read"]  # Only read access
)
def read_data():
    s3 = hexr_tool("aws_s3")     # ✅ Allowed
    ec2 = hexr_tool("aws_ec2")   # ❌ OPA denies — not in resources list
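
The check OPA performs can be approximated as membership over the declared resources, with a :read-style suffix narrowing the grant. A toy version under that assumption (the function, the suffix semantics, and the stub exception are illustrative, not the actual Rego policy):

```python
class CredentialError(Exception):
    """Stand-in for hexr's CredentialError."""

def check_access(resources: list[str], service: str, action: str = "read") -> None:
    """Toy OPA-style check: a bare entry ('aws_s3') grants all actions,
    'service:action' ('aws_s3:read') grants only that action."""
    for entry in resources:
        name, _, scope = entry.partition(":")
        if name == service and (not scope or scope == action):
            return  # allowed
    raise CredentialError(f"OPA denied: {service}:{action} not in {resources}")

# Usage mirroring the example above
check_access(["aws_s3:read"], "aws_s3", "read")       # allowed
try:
    check_access(["aws_s3:read"], "aws_ec2", "read")  # denied: not in list
except CredentialError:
    pass
```

Under these semantics, "aws_s3:read" would also deny an "aws_s3" write, which is why the agent above is read-only.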

Error handling

from hexr import hexr_tool, CredentialError, AuthenticationError

try:
    s3 = hexr_tool("aws_s3")
except AuthenticationError:
    # SPIFFE identity not available (not running in a Hexr pod)
    print("Not running in a Hexr-managed environment")
except CredentialError as e:
    # Credential exchange failed (OPA denied, STS error, etc.)
    print(f"Credential exchange failed: {e}")
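
CredentialError covers both permanent denials (OPA) and potentially transient failures (STS errors), so a caller that wants retries has to decide which is which. A hedged sketch of a backoff wrapper using a stub exception; the retry policy and the transient/permanent split are assumptions, not documented hexr behavior:

```python
import time

class CredentialError(Exception):
    """Stand-in for hexr's CredentialError."""

def with_retries(fn, attempts=3, base_delay=0.01):
    """Retry credential failures with exponential backoff.
    A real caller should inspect the error first: an OPA denial is
    permanent and retrying it only adds load."""
    for attempt in range(attempts):
        try:
            return fn()
        except CredentialError:
            if attempt == attempts - 1:
                raise  # exhausted
            time.sleep(base_delay * 2 ** attempt)

# Usage: a fake exchange that fails once, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise CredentialError("STS throttled")
    return "client"

result = with_retries(flaky)
```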

Observability

Every hexr_tool() call emits OpenTelemetry data automatically:

Span: hexr.tool.invoke
Attributes:
  service: "aws_s3"
  region: "us-west-2"
  cache_tier: "L1" | "L2" | "L3"
  duration_ms: ~0.001 (L1 hit) | ~2.3 (L2 hit) | ~150 (full exchange)

Metrics:
  • hexr.tool.invocations: counter of calls, by service
  • hexr.tool.duration: histogram of call latency
  • hexr.cache.hits / hexr.cache.misses: cache performance
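
A minimal picture of what one such span record contains. The attribute names follow the list above, but the recorder itself is a sketch (a plain list standing in for an OpenTelemetry exporter), not the hexr instrumentation:

```python
import time

SPANS = []  # stand-in for an OpenTelemetry span exporter

def record_invocation(service, region, cache_tier, fn):
    """Wrap one call and record a hexr.tool.invoke-style span for it."""
    start = time.perf_counter()
    result = fn()  # the actual credential lookup / client construction
    SPANS.append({
        "name": "hexr.tool.invoke",
        "service": service,
        "region": region,
        "cache_tier": cache_tier,
        "duration_ms": (time.perf_counter() - start) * 1000,
    })
    return result

# Usage: record a (faked) L1-hit invocation.
record_invocation("aws_s3", "us-west-2", "L1", lambda: "client")
```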