hexr.guard integrates LLM Guard into your agent’s request pipeline to detect prompt injection attempts, secret leakage, invisible Unicode characters, and harmful content — both in prompts you send and responses you receive. When you use hexr_llm() with HEXR_LLM_GUARD_ENABLED=true, scanning happens automatically without any code changes. You can also call the scanning functions directly for custom workflows.
Quick start
API
scan_prompt()
scan_output()
Async versions
Utility functions
Scanners
| Scanner | Detects | Default threshold |
|---|---|---|
| PromptInjection | Attempts to override system instructions | 0.5 |
| Secrets | API keys, tokens, and passwords in prompts | N/A (pattern match) |
| InvisibleText | Hidden Unicode characters that alter LLM behavior | N/A (pattern match) |
| Toxicity | Harmful, offensive, or inappropriate content | 0.7 |
| Relevance | Off-topic responses that don’t match the prompt | 0.5 |
Automatic integration
WhenHEXR_LLM_GUARD_ENABLED=true, hexr_llm() automatically scans prompts before sending and responses after receiving — no code changes needed:
hexr_llm() calls work without modification.
OWASP Top 10 for LLM applications
LLM Guard addresses several risks from the OWASP Top 10 for LLM Applications:| OWASP risk | Guard scanner | Coverage |
|---|---|---|
| LLM01: Prompt Injection | PromptInjection | Direct and indirect injection detection |
| LLM02: Insecure Output Handling | Output scanning | Detects code injection in responses |
| LLM06: Sensitive Information Disclosure | Secrets | Detects leaked API keys, tokens, and PII |
| LLM09: Overreliance | Relevance | Flags off-topic or hallucinated responses |