CSL-Core
Get Started: pip install csl-core installs the standalone compiler & runtime from PyPI.
From Project Chimera to Everyone: CSL is the foundational governance engine originally built for Project Chimera, our flagship Neuro-Symbolic Agent. It is now open source so you can build verifiable, auditable, and constraint-enforced safety layers for any AI system.
CSL-Core (Chimera Specification Language) brings mathematical rigor to AI agent governance.
Instead of relying on "please don't do that" prompts, CSL enforces:
- Deterministic Safety: Rules are enforced by a runtime engine, not the LLM itself.
- Formally Verified: Policies are compiled into Z3 constraints to mathematically prove they have no loopholes.
- Model Agnostic: Works with OpenAI, Anthropic, Llama, or custom agents. Independent of training data.
- Auditable & Verifiable: Every decision generates a proof of compliance, allowing third-party auditing of AI behavior without exposing model weights or proprietary data.
Warning: Alpha (0.2.x). Interfaces may change. Use in production only with thorough testing.
Table of Contents
- Why CSL-Core?
- The Problem
- Key Features
- Quick Start
- Learning Path
- Step 1: Quickstart
- Step 2: Real-World Examples
- Step 3: Production Deployment
- Architecture
- Documentation
- CLI Tools
- MCP Server (Claude Desktop / Cursor)
- LangChain Integration
- API Quick Reference
- Testing
- Plugin Architecture
- Use Cases
- Roadmap
- Contributing
- License
- Contact
Why CSL-Core?
Scenario: You're building an AI agent (with LangChain or any other framework) for a fintech app. The agent can transfer funds, query databases, and send emails. You want to ensure:
- Junior users cannot transfer more than $1,000
- PII cannot be sent to external email domains
- The secrets table cannot be queried by anyone
Traditional Approach (Prompt Engineering):
prompt = """You are a helpful assistant. IMPORTANT RULES:
- Never transfer more than $1000 for junior users
- Never send PII to external emails
- Never query the secrets table
[10 more pages of rules...]"""
Problems:
- The LLM can be prompt-injected ("Ignore previous instructions...")
- Rules are probabilistic (99% compliance is not 100%)
- No auditability (which rule was violated?)
- Fragile (adding a rule might break existing behavior)
CSL-Core Approach:
1. Define policy (my_policy.csl)
CONFIG {
ENFORCEMENT_MODE: BLOCK
CHECK_LOGICAL_CONSISTENCY: TRUE
}
DOMAIN AgentGuard {
VARIABLES {
user_tier: {"JUNIOR", "SENIOR"}
amount: 0..100000
}
STATE_CONSTRAINT junior_limit {
WHEN user_tier == "JUNIOR"
THEN amount <= 1000
}
}
2. Load and enforce (3 lines)
guard = load_guard("my_policy.csl")
safe_tools = guard_tools(tools, guard, inject={"user_tier": "JUNIOR"})
agent = create_openai_tools_agent(llm, safe_tools, prompt)
3. Sleep well
- Mathematically proven consistent (Z3)
- LLM cannot bypass (enforcement is external)
- Every violation logged with constraint name
The Problem
Modern AI is inherently probabilistic. While this enables creativity, it makes systems fundamentally unreliable for critical constraints:
- Prompts are suggestions, not rules
- Fine-tuning biases behavior but guarantees nothing
- Post-hoc classifiers add another probabilistic layer (more AI watching AI)
CSL-Core flips this model: Instead of asking AI to behave, you force it to comply using an external, deterministic logic layer.
Key Features
Formal Verification (Z3)
Policies are mathematically proven consistent at compile time. Contradictions, unreachable rules, and logic errors are caught before deployment.
Low-Latency Runtime
Compiled policies execute as lightweight Python functors. No heavy parsing, no API calls, just pure deterministic evaluation.
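As a mental model, each compiled constraint is just an implication over a context dict. The sketch below is illustrative only, not the compiler's actual output:

```python
# Illustrative sketch: what a compiled WHEN/THEN constraint looks like
# conceptually as a plain Python functor (NOT the compiler's real output).

def compile_constraint(when, then):
    """A WHEN/THEN rule holds when WHEN is false or THEN is true."""
    return lambda ctx: (not when(ctx)) or then(ctx)

# The strict_delete rule from the Quick Start policy:
strict_delete = compile_constraint(
    when=lambda ctx: ctx["action"] == "DELETE",
    then=lambda ctx: ctx["user_level"] >= 4,
)

print(strict_delete({"action": "DELETE", "user_level": 2}))  # False (blocked)
print(strict_delete({"action": "READ", "user_level": 0}))    # True (allowed)
```

Evaluating such a closure is a handful of dict lookups and comparisons, which is why no solver or parser is needed at request time.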
LangChain-First Integration
Drop-in protection for LangChain agents with 3 lines of code:
- Context Injection: Pass runtime context (user roles, environment) that the LLM cannot override
- Tool Name Injection: Optionally, via tool_field, tool names are auto-injected into policy evaluation
- Custom Context Mappers: Map complex LangChain inputs to policy variables
- Zero Boilerplate: Wrap tools, chains, or entire agents with a single function call
Factory Pattern for Convenience
One-line policy loading with automatic compilation and verification:
guard = load_guard("policy.csl") # Parse + Compile + Verify in one call
Fail-Closed by Design
If something goes wrong (missing data, type mismatch, evaluation error), the system blocks by default. Safety over convenience.
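As a sketch of the idea (illustrative only, assuming none of CSL-Core's actual internals), fail-closed evaluation treats any exception during rule checking as a block:

```python
# Illustrative sketch of fail-closed semantics: a missing key, type
# mismatch, or any other evaluation error results in a block, never a pass.

def fail_closed_verify(constraints, ctx):
    for name, check in constraints.items():
        try:
            if not check(ctx):
                return False, f"violated: {name}"
        except Exception as exc:  # missing key, type error, ...
            return False, f"evaluation error in {name}: {exc!r}"
    return True, "ok"

rules = {
    "strict_delete": lambda c: c["action"] != "DELETE" or c["user_level"] >= 4,
}

# user_level is missing entirely -> KeyError -> blocked, not allowed:
print(fail_closed_verify(rules, {"action": "DELETE"}))
```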
Drop-in Integrations
Native support for:
- LangChain (Tools, Runnables, LCEL chains)
- Python Functions (any callable)
- REST APIs (via plugins)
Built-in Observability
Every decision produces an audit trail with:
- Triggered rules
- Violations (if any)
- Latency metrics
- Optional Rich terminal visualization
Production Tests
- Smoke tests (parser, compiler)
- Logic verification (Z3 engine integrity)
- Runtime decisions (allow vs block)
- Framework integrations (LangChain)
- CLI end-to-end tests
- Real-world example policies with full test coverage
Run the entire test suite:
pytest # tests covering all components
Quick Start (60 Seconds)
Installation
pip install csl-core
Your First Policy
Create my_policy.csl:
CONFIG {
ENFORCEMENT_MODE: BLOCK
CHECK_LOGICAL_CONSISTENCY: TRUE
}
DOMAIN MyGuard {
VARIABLES {
action: {"READ", "WRITE", "DELETE"}
user_level: 0..5
}
STATE_CONSTRAINT strict_delete {
WHEN action == "DELETE"
THEN user_level >= 4
}
}
Test It (No Code Required!)
CSL-Core provides a powerful CLI for testing policies without writing any Python code:
# 1. Verify policy (syntax + Z3 formal verification)
cslcore verify my_policy.csl
# 2. Test with single input
cslcore simulate my_policy.csl --input '{"action": "DELETE", "user_level": 2}'
# 3. Interactive REPL for rapid testing
cslcore repl my_policy.csl
> {"action": "DELETE", "user_level": 2}
allowed=False violations=1 warnings=0
> {"action": "DELETE", "user_level": 5}
allowed=True violations=0 warnings=0
Use in Code (Python)
from chimera_core import load_guard, ChimeraError
# Factory method - handles parsing, compilation, and Z3 verification
guard = load_guard("my_policy.csl")
# This will pass
result = guard.verify({"action": "READ", "user_level": 1})
print(result.allowed) # True
# This will be blocked
try:
guard.verify({"action": "DELETE", "user_level": 2})
except ChimeraError as e:
print(f"Blocked: {e}")
Use in Code (LangChain)
from chimera_core import load_guard
from chimera_core.plugins.langchain import guard_tools
# 1. Load policy (auto-compile with Z3 verification)
guard = load_guard("my_policy.csl")
# 2. Wrap tools with policy enforcement
safe_tools = guard_tools(
tools=[search_tool, delete_tool, transfer_tool],
guard=guard,
inject={"user_level": 2, "environment": "prod"}, # Runtime context the LLM can't override
tool_field="tool", # Auto-inject tool name into policy context
enable_dashboard=True # Optional: Rich terminal visualization
)
# 3. Use in agent - enforcement is automatic and transparent
agent = create_openai_tools_agent(llm, safe_tools, prompt)
executor = AgentExecutor(agent=agent, tools=safe_tools)
What happens under the hood:
- Every tool call is intercepted before execution
- Policy is evaluated with injected context + tool inputs
- Violations block execution with detailed error messages
- Allowed actions pass through with zero overhead
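The interception pattern can be sketched in plain Python (an illustration of the mechanism only; PolicyViolation and the check function below are stand-ins, not guard_tools' real internals):

```python
# Illustrative sketch of the tool-interception pattern. The merge order
# means injected context always wins over LLM-supplied tool input.

class PolicyViolation(Exception):
    pass

def wrap_tool(tool_fn, tool_name, check, inject):
    def guarded(**tool_input):
        # Injected context is applied last, so the LLM cannot override it.
        ctx = {**tool_input, "tool": tool_name, **inject}
        allowed, reason = check(ctx)
        if not allowed:
            raise PolicyViolation(reason)
        return tool_fn(**tool_input)  # only runs if the policy allows it
    return guarded

def check(ctx):  # stand-in for guard.verify
    if ctx["tool"] == "delete_record" and ctx["user_level"] < 4:
        return False, "strict_delete violated"
    return True, "ok"

delete_record = wrap_tool(
    lambda **kw: "deleted", "delete_record", check, inject={"user_level": 2}
)
```

Calling delete_record(record_id=1) raises PolicyViolation here, because the injected user_level of 2 fails the stand-in check before the tool body ever runs.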
Learning Path
CSL-Core provides a structured learning journey from beginner to production:
Step 1: Quickstart (5 minutes) → quickstart/
No-code exploration of CSL basics:
cd quickstart/
cslcore verify 01_hello_world.csl
cslcore simulate 01_hello_world.csl --input '{"amount": 500, "destination": "EXTERNAL"}'
What's included:
- 01_hello_world.csl - Simplest possible policy (1 rule)
- 02_age_verification.csl - Multi-rule logic with numeric comparisons
- 03_langchain_template.py - Copy-paste LangChain integration
Goal: Understand CSL syntax and CLI workflow in 5 minutes.
Step 2: Real-World Examples (30 minutes) → examples/
Ready-to-use policies with comprehensive test coverage:
cd examples/
python run_examples.py # Run all examples with test suites
python run_examples.py agent_tool_guard # Run specific example
Available Examples:
| Example | Domain | Complexity | Key Features |
|---|---|---|---|
| agent_tool_guard.csl | AI Safety | ⭐⭐ | RBAC, PII protection, tool permissions |
| chimera_banking_case_study.csl | Finance | ⭐⭐⭐ | Risk scoring, VIP tiers, sanctions |
| dao_treasury_guard.csl | Web3 Governance | ⭐⭐⭐⭐ | Multi-sig, timelocks, emergency bypass |
Interactive Demos:
# See LangChain integration with visual dashboard
python examples/integrations/langchain_agent_demo.py
Goal: Explore production patterns and run comprehensive test suites.
Step 3: Production Deployment
Once you understand the patterns, integrate into your application:
- Write your policy (or adapt from examples)
- Test thoroughly using CLI batch simulation
- Integrate with 3-line LangChain wrapper
- Deploy with CI/CD verification (policy as code)
See Getting Started Guide for detailed walkthrough.
Architecture: The 3-Stage Pipeline
CSL-Core separates Policy Definition from Runtime Enforcement through a clean 3-stage architecture:
┌───────────────────────────────────────────────────────────────────
│ 1. COMPILER (compiler.py)
│ .csl file → AST → Intermediate Representation (IR) → Artifact
│   • Syntax validation
│   • Semantic validation
│   • Optimized functor generation
└───────────────────────────────────────────────────────────────────
                                 ↓
┌───────────────────────────────────────────────────────────────────
│ 2. VERIFIER (verifier.py)
│ Z3 Theorem Prover - Static Analysis
│   • Reachability analysis
│   • Contradiction detection
│   • Rule shadowing detection
│   If verification fails → Policy WILL NOT compile
└───────────────────────────────────────────────────────────────────
                                 ↓
┌───────────────────────────────────────────────────────────────────
│ 3. RUNTIME GUARD (runtime.py)
│ Deterministic Policy Enforcement
│   • Fail-closed evaluation
│   • Zero dependencies (pure Python functors)
│   • Audit trail generation
│   • <1ms latency for typical policies
└───────────────────────────────────────────────────────────────────
Key Insight: Heavy computation (parsing, Z3 verification) happens once at compile time. Runtime is pure evaluation: no symbolic solver, no heavy libraries.
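Because CSL variables range over finite domains (enumerations and bounded integers), the consistency question is decidable. CSL-Core answers it with Z3, but the idea can be pictured as exhaustive search (illustrative only):

```python
# Illustrative only: CSL-Core uses Z3, but over finite domains the
# satisfiability question behind "contradiction detection" can be
# pictured as brute-force enumeration of every possible state.
from itertools import product

domains = {"action": ["READ", "WRITE", "DELETE"], "user_level": range(0, 6)}
rules = {
    "strict_delete": lambda c: c["action"] != "DELETE" or c["user_level"] >= 4,
}

def find_contradiction(domains, rules):
    """Return None if some state satisfies every rule; otherwise the
    rules are jointly unsatisfiable (a contradiction)."""
    names = list(domains)
    for combo in product(*domains.values()):
        ctx = dict(zip(names, combo))
        if all(check(ctx) for check in rules.values()):
            return None  # at least one allowed state exists
    return "no state can satisfy all rules"

print(find_contradiction(domains, rules))  # None -> consistent
```

Z3 answers the same question symbolically, without enumerating states, which is what keeps verification fast even for large domains.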
Documentation
| Document | Description |
|---|---|
| Getting Started | Installation, first policy, integration guide |
| Syntax Specification | Complete CSL language reference |
| CLI Reference | Command-line tools (verify, simulate, repl) |
| Philosophy | Design principles and vision |
| What is CSL? | Deep dive into the problem & solution |
Example Policies Deep Dive
The examples/ directory contains policies with comprehensive test suites. Each example demonstrates real-world patterns and includes:
- Complete .csl policy file
- JSON test cases (allow + block scenarios)
- Automated test runner with visual reports
- Expected violations for each blocked case
Running Examples
Run all examples with the test runner:
python examples/run_examples.py
Run a specific example:
python examples/run_examples.py agent_tool_guard
python examples/run_examples.py banking
Show detailed failures:
python examples/run_examples.py --details
Policy Pattern Library
Common patterns extracted from examples for reuse:
Pattern 1: Role-Based Access Control (RBAC)
STATE_CONSTRAINT admin_only {
WHEN operation == "SENSITIVE_ACTION"
THEN user_role MUST BE "ADMIN"
}
Source: agent_tool_guard.csl (lines 30-33)
Pattern 2: PII Protection
STATE_CONSTRAINT no_external_pii {
WHEN pii_present == "YES"
THEN destination MUST NOT BE "EXTERNAL"
}
Source: agent_tool_guard.csl (lines 55-58)
Pattern 3: Progressive Limits by Tier
STATE_CONSTRAINT basic_tier_limit {
WHEN tier == "BASIC"
THEN amount <= 1000
}
STATE_CONSTRAINT premium_tier_limit {
WHEN tier == "PREMIUM"
THEN amount <= 50000
}
Source: chimera_banking_case_study.csl (lines 28-38)
Pattern 4: Hard Sanctions (Fail-Closed)
STATE_CONSTRAINT sanctions {
ALWAYS True // Always enforced
THEN country MUST NOT BE "SANCTIONED_COUNTRY"
}
Source: chimera_banking_case_study.csl (lines 22-25)
Pattern 5: Emergency Bypass
// Normal rule with bypass
STATE_CONSTRAINT normal_with_bypass {
WHEN condition AND action != "EMERGENCY"
THEN requirement
}
// Emergency gate (higher threshold)
STATE_CONSTRAINT emergency_gate {
WHEN action == "EMERGENCY"
THEN approval_count >= 10
}
Source: dao_treasury_guard.csl (lines 60-67)
See examples/README.md for the complete policy catalog.
Testing
CSL-Core includes a comprehensive test suite following the Testing Pyramid:
# Run all tests
pytest
# Run specific categories
pytest tests/integration # LangChain plugin tests
pytest tests/test_cli_e2e.py # End-to-end CLI tests
pytest -k "verifier" # Z3 verification tests
Test Coverage:
- Smoke tests (parser, compiler)
- Logic verification (Z3 engine integrity)
- Runtime decisions (allow vs block scenarios)
- LangChain integration (tool wrapping, LCEL gates)
- CLI end-to-end (subprocess simulation)
See tests/README.md for detailed test architecture.
LangChain Integration Deep Dive
CSL-Core provides the easiest way to add deterministic safety to LangChain agents. No prompting required, no fine-tuning needed: just wrap and run.
Why LangChain + CSL-Core?
| Problem | LangChain Alone | With CSL-Core |
|---|---|---|
| Prompt Injection | LLM can be tricked to bypass rules | Policy enforcement happens before tool execution |
| Role-Based Access | Must trust LLM to respect roles | Roles injected at runtime, LLM cannot override |
| Business Logic | Encoded in fragile prompts | Mathematically verified constraints |
| Auditability | Parse LLM outputs after the fact | Every decision logged with violations |
Basic Tool Wrapping
from chimera_core import load_guard
from chimera_core.plugins.langchain import guard_tools
# Your existing tools
from langchain.tools import DuckDuckGoSearchRun, ShellTool
tools = [DuckDuckGoSearchRun(), ShellTool()]
# Load policy
guard = load_guard("agent_policy.csl")
# Wrap tools (one line)
safe_tools = guard_tools(tools, guard)
# Use in agent - that's it!
agent = create_openai_tools_agent(llm, safe_tools, prompt)
Advanced: Context Injection
The inject parameter lets you pass runtime context that the LLM cannot override:
safe_tools = guard_tools(
tools=tools,
guard=guard,
inject={
"user_role": current_user.role, # From your auth system
"environment": os.getenv("ENV"), # prod/dev/staging
"tenant_id": session.tenant_id, # Multi-tenancy
"rate_limit_remaining": quota.remaining # Dynamic limits
}
)
Policy Example (agent_policy.csl):
CONFIG {
ENFORCEMENT_MODE: BLOCK
CHECK_LOGICAL_CONSISTENCY: TRUE
ENABLE_FORMAL_VERIFICATION: FALSE
ENABLE_CAUSAL_INFERENCE: FALSE
INTEGRATION: "native"
}
DOMAIN AgentGuard {
VARIABLES {
tool: String
user_role: {"ADMIN", "USER", "ANALYST"}
environment: {"prod", "dev"}
}
// Block shell access in production
STATE_CONSTRAINT no_shell_in_prod {
WHEN environment == "prod"
THEN tool MUST NOT BE "ShellTool"
}
// Only admins can delete
STATE_CONSTRAINT admin_only_delete {
WHEN tool == "DeleteRecordTool"
THEN user_role MUST BE "ADMIN"
}
}
Advanced: Custom Context Mapping
Map complex LangChain inputs to your policy variables:
from typing import Dict

def my_context_mapper(tool_input: Dict) -> Dict:
"""
LangChain tools receive kwargs like:
{"query": "...", "limit": 10, "metadata": {...}}
Your policy expects:
{"search_query": "...", "result_limit": 10, "source": "..."}
"""
return {
"search_query": tool_input.get("query"),
"result_limit": tool_input.get("limit"),
"source": tool_input.get("metadata", {}).get("source", "unknown")
}
safe_tools = guard_tools(
tools=tools,
guard=guard,
context_mapper=my_context_mapper
)
Advanced: LCEL Chain Protection
Insert a policy gate into LCEL chains:
from chimera_core.plugins.langchain import gate
chain = (
{"query": RunnablePassthrough()}
| gate(guard, inject={"user_role": "USER"}) # Policy checkpoint
| prompt
| llm
| StrOutputParser()
)
# If policy blocks, chain stops with ChimeraError
result = chain.invoke({"query": "DELETE * FROM users"}) # Blocked!
Live Demo
See a complete working example in examples/integrations/langchain_agent_demo.py:
- Simulated financial agent with transfer tools
- Role-based access control (USER vs ADMIN)
- PII protection rules
- Rich terminal visualization
python examples/integrations/langchain_agent_demo.py
Plugin Architecture
CSL-Core provides a universal plugin system for integrating with AI frameworks.
Available Plugins:
- LangChain (chimera_core.plugins.langchain)
- LlamaIndex (coming soon)
- AutoGen (coming soon)
Create Your Own Plugin:
from chimera_core.plugins.base import ChimeraPlugin
class MyFrameworkPlugin(ChimeraPlugin):
def process(self, input_data):
# Enforce policy
self.run_guard(input_data)
# Continue framework execution
return input_data
All lifecycle behavior (fail-closed semantics, visualization, context mapping) is inherited automatically from ChimeraPlugin.
See chimera_core/plugins/README.md for the integration guide.
API Quick Reference
Loading Policies (Factory Pattern)
from chimera_core import load_guard, create_guard_from_string
# From file (recommended - handles paths automatically)
guard = load_guard("policies/my_policy.csl")
# From string (useful for testing or dynamic policies)
policy_code = """
CONFIG {
ENFORCEMENT_MODE: BLOCK
CHECK_LOGICAL_CONSISTENCY: TRUE
}
DOMAIN Test {
VARIABLES { x: 0..10 }
STATE_CONSTRAINT limit { ALWAYS True THEN x <= 5 }
}
"""
guard = create_guard_from_string(policy_code)
Runtime Verification
# Basic verification
result = guard.verify({"x": 3})
print(result.allowed) # True
print(result.violations) # []
# Error handling
from chimera_core import ChimeraError
try:
guard.verify({"x": 15})
except ChimeraError as e:
print(f"Blocked: {e}")
print(f"Violations: {e.violations}")
LangChain Integration
from chimera_core.plugins.langchain import guard_tools, gate
# Tool wrapping
safe_tools = guard_tools(
tools=[tool1, tool2],
guard=guard,
inject={"user": "alice"},
tool_field="tool_name",
enable_dashboard=True
)
# LCEL gate
chain = prompt | gate(guard) | llm
Runtime Configuration
from chimera_core import RuntimeConfig
config = RuntimeConfig(
raise_on_block=True, # Raise ChimeraError on violations
collect_all_violations=True, # Report all violations, not just first
missing_key_behavior="block", # "block", "warn", or "ignore"
evaluation_error_behavior="block"
)
guard = load_guard("policy.csl", config=config)
CLI Tools: The Power of No-Code Policy Development
CSL-Core's CLI is not just a utility; it's a complete development environment for policies. Test, debug, and deploy without writing a single line of Python.
Why CLI-First?
- Instant Feedback: Test policy changes in milliseconds
- Interactive Debugging: REPL for exploring edge cases
- CI/CD Ready: Integrate verification into your pipeline
- Batch Testing: Run hundreds of test cases with visual reports
- Rich Visualization: See exactly which rules triggered
1. verify: Compile & Formally Verify
The verify command is your first line of defense. It checks syntax, semantics, and mathematical consistency using Z3.
# Basic verification
cslcore verify my_policy.csl
# Output:
#   Compiling Domain: MyGuard
#   • Validating Syntax... OK
#   └── Verifying Logic Model (Z3 Engine)... Mathematically Consistent
#   • Generating IR... OK
Advanced Debugging:
# Show Z3 trace on verification failures
cslcore verify complex_policy.csl --debug-z3
Skip verification (not recommended for production):
cslcore verify policy.csl --skip-verify
2. simulate: Test Without Writing Code
The simulate command is your policy test harness. Pass inputs, see decisions, validate behavior.
Single Input Testing:
# Test one scenario
cslcore simulate agent_policy.csl \
--input '{"tool": "TRANSFER_FUNDS", "user_role": "ADMIN", "amount": 5000}'
# Output:
# ALLOWED
Batch Testing with JSON Files:
Create test_cases.json:
[
{
"name": "Junior user tries transfer",
"input": {"tool": "TRANSFER_FUNDS", "user_role": "JUNIOR", "amount": 100},
"expected": "BLOCK"
},
{
"name": "Admin transfers within limit",
"input": {"tool": "TRANSFER_FUNDS", "user_role": "ADMIN", "amount": 4000},
"expected": "ALLOW"
}
]
Run all tests:
cslcore simulate agent_policy.csl --input-file test_cases.json --dashboard
Machine-Readable Output (CI/CD):
# JSON output for automated testing
cslcore simulate policy.csl --input-file tests.json --json --quiet
# Output to file (JSON Lines format)
cslcore simulate policy.csl --input-file tests.json --json-out results.jsonl
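In CI, the resulting JSON Lines file can be summarized with a few lines of stdlib Python. Only the allowed field is relied on here; the rest of each record's layout is an assumption for illustration:

```python
# Sketch of a CI helper that tallies a results.jsonl file produced by
# `cslcore simulate ... --json-out`. Field names beyond "allowed" are
# assumptions for illustration.
import json

def summarize(jsonl_text):
    counts = {"allowed": 0, "blocked": 0}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        counts["allowed" if record["allowed"] else "blocked"] += 1
    return counts

sample = '{"allowed": true}\n{"allowed": false}\n{"allowed": true}\n'
print(summarize(sample))  # {'allowed': 2, 'blocked': 1}
```

A nonzero blocked count can then fail the build, mirroring the grep-based check in the GitHub Actions example.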
Runtime Behavior Flags:
# Dry-run: Report what WOULD be blocked without actually blocking
cslcore simulate policy.csl --input-file tests.json --dry-run
# Fast-fail: Stop at first violation
cslcore simulate policy.csl --input-file tests.json --fast-fail
# Lenient mode: Missing keys warn instead of block
cslcore simulate policy.csl \
--input '{"incomplete": "data"}' \
--missing-key-behavior warn
3. repl: Interactive Policy Development
The REPL (Read-Eval-Print Loop) is the fastest way to explore policy behavior. Load a policy once, then test dozens of scenarios interactively.
cslcore repl my_policy.csl --dashboard
Interactive Session:
cslcore> {"action": "DELETE", "user_level": 2}
BLOCKED: Constraint 'strict_delete' violated.
  Rule: user_level >= 4 (got: 2)
cslcore> {"action": "DELETE", "user_level": 5}
ALLOWED
cslcore> {"action": "READ", "user_level": 0}
ALLOWED
cslcore> exit
Use Cases:
- Rapid Prototyping: Test edge cases without reloading
- Debugging: Explore why a specific input is blocked
- Learning: Understand policy behavior interactively
- Demos: Show stakeholders real-time policy decisions
CLI in CI/CD Pipelines
Example: GitHub Actions
name: Verify Policies
on: [push, pull_request]
jobs:
verify:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install CSL-Core
run: pip install csl-core
- name: Verify all policies
run: |
for policy in policies/*.csl; do
cslcore verify "$policy" || exit 1
done
- name: Run test suites
run: |
cslcore simulate policies/prod_policy.csl \
--input-file tests/prod_tests.json \
--json --quiet > results.json
- name: Check for violations
run: |
if grep -q '"allowed": false' results.json; then
echo "Policy tests failed"
exit 1
fi
Exit Codes for Automation:
| Code | Meaning | Use Case |
|---|---|---|
| 0 | Success / Allowed | Policy valid or input allowed |
| 2 | Compilation Failed | Syntax error or Z3 contradiction |
| 3 | System Error | Internal error or missing file |
| 10 | Runtime Blocked | Policy violation detected |
Advanced CLI Usage
Debug Z3 Solver Issues:
# When verification fails with internal errors
cslcore verify complex_policy.csl --debug-z3 > z3_trace.log
Skip Validation Steps:
# Skip semantic validation (not recommended)
cslcore verify policy.csl --skip-validate
# Skip Z3 verification (DANGEROUS - only for development)
cslcore verify policy.csl --skip-verify
Custom Runtime Behavior:
# Block on missing keys (default)
cslcore simulate policy.csl --input '{"incomplete": "data"}' --missing-key-behavior block
# Warn on evaluation errors instead of blocking
cslcore simulate policy.csl --input '{"bad": "type"}' --evaluation-error-behavior warn
See CLI Reference for complete documentation.
MCP Server (Claude Desktop / Cursor / VS Code)
CSL-Core includes a built-in Model Context Protocol server. Write, verify, and enforce safety policies directly from your AI assistant โ no code required.
Setup
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"csl-core": {
"command": "uv",
"args": ["run", "--with", "csl-core[mcp]", "csl-core-mcp"]
}
}
}
Restart Claude Desktop. A connector icon confirms the connection.
Available Tools
| Tool | What It Does |
|---|---|
| verify_policy | Z3 formal verification: catches contradictions at compile time |
| simulate_policy | Test policies against JSON inputs: returns ALLOWED/BLOCKED |
| explain_policy | Human-readable summary of any CSL policy |
| scaffold_policy | Generate a CSL template from a plain-English description |
Example Conversation
You: "Write me a safety policy that prevents my AI agent from making transfers over $5000 without admin approval"
Claude: uses scaffold_policy → you edit → verify_policy catches a contradiction → you fix → simulate_policy confirms it works
Install with MCP support
pip install "csl-core[mcp]"
Use Cases
CSL-Core is ready for:
Financial Services
- Transaction limits by user tier
- Sanctions enforcement
- Risk-based blocking
- Fraud prevention rules
AI Agent Safety
- Tool permission management
- PII protection
- Rate limiting
- Dangerous operation blocking
DAO Governance
- Multi-sig requirements
- Timelock enforcement
- Reputation-based access
- Treasury protection
Healthcare
- HIPAA compliance rules
- Patient data access control
- Treatment protocol validation
- Audit trail requirements
Legal & Compliance
- Regulatory rule enforcement
- Contract validation
- Policy adherence verification
- Automated compliance checks
**CSL-Core is currently in Alpha and is provided "as is", without warranties of any kind; the developers accept no liability for any direct or indirect damages resulting from its use.**
Roadmap
Completed
- Core language (CSL syntax, parser, AST)
- Z3 formal verification engine
- Python runtime with fail-closed semantics
- LangChain integration (Tools, LCEL, Runnables)
- Factory pattern for easy policy loading
- CLI tools (verify, simulate, repl)
- Rich terminal visualization
- Comprehensive test suite
- Custom context mappers for framework integration
- MCP Server (Claude Desktop, Cursor, VS Code integration)
In Progress
- Policy versioning & migration tools
- Web-based policy editor
- LangGraph integration
Planned
- LlamaIndex integration
- AutoGen integration
- Haystack integration
- Policy marketplace (community-contributed policies)
- Cloud deployment templates (AWS Lambda, GCP Functions, Azure Functions)
- Policy analytics dashboard
- Multi-policy composition
- Hot-reload support for development
Enterprise (Commercial)
- TLA+ temporal logic verification
- Causal inference engine
- Multi-tenancy support
- Advanced policy migration tooling
- Priority support & SLA
Contributing
We welcome contributions! CSL-Core is open-source and community-driven.
Ways to Contribute:
- Report bugs via GitHub Issues
- Suggest features or improvements
- Improve documentation
- Add test cases
- Create example policies for new domains
- Build framework integrations (LlamaIndex, AutoGen, Haystack)
- Share your LangChain use cases and integration patterns
High-Impact Contributions We'd Love:
- More real-world example policies (healthcare, legal, supply chain)
- Framework integrations (see chimera_core/plugins/base.py for the pattern)
- Web-based policy editor
- Policy analytics and visualization tools
- Additional test coverage for edge cases
Contribution Process:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes with tests
- Run the test suite (pytest)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
License & Open-Core Model
Core (This Repository)
CSL-Core is released under the Apache License 2.0. See LICENSE for details.
What's included in the open-source core:
- Complete CSL language (parser, compiler, runtime)
- Z3-based formal verification
- LangChain integration
- CLI tools (verify, simulate, repl)
- Rich terminal visualization
- All example policies and test suites
Enterprise Edition (Optional, Under Research & Development)
Advanced capabilities for large-scale deployments:
- TLA+ Temporal Logic Verification: Beyond Z3, full temporal model checking
- Causal Inference Engine: Counterfactual analysis and causal reasoning
- Multi-tenancy Support: Policy isolation and tenant-scoped enforcement
- Policy Migration Tools: Version control and backward compatibility
- Cloud Deployment Templates: Production-ready Kubernetes/Lambda configs
- Priority Support: SLA-backed engineering support
Acknowledgments
CSL-Core is built on the shoulders of giants:
- Z3 Theorem Prover - Microsoft Research (Leonardo de Moura, Nikolaj Bjørner)
- LangChain - Harrison Chase and contributors
- Rich - Will McGugan (terminal visualization)
Contact & Support
- GitHub Issues: Report bugs or request features
- Discussions: Ask questions, share use cases
- Email: [email protected]
Star History
If you find CSL-Core useful, please consider giving it a star on GitHub! It helps others discover the project.
Built with ❤️ by the Chimera project
Making AI systems mathematically safe, one policy at a time.