Managed infrastructure for running untrusted code. You build the product. We handle isolation, scaling, and security.
import json

from cerver import Sandbox
from openai import OpenAI

client = OpenAI()
sandbox = Sandbox()

def handle_message(user_msg):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_msg}],
        tools=[{
            "type": "function",
            "function": {
                "name": "run_code",
                "parameters": {
                    "type": "object",
                    "properties": {"code": {"type": "string"}},
                    "required": ["code"],
                },
            },
        }],
    )
    # The generated code arrives as a JSON string in the tool call's arguments
    tool_call = response.choices[0].message.tool_calls[0]
    code = json.loads(tool_call.function.arguments)["code"]
    result = sandbox.run(code)
    return result.output
User: Analyze the sentiment of our latest customer reviews
Agent: I'll analyze that for you.
▸ Running code...
Agent: Found 847 reviews. 72% positive, 18% neutral, 10% negative. Top keywords: "fast", "reliable", "easy"
Sandboxes boot in ~150ms. No waiting. Agents run immediately.
Each sandbox is a separate microVM. Network controls, resource limits, no shared state.
Python, Node, Go, Rust. Full Linux. Install packages, run binaries, access filesystems.
Billed for actual execution time. No idle costs. Scales to zero automatically.
Logs, stdout/stderr, execution traces. Know what your agents are doing.
Python SDK and REST API. Create, execute, terminate. Three methods.
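The create/execute/terminate lifecycle can be sketched as below. This is a local stand-in, not the real SDK: the actual import is `from cerver import Sandbox`, and the `terminate` method name is an assumption based on the copy above.

```python
class Sandbox:
    """Local stand-in for cerver.Sandbox so the flow runs anywhere."""

    def __init__(self):
        self.alive = True  # create: the real service boots a microVM in ~150ms

    def run(self, code):
        # execute: the real SDK ships `code` into the microVM and returns its output
        return type("Result", (), {"output": "", "exit_code": 0})()

    def terminate(self):
        self.alive = False  # terminate: tear the microVM down


sandbox = Sandbox()                      # 1. create
result = sandbox.run("print('hello')")   # 2. execute
sandbox.terminate()                      # 3. terminate
```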
pip install cerver
One API call spins up an isolated Linux environment.
Run any code your AI agent generates. Safely.
Stdout, stderr, exit codes. Sandbox auto-terminates.
Run thousands of sandboxes in parallel. We handle the infra.
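The fan-out pattern behind the steps above can be sketched with a thread pool. `run_in_sandbox` is a hypothetical stub standing in for `Sandbox().run(code)` so the sketch runs anywhere; each real call would hit its own isolated microVM, so there is no shared state to coordinate.

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_sandbox(code):
    # Stub for the real SDK call: Sandbox().run(code).output
    return f"ok: {code}"

# One generated snippet per sandbox
snippets = [f"print({i} * {i})" for i in range(100)]

# Fan the snippets out; the service, not your code, scales the sandboxes
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_in_sandbox, snippets))
```

`pool.map` preserves input order, so `results[i]` always corresponds to `snippets[i]` even though the sandboxes finish in any order.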
We're onboarding teams building AI products.
Thanks! We'll be in touch soon.