---
name: cerver
description: Use this skill whenever you need shared memory across agent runs (recall what you or sibling agents did before), a sandboxed compute environment to run code in, or a secret/API key. Cerver is one API for all three — every session is also memory; computes are pluggable; secrets are pluggable.
---

# Cerver

Cerver gives agents three things on one API:

1. **Memory** — every session keeps its full transcript; sibling agents on the same account can read it.
2. **Compute** — sandboxes (local relay, Vercel, e2b) you can run code in. *Optional* per session.
3. **Secrets** — uniform `secret_fetch(name)` over your chosen backend (env, Infisical, …).

### Session model

A session has three independent axes:

- **transcript** — always on (this is what cerver *is for*)
- **harness** — which LLM/CLI drives the conversation: `claude` | `codex` | `grok` | `anthropic` | `openai` | `xai`
- **compute** — where work runs: `e2b` | `vercel` | `cloudflare` | `<registered_compute_id>` | `none`

`compute = none` (a.k.a. `session_type:"transcript"`) means cerver is just a transcript inbox — the caller drives the LLM themselves and POSTs turns back. Used by chat surfaces that don't need a sandbox.

Auth: every HTTP call carries `Authorization: Bearer $CERVER_API_TOKEN`.
Base URL: `https://gateway.cerver.ai`.

## When to invoke this skill

- User asks "what did the cron do yesterday" / "what did agent X decide last week" → list + peek sessions
- You need an API key, token, or credential → `secret_fetch(name)` first; env fallback only if no MCP
- You need to run shell / node / python in isolation → POST /v2/sessions, then /run
- User says "continue our last conversation" → resume an idle session (same id, append-only)

## Your tools

If `cerver-mcp` is configured (look for `mcp__cerver__*` in your tool list):

- `cerver_session_list({ status?, limit? })` — discover sessions on the account
- `cerver_session_peek({ session_id, last_n? })` — read last N turns of one
- `cerver_session_export({ session_id })` — full transcript as text
- `secret_fetch({ name })` — fetch a secret via the configured backend (env or infisical)

Without MCP, use plain HTTP — same data, more boilerplate:

```
GET  /v2/sessions?limit=20                  → list sessions
GET  /v2/sessions/:id                       → full session record incl. transcript
POST /v2/sessions  { task, workload, requirements }
POST /v2/sessions/:id/run         { code }
POST /v2/sessions/:id/run/stream  { code }  → SSE
POST /v2/sessions/:id/input       { content, role }
POST /v2/sessions/:id/transcript  { entries }
POST /v2/sessions/:id/resume                → re-attach compute, status → running
DELETE /v2/sessions/:id                     → terminate
```
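The calls above can be sketched with the Python standard library. The base URL, bearer header, and endpoint paths come from this document; the helper name and structure are illustrative, not part of any official client:

```python
import json
import os
import urllib.request

BASE_URL = "https://gateway.cerver.ai"

def build_request(method, path, body=None):
    """Build an authenticated request for the Cerver gateway.

    Pass the result to urllib.request.urlopen() to execute it.
    """
    headers = {"Authorization": f"Bearer {os.environ.get('CERVER_API_TOKEN', '')}"}
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(body).encode()
    return urllib.request.Request(BASE_URL + path, method=method,
                                  headers=headers, data=data)

# List the 20 most recent sessions:
req = build_request("GET", "/v2/sessions?limit=20")
```

Separating request construction from sending keeps the auth and JSON plumbing in one place, whichever endpoint you hit.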

## Status enum

Three values: `running` | `idle` | `ended`.

- `running` — agent process is up
- `idle` — no agent process, but session is resumable
- `ended` — explicitly closed; will not resume

If a session ended due to failure, check `endReason` for the cause.

## Common patterns

### Recall what happened

```
1. cerver_session_list({ limit: 20 })
   → scan names + statuses
2. Pick relevant by name + recency
3. cerver_session_peek({ session_id, last_n: 10 }) for cheap context,
   or cerver_session_export({ session_id }) for full history
4. Synthesize before answering the user
```
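Step 2 ("pick relevant by name + recency") might look like this in plain Python. The record fields (`session_name`, `status`, `updated_at`) are assumptions about the session list shape; adjust them to what the list call actually returns:

```python
from datetime import datetime

def pick_relevant(sessions, name_hint):
    """Filter sessions by a name substring, newest first.

    Assumes each record has `session_name`, `status`, and an
    ISO-8601 `updated_at` field (hypothetical shape).
    """
    matches = [s for s in sessions
               if name_hint.lower() in s["session_name"].lower()
               and s["status"] != "ended"]
    return sorted(matches,
                  key=lambda s: datetime.fromisoformat(s["updated_at"]),
                  reverse=True)

sessions = [
    {"session_name": "nightly-cron", "status": "idle",
     "updated_at": "2024-05-01T03:00:00+00:00"},
    {"session_name": "nightly-cron", "status": "ended",
     "updated_at": "2024-04-30T03:00:00+00:00"},
    {"session_name": "company-chat", "status": "running",
     "updated_at": "2024-05-02T09:00:00+00:00"},
]
best = pick_relevant(sessions, "cron")[0]  # the idle nightly-cron run
```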

### Fetch a secret

```
1. secret_fetch({ name: "BUFFER_API_KEY" }) → { name, value, source }
2. Use value in your API call
3. NEVER log or echo the value
```
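Step 3 ("NEVER log or echo the value") is easiest to honor with a redaction habit. A minimal sketch, using a hypothetical secret dict that matches the `{ name, value, source }` shape above:

```python
def redact(text, secret_value):
    """Replace any occurrence of a secret value before text is logged."""
    if not secret_value:
        return text
    return text.replace(secret_value, "[REDACTED]")

# Hypothetical result of secret_fetch({ name: "BUFFER_API_KEY" }):
secret = {"name": "BUFFER_API_KEY", "value": "sk-live-123", "source": "env"}
line = f"calling Buffer with key {secret['value']}"
safe = redact(line, secret["value"])  # "calling Buffer with key [REDACTED]"
```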

### Run code in a sandbox

```
1. POST /v2/sessions
   {
     task: "Boot a preview and run the smoke test",
     workload: "preview",
     requirements: { runtime: "node", package_install: true, timeout_minutes: 15 }
   }
2. POST /v2/sessions/:id/run/stream  { code: "npm test" }   ← SSE
3. DELETE /v2/sessions/:id           ← when done
```
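The `/run/stream` response is SSE. The `data:` line framing is standard SSE, but the event payload fields shown here (`chunk`, `exit_code`) are assumptions about what the stream carries:

```python
import json

def parse_sse(raw):
    """Yield decoded JSON payloads from an SSE response body.

    Handles only `data:` lines; blank event separators and other
    field types are skipped, which suffices for a simple output stream.
    """
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:
                yield json.loads(payload)

raw = ('data: {"stream": "stdout", "chunk": "1 test passed"}\n'
       '\n'
       'data: {"exit_code": 0}\n')
events = list(parse_sse(raw))
```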

### Hold a transcript without compute

For chat surfaces where work runs *outside* cerver — caller drives the LLM
themselves and just wants cerver to remember the conversation.

```
1. POST /v2/sessions
   {
     session_type: "transcript",
     session_name: "company-chat",
     harness: "claude"
   }
2. (caller hits Anthropic / OpenAI / xAI directly, gets response)
3. POST /v2/sessions/:id/transcript
   {
     entries: [
       { at, role: "user",      kind: "text", content: "hello" },
       { at, role: "assistant", kind: "text", content: "hi back" }
     ]
   }
```

Returned session has `provider:"none"`, `compute_id:null`, `sandbox_id:null`,
plus the chosen `harness`. No /run, no /resume — there's no agent process.
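Building the `entries` payload for step 3 can be sketched as follows. Treating `at` as an ISO-8601 UTC timestamp is an assumption, not a confirmed format:

```python
import json
from datetime import datetime, timezone

def transcript_entry(role, content):
    """Build one transcript entry; ISO-8601 UTC for `at` is assumed."""
    return {"at": datetime.now(timezone.utc).isoformat(),
            "role": role, "kind": "text", "content": content}

body = {"entries": [transcript_entry("user", "hello"),
                    transcript_entry("assistant", "hi back")]}
payload = json.dumps(body)  # POST this to /v2/sessions/:id/transcript
```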

### Continue a paused conversation

```
1. cerver_session_list({ status: "idle" })  → find the right one
2. POST /v2/sessions/:id/resume     → same id, status → running, transcript intact
3. POST /v2/sessions/:id/input      { content: "follow-up question", role: "user" }
```
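The resume sequence can be expressed as a small plan of HTTP calls. The `id` field on a session record is an assumption about the list shape:

```python
def resume_plan(idle_sessions, follow_up):
    """Return (method, path, body) tuples to continue the most
    recent idle session; assumes the list is newest-first and each
    record carries an `id` field."""
    sid = idle_sessions[0]["id"]
    return [
        ("POST", f"/v2/sessions/{sid}/resume", None),
        ("POST", f"/v2/sessions/{sid}/input",
         {"content": follow_up, "role": "user"}),
    ]

calls = resume_plan([{"id": "sess_42", "status": "idle"}],
                    "follow-up question")
```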

## Don't

- Don't bake secrets into your output, commits, or chat. Always fetch fresh.
- Don't create a new session when you could resume an idle one — sessions ARE memory.
- Don't poll for output; use `/run/stream` (SSE).
- Don't walk `previousSessionId` chains — sessions are append-only since the resume rewrite. One session id = one full conversation.

## Helpful URLs

- Full API reference: <https://cerver.ai/dashboard#api>
- Live session dashboard (humans): <https://cerver.ai/dashboard#sessions>
- Cross-agent memory docs: <https://cerver.ai/llms.txt>
