Common issues and how to fix them. If you don't find your problem here, run steelspine doctor first — it auto-detects most issues.
setup.sh says "does not look like a complete SteelSpine installation"

Your shell has a stale PRIME_HOME environment variable pointing somewhere else. Clear it and re-run:
unset PRIME_HOME
bash ~/.prime/setup.sh
steelspine: command not found

Setup added the PATH entry to ~/.bashrc, but your current shell hasn't reloaded it. Either:
source ~/.bashrc
…or open a new terminal.
"Permission denied" when running a steelspine command

The bin/ scripts lost their executable bit. Fix:
chmod +x ~/.prime/bin/*
…or just re-run bash ~/.prime/setup.sh.
Setup modified my ~/.bashrc and I don't want it to

We add two blocks: one for PRIME_HOME + PATH, and one for the optional memory-agent auto-start. Both are bracketed with # >>> SteelSpine / # <<< SteelSpine comments — remove those blocks to revert.
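The bracketed markers make removal scriptable. A minimal sketch, assuming the markers appear exactly as # >>> SteelSpine and # <<< SteelSpine on their own lines:

```shell
# Preview the lines that would be removed:
sed -n '/# >>> SteelSpine/,/# <<< SteelSpine/p' ~/.bashrc

# Delete both blocks, keeping a .bak copy of the original file:
sed -i.bak '/# >>> SteelSpine/,/# <<< SteelSpine/d' ~/.bashrc
```

The range address deletes every marker pair it finds, so both blocks go in one pass.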
steelspine run my_command shows "command not found" but creates a run anyway

The wrapped command failed to launch (typo, missing binary). The run is still recorded so you can see the failure. The verdict logic doesn't fire on launch failures — check the run's log file for the actual error:
cat ~/.prime/runs/run_NNNN/output.log
No output appears until the run finishes

Python and some tools buffer stdout when it isn't a TTY. Force unbuffered output:
steelspine run python3 -u my_agent.py
# or
steelspine run stdbuf -o0 my_command
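If you'd rather not pass -u on every invocation, Python also honors the PYTHONUNBUFFERED environment variable. This is standard Python behavior, not SteelSpine-specific:

```shell
# Equivalent to passing -u: every python3 in this shell runs unbuffered
export PYTHONUNBUFFERED=1
python3 -c 'import sys; print(sys.stdout.write_through)'   # True when unbuffered
```

With the variable exported, run your command as usual and the output streams live.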
compare / diagnose / replay-run can't find my run

List what's actually captured:
steelspine run list
Run IDs are run_0001, run_0002, … — not arbitrary names.
steelspine run in multiple terminals — do they collide?

No. Each invocation gets its own unique run_id and its own ~/.prime/runs/run_NNNN/ directory. The shared event log handles concurrent writers safely. To group related runs across terminals, use the --session NAME flag or set STEELSPINE_SESSION in each shell:
# Terminal 1
export STEELSPINE_SESSION=experiment-3
steelspine run python3 agent_v1.py
# Terminal 2
export STEELSPINE_SESSION=experiment-3
steelspine run python3 agent_v2.py
# Filter to that session:
steelspine run list --session experiment-3
SteelSpine keeps warning me about storage

That's intentional — SteelSpine checks runs/ usage against your storage_budget_mb setting after each capture. It prints an amber notice at 80% of budget and a red one at 90%+. To address it:
steelspine storage auto # non-interactive: promote entities + prune oldest
steelspine storage # interactive wizard (5 options)
steelspine storage status # see breakdown without making changes
To raise the budget instead of pruning, edit ~/.prime/config.json and increase storage_budget_mb (default 500). The wizard's option 4 does this for you.
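If you'd rather script the budget change than open an editor, a minimal sketch using Python's stdlib json module (the storage_budget_mb key is from the config above; 1000 is just an example value):

```shell
CFG="$HOME/.prime/config.json"
python3 - "$CFG" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

cfg["storage_budget_mb"] = 1000   # example: raise the budget to 1 GB
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
```

Round-tripping through json.load/json.dump avoids the brittle quoting of a sed-based edit.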
Verification reports TAMPERED and you didn't touch anything

Some other process modified files in ~/.prime/runs/ or ~/.prime/sidecar/. Check:
ls -la ~/.prime/sidecar/log/
If the timestamps look wrong, restore from a backup or accept that run's audit is broken. Future runs sign cleanly.
Verification reports CLEAN (self-signed — no org key) — what does that mean?

You're using the auto-generated personal signing key. For multi-user / org deployments where you need a key non-owners can't forge, set:
export STEELSPINE_ORG_KEY=/path/to/org/signing.key
…or see the Compliance Guide → "Verification by an external auditor".
How do I back up my signing key?

Copy it somewhere safe and restrict its permissions:
cp ~/.prime/.keys/signing.key ~/steelspine-signing.key.bak
chmod 600 ~/steelspine-signing.key.bak
If you lose the key, new runs sign with a new key (everything works) but you can't verify historical runs against the old key.
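To confirm a backup is intact, and to restore from it later, a short sketch using the same paths as the backup commands above:

```shell
# Verify the backup byte-for-byte matches the live key:
cmp ~/.prime/.keys/signing.key ~/steelspine-signing.key.bak && echo "backup OK"

# Restore from backup:
cp ~/steelspine-signing.key.bak ~/.prime/.keys/signing.key
chmod 600 ~/.prime/.keys/signing.key
```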
steelspine ui says port in use / opens nothing

The UI auto-picks a free port. If auto-pick fails, force a specific port:
steelspine ui --port 8910
To see what port it picked last time:
cat ~/.prime/state/prime_ui_port.txt 2>/dev/null
The UI server is on localhost. If you're running SteelSpine inside WSL or a container, the host browser may not see the right interface. Try:
steelspine ui --port 8910 --no-browser
# then in your host browser: http://localhost:8910
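To rule out the browser side entirely, check from inside WSL or the container whether the server is listening at all (port 8910 follows the example above):

```shell
# Exit code 0 means the UI server answered; anything else means it isn't up:
curl -sf http://localhost:8910/ >/dev/null && echo "UI reachable on 8910"
```

If this succeeds but the host browser still can't connect, the problem is the network path between host and guest, not SteelSpine.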
steelspine memory returns nothing

The memory agent isn't running. Start it:
steelspine memory-agent
The memory agent talks to a local LLM via Ollama by default (port 11434). Either install Ollama and pull a model:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:32b-q4_K_M
…or point at a different LLM endpoint in ~/.prime/config.json:
"upstream_llm_url": "http://your-llm-host:port"
steelspine status says an adapter is "stopped" after I started it

Check the adapter's run dir for stale lock files:
ls ~/.prime/adapters/<name>/run/
Remove stale .lock or .pid files older than a few minutes, then restart:
bash ~/.prime/adapters/<name>/bin/<name>_stop.sh
bash ~/.prime/adapters/<name>/bin/<name>_start.sh
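Clearing stale lock files can be scripted with find. A sketch, assuming a hypothetical adapter named "langchain" (substitute your adapter's name) and the same few-minute threshold:

```shell
ADAPTER=langchain   # hypothetical adapter name, use your own
# Delete .lock and .pid files untouched for more than 5 minutes:
find ~/.prime/adapters/"$ADAPTER"/run/ \( -name '*.lock' -o -name '*.pid' \) -mmin +5 -delete
```

The -mmin +5 test keeps find from deleting a lock a live process created moments ago.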
SteelSpine is using too much disk space

Check current usage:
steelspine storage
Prune oldest runs (respects retention policy in config.json):
steelspine storage prune
Or move archive to a bigger volume — set archive_dir in ~/.prime/config.json to an external path.
steelspine doctor shows red ✗

Most issues can be auto-fixed:
steelspine doctor --fix
If events_file is missing on a fresh install, that's expected — it'll be created on your first steelspine run.
If sdk_importable fails, your bundle was modified or your Python is older than 3.8. Reinstall, or check python3 --version.
pip install steelspine_langchain-*.whl succeeds, but import steelspine_langchain fails

The wheel ships an obfuscated package alongside the pyarmor runtime. If you removed pyarmor_runtime_011698 from your venv, reinstall:
pip uninstall steelspine_langchain
pip install ~/.prime/packages/steelspine-langchain/dist/*.whl
inspect.getsource(SteelSpineCallbackHandler) raises OSError: could not find class definition

This is intentional — the package is obfuscated. The class works at runtime; only source extraction is blocked.
None of this helped

Run:
steelspine doctor --json > diagnostics.json
…and email hello@steelspine.ai with that file (and the relevant ~/.prime/runs/run_NNNN/output.log) attached. I read every reply personally — Jeremy.