Multi-agent AI systems make thousands of decisions. Akashi makes them visible, auditable, and coordinated — so agents build on each other's work instead of contradicting it.
A planner recommends microservices. A coder builds a monolith. Neither knows the other already decided. The conflict surfaces as a production bug, not a design discussion.
Agents relitigate settled questions on every session. There's no shared memory of what's already been decided, why, and what alternatives were considered and rejected.
When something goes wrong, nobody can answer: who decided what, when, why, and what alternatives were considered? Compliance asks. You have nothing.
Before making a decision, an agent queries the audit trail. Akashi returns the most relevant past decisions — re-ranked by assessment outcomes, citation count, and recency — plus any active conflicts involving those precedents.
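The re-ranking described above can be sketched as a weighted score. The weights, field names, and decay constants below are illustrative assumptions for the sketch, not Akashi's actual formula:

```python
import math
import time

def rerank_score(decision, now=None, w_outcome=0.5, w_cites=0.3, w_recency=0.2):
    """Illustrative relevance boost from assessment outcome, citation
    count, and recency. Weights and field names are hypothetical."""
    now = now or time.time()
    outcome = {"correct": 1.0, "partially_correct": 0.5, "incorrect": 0.0}.get(
        decision.get("assessment"), 0.5)          # unassessed sits in the middle
    citations = 1 - math.exp(-decision.get("citation_count", 0) / 5)  # saturating
    age_days = (now - decision["decided_at"]) / 86400
    recency = math.exp(-age_days / 90)            # smooth decay, ~90-day scale
    return w_outcome * outcome + w_cites * citations + w_recency * recency

old_wrong = {"assessment": "incorrect", "citation_count": 2,
             "decided_at": time.time() - 300 * 86400}
fresh_right = {"assessment": "correct", "citation_count": 8,
               "decided_at": time.time() - 3 * 86400}
assert rerank_score(fresh_right) > rerank_score(old_wrong)
```

A decision assessed as correct, cited often, and made recently outranks a stale, incorrect one even when both match the query equally well.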
After deciding, an agent records its full reasoning. Five things happen: embeddings are computed, a completeness score is assigned, and everything commits to the database in one atomic transaction; conflict detection then runs asynchronously, and subscribers are notified.
Conflicts are detected semantically. A planner recommending microservices and a coder recommending a monolith for the same system will surface as a conflict — regardless of what decision_type either agent used.
Decision pairs whose semantic similarity exceeds the significance threshold are validated by an LLM, which classifies each relationship as contradiction, supersession, complementary, refinement, or unrelated. Only genuine conflicts are stored.
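The two-stage detection (similarity gate, then LLM validation) can be sketched as follows. The threshold value, the set of labels treated as genuine conflicts, and the `classify_pair` hook are all assumptions for this sketch:

```python
# Illustrative two-stage filter: cosine-similarity gate, then LLM label gate.
CONFLICT_LABELS = {"contradiction", "supersession"}   # assumed "genuine" set

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def detect_conflicts(pairs, classify_pair, threshold=0.75):
    """pairs: (decision_a, decision_b) dicts with an 'embedding' field.
    classify_pair: LLM call returning contradiction, supersession,
    complementary, refinement, or unrelated."""
    stored = []
    for a, b in pairs:
        if cosine(a["embedding"], b["embedding"]) < threshold:
            continue                      # not semantically close enough
        label = classify_pair(a, b)       # LLM validation step
        if label in CONFLICT_LABELS:
            stored.append((a, b, label))
    return stored

# Stand-in classifier for demonstration only
fake = lambda a, b: "contradiction" if a["outcome"] != b["outcome"] else "unrelated"
planner = {"embedding": [1.0, 0.1], "outcome": "microservices"}
coder = {"embedding": [0.9, 0.2], "outcome": "monolith"}
assert detect_conflicts([(planner, coder)], fake) == [(planner, coder, "contradiction")]
```

Note that nothing in the gate depends on `decision_type`: the planner and coder decisions collide purely because their embeddings land close together.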
```shell
# Everything in Docker: TimescaleDB, Qdrant, Ollama, Akashi server
docker compose -f docker-compose.complete.yml up -d

# First run downloads two Ollama models (~7GB). Watch progress:
docker compose -f docker-compose.complete.yml logs -f ollama-init

# Server ready when you see a "listening" log line
curl http://localhost:8080/health

# Open http://localhost:8080 for the audit dashboard
```
Full feature set out of the box: LLM conflict validation, vector search, real-time SSE, multi-tenancy, audit dashboard. No API keys or external accounts needed.
```shell
# Bring your own cloud: TimescaleDB + Qdrant (Postgres-only also works)
cp docker/env.example .env

# Edit .env — minimum required:
DATABASE_URL=postgres://user:pass@host:5432/akashi
AKASHI_ADMIN_API_KEY=your-api-key

docker compose up -d

# Generate persistent JWT signing keys (run once)
go run ./scripts/genkey -out data/

# Add to .env:
AKASHI_JWT_PRIVATE_KEY=/data/jwt_private.pem
AKASHI_JWT_PUBLIC_KEY=/data/jwt_public.pem
```
Run Akashi on any cloud that can serve a Docker container and a Postgres database. Qdrant and Ollama are optional — the server falls back to text search without them. See the self-hosting guide.
```shell
# Claude Code — never expires, survives server restarts
claude mcp add --transport http --scope user akashi http://localhost:8080/mcp \
  --header "Authorization: ApiKey admin:$AKASHI_ADMIN_API_KEY"
```
Semantic precedent lookup before deciding. Returns relevant past decisions, active conflicts, and winning approaches from resolved conflicts.
Record a decision with reasoning, alternatives, evidence, and confidence. Triggers embedding, conflict detection, and SSE notification.
Mark a past decision as correct, incorrect, or partially correct. Assessments feed back into search re-ranking.
Filter decisions by type, agent, confidence range, or free-text semantic search. Structured or unstructured.
List grouped conflicts between agents. Filter by severity, category, and status. Returns representative examples per conflict group.
Aggregate health metrics: decision volume, conflict rate, mean confidence, and assessment outcomes across the org.
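As a rough sketch, the health metrics in the last operation could be aggregated like this; the field names and formulas are assumptions for illustration, not the endpoint's actual computation:

```python
def org_health(decisions):
    """Illustrative aggregation: decision volume, conflict rate,
    mean confidence, and assessment outcome counts."""
    n = len(decisions)
    conflicted = sum(1 for d in decisions if d.get("in_conflict"))
    assessed = [d["assessment"] for d in decisions if "assessment" in d]
    return {
        "decision_volume": n,
        "conflict_rate": conflicted / n if n else 0.0,
        "mean_confidence": sum(d["confidence"] for d in decisions) / n if n else 0.0,
        "assessment_outcomes": {k: assessed.count(k) for k in set(assessed)},
    }

sample = [
    {"confidence": 0.9, "assessment": "correct"},
    {"confidence": 0.6, "in_conflict": True, "assessment": "incorrect"},
]
m = org_health(sample)
assert m["decision_volume"] == 2
assert m["conflict_rate"] == 0.5
```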
Every decision trace records the full context at the moment of decision — not a summary reconstructed later.
What was chosen, expressed as a concrete outcome, with a 0–1 confidence score from the agent.
Step-by-step logic explaining why this outcome was chosen over alternatives.
Every option considered — with scores, rationale, and why each was not selected.
What information backed the decision: URLs, analysis, observations — with source type and provenance.
Semantically detected disagreements between agents on the same question, with severity and lifecycle state.
SHA-256 content hash and Merkle tree batch verification. Tamper detection without external infrastructure.
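The tamper-detection idea can be illustrated with a minimal Merkle root over decision content hashes. This is a sketch of the technique only; Akashi's exact tree layout, serialization, and odd-node handling may differ:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    """Pairwise-hash leaves up to a single root; an odd node is
    promoted unchanged to the next level."""
    level = list(leaf_hashes)
    if not level:
        raise ValueError("empty batch")
    while len(level) > 1:
        nxt = [sha256(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:                # odd node carries up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

decisions = [b'{"outcome":"microservices"}', b'{"outcome":"monolith"}',
             b'{"outcome":"grpc"}']
root = merkle_root([sha256(d) for d in decisions])

# Editing any decision after the fact changes the batch root
tampered = decisions[:]
tampered[1] = b'{"outcome":"monolith (edited)"}'
assert merkle_root([sha256(d) for d in tampered]) != root
```

Verifying a stored batch then only requires recomputing the root and comparing 32 bytes, with no external infrastructure.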
Both business time (valid_from/valid_to) and transaction time — point-in-time queries and full history.
Session ID, tool, model, and repo context — automatically enriched from MCP session and HTTP headers.
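Taken together, a decision trace along these lines might look like the following sketch. Every field name and value here is an assumption based on the descriptions above, not Akashi's actual schema:

```python
# Hypothetical decision trace shape, for illustration only.
trace = {
    "outcome": "Adopt microservices for the ordering system",
    "confidence": 0.82,                     # agent-reported, 0-1
    "reasoning": "Checkout and catalog scale independently; ...",
    "alternatives": [
        {"option": "monolith", "score": 0.55,
         "rejected_because": "scaling hotspots couple unrelated teams"},
    ],
    "evidence": [
        {"source_type": "url", "ref": "https://example.com/load-report",
         "note": "checkout traffic grows 4x on promo days"},
    ],
    "valid_from": "2025-06-01T00:00:00Z",    # business time
    "recorded_at": "2025-06-01T09:14:22Z",   # transaction time
    "context": {"session_id": "abc123", "tool": "claude-code",
                "model": "example-model", "repo": "org/shop"},
}
assert 0.0 <= trace["confidence"] <= 1.0
assert all("rejected_because" in alt for alt in trace["alternatives"])
```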
Idiomatic Go client with token management, all six operations, and testable interfaces. Used in the Akashi server itself.
Sync and async clients. Includes AkashiCallbackHandler for LangChain and a CrewAI hooks adapter.
Full TypeScript types. Includes createAkashiMiddleware for the Vercel AI SDK.