Open source  ·  Apache 2.0  ·  Go + PostgreSQL

Version control
for AI decisions.

Multi-agent AI systems make thousands of decisions. Akashi makes them visible, auditable, and coordinated — so agents build on each other's work instead of contradicting it.

agent · decision workflow
# Before deciding, check for precedents
akashi_check(query="database for session state")

  has_precedent: true
  decisions:
    ✓ "use Redis for session state" (conf: 0.88)
      agent: planner · 3 days ago
      reasoning: "sub-ms reads, TTL native..."

# Align with precedent and record
akashi_trace(
  outcome="use Redis, consistent with planner",
  confidence=0.9, reasoning="..."
)

  ✓ recorded · no conflicts detected
01
The problem

Multi-agent AI has a
coordination problem.

01

Agents contradict each other

A planner recommends microservices. A coder builds a monolith. Neither knows the other already decided. The conflict surfaces as a production bug, not a design discussion.

02

Decisions evaporate

Agents relitigate settled questions on every session. There's no shared memory of what's already been decided, why, and what alternatives were considered and rejected.

03

No audit trail

When something goes wrong, nobody can answer: who decided what, when, why, and what alternatives were considered? Compliance asks. You have nothing.

02
How it works

Two primitives.
Every agent. Every decision.

akashi_check

Before making a decision, an agent queries the audit trail. Akashi returns the most relevant past decisions — re-ranked by assessment outcomes, citation count, and recency — plus any active conflicts involving those precedents.

  • Relevant precedents with full reasoning
  • Active conflicts in the decision area
  • Resolved conflicts and their winning approach
  • Precedent reference to cite in the trace
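The consume-a-precedent pattern can be sketched in Python. The result fields (`has_precedent`, `decisions`, `conflicts`) mirror the demo output above; the helper function, field names like `id`, and the confidence threshold are illustrative assumptions, not part of any Akashi SDK:

```python
# Illustrative only: the result shape is assumed from the demo output,
# not a pinned SDK contract.

def choose_course(check_result: dict, min_confidence: float = 0.7):
    """Decide whether to align with a precedent, escalate, or decide fresh."""
    if not check_result.get("has_precedent"):
        return ("decide_fresh", None)

    # Skip precedents that are currently under active dispute
    contested = {c["decision_id"] for c in check_result.get("conflicts", [])}
    candidates = [
        d for d in check_result.get("decisions", [])
        if d["confidence"] >= min_confidence and d["id"] not in contested
    ]
    if not candidates:
        return ("escalate", None)

    best = max(candidates, key=lambda d: d["confidence"])
    return ("align", best)

result = {
    "has_precedent": True,
    "decisions": [
        {"id": "dec-1", "outcome": "use Redis for session state",
         "confidence": 0.88, "agent": "planner"},
    ],
    "conflicts": [],
}
action, precedent = choose_course(result)
# action == "align"; the agent would cite dec-1 in its follow-up akashi_trace
```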
akashi_trace

After deciding, an agent records its full reasoning. Five things happen: embeddings are computed, a completeness score is assigned, everything commits to the database in a single transaction, conflict detection runs asynchronously, and subscribers are notified.

  • Decision, confidence, and full reasoning
  • Rejected alternatives with scores
  • Supporting evidence with provenance
  • Integrity proof (SHA-256 + Merkle batch)
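As a rough illustration, a trace payload covering those field groups might look like this in Python. The key names and value shapes are assumptions drawn from the list above, not the tool's exact parameter schema:

```python
# Hypothetical akashi_trace payload; real parameter names may differ.
trace = {
    "outcome": "use Redis for session state",
    "confidence": 0.9,                       # 0-1 score from the agent
    "reasoning": "sub-ms reads; native TTL fits session expiry",
    "alternatives": [                        # rejected options with scores
        {"option": "Postgres table", "score": 0.55,
         "rejected_because": "no native TTL; extra vacuum load"},
    ],
    "evidence": [                            # provenance-tagged support
        {"source_type": "benchmark", "ref": "internal load test",
         "note": "p99 read latency 0.4 ms"},
    ],
}
assert 0.0 <= trace["confidence"] <= 1.0
```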
Agent decides → akashi_trace → Embeddings → Conflict scoring → LLM validation → SSE notify
03
Conflict detection

Semantic, not syntactic.

Conflicts are detected semantically. A planner recommending microservices and a coder recommending a monolith for the same system will surface as a conflict — regardless of what decision_type either agent used.

significance = topic_similarity × outcome_divergence × confidence_weight × temporal_decay
topic_similarity — semantic distance between decision embeddings
outcome_divergence — stance divergence on the conclusion
confidence_weight — low-confidence decisions contribute less
temporal_decay — older decisions decay in significance
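The formula can be sketched directly in Python. The document names the four factors but not their exact functional forms, so the geometric-mean confidence weight and the exponential half-life decay below are illustrative choices:

```python
import math

def significance(topic_similarity, outcome_divergence,
                 confidence_a, confidence_b,
                 age_days, half_life_days=30.0):
    """Score a decision pair using the four factors defined above.

    confidence_weight and temporal_decay use assumed functional forms
    (geometric-mean confidence, exponential half-life decay).
    """
    confidence_weight = math.sqrt(confidence_a * confidence_b)
    temporal_decay = 0.5 ** (age_days / half_life_days)
    return (topic_similarity * outcome_divergence
            * confidence_weight * temporal_decay)

# A fresh, high-confidence, directly opposed pair scores high...
hot = significance(0.9, 0.95, 0.9, 0.85, age_days=1)
# ...while a stale, hedged pair decays toward irrelevance.
cold = significance(0.9, 0.95, 0.4, 0.3, age_days=120)
assert hot > cold
```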

Pairs above the significance threshold are validated by an LLM, which classifies the relationship as contradiction, supersession, complementary, refinement, or unrelated. Only genuine conflicts are stored.

Conflict lifecycle: open → acknowledged → resolved or wont_fix
04
Quick start

Get running in minutes.

# Everything in Docker: TimescaleDB, Qdrant, Ollama, Akashi server
docker compose -f docker-compose.complete.yml up -d

# First run downloads two Ollama models (~7GB). Watch progress:
docker compose -f docker-compose.complete.yml logs -f ollama-init

# Server ready when you see a "listening" log line
curl http://localhost:8080/health
# Open http://localhost:8080 for the audit dashboard

Full feature set out of the box: LLM conflict validation, vector search, real-time SSE, multi-tenancy, audit dashboard. No API keys or external accounts needed.

# Bring your own cloud: TimescaleDB + Qdrant (Postgres-only also works)
cp docker/env.example .env
# Edit .env — minimum required:
DATABASE_URL=postgres://user:pass@host:5432/akashi
AKASHI_ADMIN_API_KEY=your-api-key

docker compose up -d

# Generate persistent JWT signing keys (run once)
go run ./scripts/genkey -out data/
# Add to .env:
AKASHI_JWT_PRIVATE_KEY=/data/jwt_private.pem
AKASHI_JWT_PUBLIC_KEY=/data/jwt_public.pem

Run Akashi on any cloud that can serve a Docker container and a Postgres database. Qdrant and Ollama are optional — the server falls back to text search without them. See the self-hosting guide.

05
MCP integration

One line to wire any
MCP-compatible agent.

# Claude Code — never expires, survives server restarts
claude mcp add --transport http --scope user akashi http://localhost:8080/mcp \
  --header "Authorization: ApiKey admin:$AKASHI_ADMIN_API_KEY"
akashi_check

Semantic precedent lookup before deciding. Returns relevant past decisions, active conflicts, and winning approaches from resolved conflicts.

akashi_trace

Record a decision with reasoning, alternatives, evidence, and confidence. Triggers embedding, conflict detection, and SSE notification.

akashi_assess

Mark a past decision as correct, incorrect, or partially correct. Assessments feed back into search re-ranking.

akashi_query

Filter decisions by type, agent, confidence range, or free-text semantic search. Structured or unstructured.

akashi_conflicts

List grouped conflicts between agents. Filter by severity, category, and status. Returns representative examples per conflict group.

akashi_stats

Aggregate health metrics: decision volume, conflict rate, mean confidence, and assessment outcomes across the org.
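The assessment feedback loop (akashi_check results re-ranked by assessment outcomes, citation count, and recency) can be illustrated with a toy re-ranking pass in Python. Akashi's actual weights and functions are not documented here, so every constant below is an assumption:

```python
import math

# Illustrative re-ranking for check results; all weights are assumptions.
ASSESSMENT_BOOST = {"correct": 1.0, "partially_correct": 0.5, "incorrect": -1.0}

def rerank_score(base_similarity, assessment, citations, age_days):
    boost = 1.0 + 0.3 * ASSESSMENT_BOOST.get(assessment, 0.0)
    citation_term = 1.0 + 0.1 * math.log1p(citations)   # diminishing returns
    recency = 0.5 ** (age_days / 90.0)                  # 90-day half-life (assumed)
    return base_similarity * boost * citation_term * recency

hits = [
    {"id": "a", "sim": 0.80, "assessment": "incorrect", "cites": 0, "age": 2},
    {"id": "b", "sim": 0.72, "assessment": "correct",   "cites": 5, "age": 10},
]
hits.sort(key=lambda h: rerank_score(h["sim"], h["assessment"],
                                     h["cites"], h["age"]),
          reverse=True)
# The assessed-correct, well-cited decision outranks the raw-similarity leader
```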

06
Audit trail

Everything captured.
Nothing inferred.

Every decision trace records the full context at the moment of decision — not a summary reconstructed later.

DEC

Decision + confidence

What was chosen, expressed as a concrete outcome, with a 0–1 confidence score from the agent.

RSN

Reasoning

Step-by-step logic explaining why this outcome was chosen over alternatives.

ALT

Rejected alternatives

Every option considered — with scores, rationale, and why each was not selected.

EVD

Supporting evidence

What information backed the decision: URLs, analysis, observations — with source type and provenance.

CNF

Conflicts

Semantically detected disagreements between agents on the same question, with severity and lifecycle state.

INT

Integrity proof

SHA-256 content hash and Merkle tree batch verification. Tamper detection without external infrastructure.

TMP

Bi-temporal model

Both business time (valid_from/valid_to) and transaction time — point-in-time queries and full history.

AGT

Agent identity

Session ID, tool, model, and repo context — automatically enriched from MCP session and HTTP headers.
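The integrity proof above can be sketched minimally: a SHA-256 hash per trace, then a Merkle root over a batch of trace hashes. The canonicalization and odd-leaf handling here are assumptions, not Akashi's exact scheme:

```python
import hashlib

def content_hash(trace: str) -> bytes:
    """SHA-256 over the (assumed pre-canonicalized) trace content."""
    return hashlib.sha256(trace.encode()).digest()

def merkle_root(leaves: list) -> bytes:
    """Pairwise Merkle tree; the last leaf is duplicated on odd levels."""
    level = leaves
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

batch = [content_hash(t) for t in ("trace-1", "trace-2", "trace-3")]
root = merkle_root(batch)
# Tampering with any trace changes its leaf hash and therefore the root
assert merkle_root([content_hash("trace-1!"), batch[1], batch[2]]) != root
```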

07
SDKs

Native clients for Go,
Python, and TypeScript.

Go SDK
go get github.com/ashita-ai/akashi/sdk/go/akashi

Idiomatic Go client with token management, all six operations, and testable interfaces. Used in the Akashi server itself.

View source →
Python SDK
pip install "git+https://github.com/ashita-ai/akashi.git#subdirectory=sdk/python"

Sync and async clients. Includes AkashiCallbackHandler for LangChain and a CrewAI hooks adapter.

View source →
TypeScript SDK
npm install github:ashita-ai/akashi#path:sdk/typescript

Full TypeScript types. Includes createAkashiMiddleware for the Vercel AI SDK.

View source →