Changelog · March 1, 2026

Jupiter v0.1 Alpha: First Public Release

The first public alpha of Jupiter is here. Parallel AI coding agents with shared context and a Rust-native MCP ecosystem.

What shipped

After months of development, we’re releasing Jupiter v0.1 Alpha — the first public version of our parallel AI coding agent orchestrator.

Here’s what’s in this release:

Planner engine

The Planner is Jupiter’s brain. Before any worker touches your code, the Planner:

  • Indexes your codebase — symbols, modules, dependencies
  • Decomposes tasks — breaks a high-level instruction into worker-sized subtasks
  • Generates briefings — each worker gets a context package with relevant code, constraints, and owned symbols
  • Detects conflicts — ensures no two workers modify the same code

Parallel workers

Jupiter workers are AI coding agents enhanced with MCP tools. In v0.1:

  • Up to 4 parallel workers per task
  • Each worker gets a typed briefing from the Planner
  • Workers share state via the memory bus
  • Automatic conflict detection and resolution

MCP ecosystem

16 Rust-native MCP servers integrated:

  • Phase 1: Filesystem, code + git + system tools, Cargo toolchain
  • Phase 2: Code intelligence (narsil-mcp), Rust LSP, Rust docs
  • Phase 3: Web fetch, SQLite, SSH
  • Phase 4: Sequential thinking, error tracking, crate search

All servers are single binaries, zero Node.js dependencies, ~3MB RAM each.

What’s next

Here’s our roadmap for the coming releases:

  • v0.2 — Auto-validation: workers run tests and self-correct before reporting done
  • v0.3 — CLI + GUI: unified interface for managing workers and tasks
  • v0.4 — Remote workers: distribute across machines via SSH + SSE transport
  • v1.0 — Production-ready with full documentation

Getting started

Jupiter is open source under AGPL-3.0. Join our Discord to connect with the community and follow development.

Thank you

Jupiter is built by a small team that believes AI coding agents can be dramatically better. If you share that vision, we’d love your feedback, bug reports, and contributions.

Let’s build the future of parallel AI development — together.