an editor for the post-typing era

three agents,
one terminal,
one repo.

gcode is the open editor where claude, gpt-5 and your own local models work the same codebase in parallel — branching worktrees, sharing memory, and streaming voice. you stay in the terminal you already love.

try it free → $ brew install gcode
runs on macOS · Linux · Windows (WSL) · Apache 2.0 · ★ 18.4k
~/code/kanban-app — gcode · 3 agents · live

~/kanban-app ❯ claude "add a 'review' column to the board, drag-and-drop with motion"
┌─ planning · 2.1s
│  claude read BoardStore.ts, BoardColumn.tsx, kanban.module.css
│  claude drafted 3 file edits · 2 new tests · 1 storybook story
└─ ready · apply 3 edits → y/n
~/kanban-app ❯ y
BoardStore.ts        +24 −2
BoardColumn.tsx      +58 −12
kanban.module.css    +18 −0
vitest passed (12 tests, 218ms)
~/kanban-app ❯
01 — orchestration

let three agents row in parallel, then bring the boats home.

every running task spawns its own git worktree, so claude, gpt-5 and cody never clobber each other's edits. when one is ready, it opens a PR back to your branch — and your kanban shows the diff live.
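the orchestration above maps onto plain git worktrees. a minimal sketch of the per-task model, using standard git commands — the repo, branch, and task names here are illustrative, not gcode output:

```shell
# sketch of the worktree-per-task model; paths and branch names are illustrative
git init -q demo && cd demo
git config user.email "agent@example.com" && git config user.name "agent"
git commit -q --allow-empty -m "init"
# each running task gets its own checkout on its own branch
git worktree add -q ../demo-wt-471 -b agent/wt-471
# the agent commits there without ever touching your working tree
git -C ../demo-wt-471 commit -q --allow-empty -m "GC-471: draft edits"
git worktree list   # shows the main checkout plus one per task
```

because each task lives in its own directory and branch, merging back is an ordinary PR from `agent/wt-471` into your branch.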

  • claude · sonnet 4.5
    #GC-471 · replacing AsciiMotionCli component · wt-471 · 14m
  • gpt-5 · standard
    #GC-465 · DPDP region routing in middleware · wt-465 · 1h 12m
  • cody · cursor-agent
    #GC-468 · re-style Terminal panel · wt-468 · 3m
  • local · llama-3.3 70b
    #GC-485 · stub failing e2e tests · queued

auto-commit on · auto-pr off · 3 / 5 slots used
  • isolated worktrees, shared memory
    each agent writes into its own checkout — the kanban surface, the editor and the bridge all read from the same memory file so context never gets stale.
  • model-agnostic by design
    claude, gpt-5, gemini, cursor-agent, local llama. swap mid-task — gcode keeps the conversation, file edits and approval queue intact.
  • approvals on by default
    no `--yolo`. every rm, install or destructive write surfaces in the queue with a one-key approve.
02 — editor

cursor-fast.
vim-quiet. terminal-native.

tabs, gutters, breadcrumbs, language servers — gcode is a real editor. but the right pane is your agent stream, the bottom row is your terminal, and ⌘. routes any squiggle to claude.

~/kanban-app / src / features / board / BoardColumn.tsx ● 3 agents
BoardStore.ts BoardColumn.tsx useDragSort.ts kanban.module.css ● claude · planning
▾ src
▾ features/board
BoardStore.ts
BoardColumn.tsx
CardSurface.tsx
useDragSort.ts +
▸ filters
▸ comments
▸ lib
▸ hooks
// agent:claude · #GC-471 · 14m
export function BoardColumn({ id, label }) {
  const cards = useBoardStore(s => s.byColumn[id])
  const move = useBoardStore(s => s.moveCard)
  const bridge = useGaveduBridge()
  return (
    <Reorder.Group axis="y" values={cards}>
      {cards.map(c => <CardSurface key={c.id} card={c} />)}
    </Reorder.Group>
  )
}
plan. three steps. write BrandedSplash, swap the import, delete the old ASCII art.
write · BrandedSplash.tsx · +128 −0
edit · cli/index.tsx · +1 −1
bash · npm run test:cli · 11 ✓
approval needed: delete AsciiMotionCli.tsx
03 — voice

stop typing in five-line
ide chat boxes. just talk.

push-to-talk runs whisper.cpp locally, pipes the transcript straight to your agent, and reads the response back through piper. no cloud round-trip — your audio never leaves the machine.

0:14
"ok so the kanban — when I drag a card from review back to in-progress, it should reopen the worktree but keep the diff. also let's add a comment thread on each card."
got it. two changes — a "reopen" handler on the column transition that re-checks-out wt-{id}, and a new /comments sub-feature with optimistic updates. I'll start with the reopen handler since it's the smaller diff.
whisper.cpp · 142ms · claude · 1.4s ttft · piper · streaming
  • push-to-talk, hold space
    no wake word, no waiting for silence. talk while you scroll, code, or pace the kitchen.
  • streaming transcript
    tokens land in your prompt as you say them. the agent starts thinking before you finish.
  • 100% on-device
    audio never leaves your machine. only the final text is sent to the model you chose.
04 — your machine, your rules

no telemetry,
no vendor lock,
no surprise bills.

we're an editor, not a model reseller. bring your own keys, run llama on your m4, ship gcode behind a firewall — same product, full features.

— 01

bring your own keys

plug in your anthropic, openai, gemini, mistral, fireworks or together keys. we never proxy, never cache, never charge a margin on tokens.

OPENAI · ANTHROPIC · GEMINI · +9 more
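in practice "bring your own keys" means your keys live in your own environment. a minimal sketch using the providers' standard environment-variable names (the values below are placeholders; how gcode discovers them is an assumption based on common convention, not documented behavior):

```shell
# provider keys stay in your shell environment; never proxied, never cached.
# standard env-var names for each provider; values are placeholders.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"
export GEMINI_API_KEY="placeholder"
```

billing then happens directly between you and the provider, at provider list price.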
— 02

run local models

ollama, lm-studio, vllm — first-class. spin up llama-3.3 on your laptop and gemini in the cloud, and let gcode pick whichever is faster for the task.

OLLAMA · LM-STUDIO · VLLM · LOCAL HF
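a local backend is just an endpoint. as a hypothetical config fragment — the key names and file layout here are assumptions for illustration, not documented gcode config — wiring in an ollama-served model could look like:

```toml
# hypothetical provider entry (key names are illustrative, not documented)
[providers.local]
backend  = "ollama"
model    = "llama3.3:70b"
endpoint = "http://localhost:11434"   # ollama's default local port
```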
— 03

open source, Apache 2

fork it, patch it, ship it on-prem to a regulated team. our paid tiers buy hosted bridges and team sync, never the editor itself.

github.com/gcode-dev/gcode · ★ 18.4k
05 — pick your trade-off

cursor handed editing to one agent.
gcode hands it to your team.

we love them. we still ship in cursor sometimes. but here's where each tool actually wins.

                             gcode          cursor                 aider      copilot+
parallel agents on one repo  3+ worktrees   1                      1 (cli)    1 (chat)
swap models mid-task         yes            restart                restart    no
local-first voice            whisper.cpp    cloud                  no         cloud
byok pricing                 $0 markup      subscription + tokens  $0         subscription
open source                  Apache 2       closed                 Apache 2   closed
terminal & editor in one     native         vscode fork            cli only   vscode plugin
06 — heard from the field

shipped from three terminals at once.

"we ran nine concurrent agents through gcode during a launch crunch. the kanban kept all the diffs apart. PR review went from a bottleneck to a tab I keep open."
Priya Banerjee
staff eng · linear
"finally an editor where swapping models is one keystroke. we run cheap local llama for refactors and reach for sonnet only when it actually matters."
Júlio Marques
platform lead · railway
"voice + worktrees has reorganized how I work. I dictate three tasks while walking the dog and have three reviewable PRs when I sit back down."
Aanya Tanaka
indie hacker · sf
07 — pricing

the editor is free, forever.

paid tiers add hosted bridge, team kanban sync and on-prem deployment. tokens are always at provider cost — we never take a cut.

developer $0 / forever

the full editor, single machine, byok. all models, all features.

  • up to 3 concurrent agents
  • local voice (whisper + piper)
  • kanban + worktrees
  • community discord
team $24 / seat / mo

shared kanban, hosted bridge and ssh forwarding so the whole team sees every agent's work in real time.

  • up to 10 agents per seat
  • shared bridge · hosted
  • org sso · seat-level keys
  • priority discord + email
on-prem talk to us

air-gapped install for regulated teams. private model gateway, dpdp/eu region routing, signed builds.

  • unlimited agents
  • private gateway · audit logs
  • signed builds + sbom
  • dedicated solutions eng
install in 30 seconds

three agents,
one terminal, one repo.

brew install gcode, point it at any folder. byok, byov, byo-anything. your editor — yours forever.

install · free → $ brew install gcode
Gavedu — gCode