GitHub Copilot is a cloud-only plugin. Cursor is a VS Code fork with a subscription. Bodega One is a local-first desktop IDE with a one-time price. The right choice depends on whether you need local/offline capability and whether you want to own what you pay for.
These three tools get compared constantly, but they are fundamentally different products, and a feature chart alone misses the point. This post covers what each tool actually is, who it's built for, and what you get for the money.
## GitHub Copilot
Copilot is a VS Code (and JetBrains) plugin that adds AI completion and chat to your existing editor. It is cloud-only. All inference runs on GitHub's servers using Microsoft/OpenAI models. You keep your existing IDE, your existing keybindings, your existing workflow. AI is added as an overlay.
Tab completion is Copilot's strongest feature. It is well integrated with VS Code's completion UI and benefits from years of training on real-world code. The chat (“Copilot Chat”) is solid for questions and quick refactors.
What Copilot lacks: an autonomous agent that can work through multi-step tasks without per-step approval, offline/local model support, and any path to non-subscription ownership.
- Price: $10/month (Pro) or $19/user/month (Business)
- Local LLM support: No
- Offline capable: No
- Agent: Copilot Agent (experimental, limited)
- Best for: VS Code / JetBrains users who want to add AI without switching editors
## Cursor
Cursor is a full VS Code fork with AI built into every layer. Because it is a fork rather than a plugin, the team can integrate AI more deeply: Tab completion predicts multi-line changes, codebase indexing lets the AI understand your whole project, and the agent (Composer in agent mode) can work through tasks with filesystem access.
It is cloud-first. The default models are GPT-4o and Claude Sonnet, billed through Cursor's usage quota. Local model support exists in settings, but it is not the primary path, and some features (Tab completion, codebase indexing) still require cloud connectivity.
- Price: $20/month (Pro) or $40/user/month (Teams)
- Local LLM support: Partial (cloud features still require connectivity)
- Offline capable: No (core features require internet)
- Agent: Composer (agent mode), strong and mature
- Best for: Teams already in the VS Code ecosystem who want the strongest cloud AI coding UX
## Bodega One
Bodega One is a local-first AI desktop environment. It's not a VS Code fork or a plugin. It's a standalone Electron app with a Monaco editor, an AI chat panel, and an autonomous coding agent that runs your own models on your own hardware.
The fundamental difference is ownership. You pay once and bring your own LLM via any of 15+ provider presets, from Ollama and LM Studio to cloud providers if you want them. There is no subscription, no usage quota, and no data sent to Bodega One servers.
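BYOLLM presets like these typically wrap a local server's OpenAI-compatible API. As a rough illustration of what such a preset does under the hood, here is a minimal sketch against Ollama's default local endpoint (the `llama3` model name is an assumption; substitute whatever you have pulled):

```python
import json
import urllib.request

# Assumption: a local Ollama server on its default port, with a model
# (here "llama3", purely illustrative) already pulled via `ollama pull`.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the server running, send the request and read the reply:
#   with urllib.request.urlopen(build_request("Explain this diff")) as resp:
#       reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

Nothing in this loop ever leaves localhost, which is the whole point of the local-first model.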
The tradeoff compared to Cursor: the codebase indexing is newer, and if you have years of VS Code muscle memory, there's a switching cost. For developers who prioritize data privacy, cost predictability, or working offline, those tradeoffs are worth it.
- Price: $79 one-time (Personal, 2 machines) or $149 one-time (Pro, 5 machines)
- Local LLM support: Yes, fully. Local-first by design.
- Offline capable: Yes. Air-gap mode enforces zero egress.
- Agent: Autonomous agent with QEL verification, runs multi-step tasks without per-step approval
- Best for: Developers who want to own their tools, run local models, and avoid perpetual subscriptions
## Side-by-side comparison
| Feature | GitHub Copilot | Cursor | Bodega One |
|---|---|---|---|
| Type | Plugin (VS Code/JetBrains) | VS Code fork | Standalone desktop app |
| Pricing model | Subscription ($10+/mo) | Subscription ($20+/mo) | One-time ($79-149) |
| Local LLM support | No | Partial | Yes (15+ providers) |
| Offline / air-gap | No | No | Yes (enforced) |
| Autonomous agent | Limited | Strong (Composer) | Yes (with QEL verification) |
| Data stays local | No | No (by default) | Yes (by default) |
| 3-year cost (1 dev) | $360+ | $720+ | $79 (no renewals) |
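The 3-year row reduces to simple arithmetic, assuming the entry-level individual tiers and no price changes over the period:

```python
MONTHS = 36  # three years

copilot = 10 * MONTHS   # $10/mo entry tier  -> $360
cursor = 20 * MONTHS    # $20/mo Pro tier    -> $720
bodega = 79             # one-time purchase, no renewals

print(copilot, cursor, bodega)  # -> 360 720 79
```

Team tiers and usage overages only widen the gap, which is why the table marks the subscription figures with a "+".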
## How to choose
- You're happy with VS Code and just want AI completions: GitHub Copilot. Least disruption, reasonable price, solid completions.
- You want the strongest AI coding UX and will pay monthly: Cursor. Codebase indexing and Composer are ahead of the field for cloud AI workflows.
- You want to own your tools, use local models, or work offline: Bodega One. One-time price, BYOLLM, air-gap capable.
- You work in a regulated environment (finance, healthcare, defense): Bodega One is the only one of the three with genuine air-gap enforcement. See the regulated industries post.
For a deeper look at the cost math over three years, see *The real cost of AI subscriptions*.
## Common questions
- What's the main difference between Bodega One and Cursor?
- Cursor is a VS Code fork designed around hosted cloud models and costs $20/month. Bodega One is a full desktop IDE built local-first from day one: a one-time $79 purchase, no monthly subscription, and support for 15+ LLM provider presets including local models like Ollama.
- How much does GitHub Copilot cost compared to Bodega One?
- GitHub Copilot Pro is $10/month ($120/year) and caps at 300 premium requests per month. Bodega One is a one-time $79 purchase with unlimited local inference when you run your own model. The standard developer AI stack costs roughly $840/year, or $2,520 over three years; Bodega One pays for itself in under two months.
- Can I use local models with Cursor and GitHub Copilot?
- Cursor supports OpenAI-compatible endpoints like Ollama's but was designed around cloud models and remains cloud-first. GitHub Copilot runs on OpenAI infrastructure only. Bodega One is built local-first by design with 15+ provider presets, including Ollama, LM Studio, vLLM, and llama.cpp, plus an air-gap mode that enforces zero network egress.
- Does Bodega One have an autonomous agent like Cursor's Composer?
- Yes. Bodega One includes an autonomous coding agent with 23 built-in tools, including file operations, web search, and shell execution. It runs through a Quality Enforcement Layer (QEL) that verifies code compiles and passes tests before you see it, so the agent is accountable for delivering working code, not just attempting it.
## Ready to own your tools?
Beta is live now. Join the waitlist for full launch.