Blog
Thoughts on local-first AI, developer tools, and what we're building.
Best Tabnine alternatives in 2026 (11 min read)
Kilo Pass vs Anthropic API direct: the cost math (10 min read)
Roo Code is shutting down: where to go next (10 min read)
Best GitHub Copilot alternatives in 2026 (11 min read)
Best Cursor alternatives in 2026 (11 min read)
Set up an offline AI IDE in 2026: what actually works (9 min read)
Air-gapped AI coding in 2026: a developer setup guide (8 min read)
The best Kilo Code alternatives in 2026 (and who should stay) (8 min read)
GitHub Copilot is training on your code by default. Here is how to opt out. (5 min read)
Gemma 4: Google's first Apache 2.0 open model is also its best (7 min read)
How to use our free VRAM calculator for local LLMs (6 min read)
AI IDE cost comparison: how much are you really paying? (7 min read)
How to plan your LLM context window budget (6 min read)
The real cost of Kilo Code in 2026 (7 min read)
How to migrate from Tabnine to Bodega One (8 min read)
How to migrate from Windsurf to Bodega One (8 min read)
Augment Code Is Sunsetting Completions: What to Do (9 min read)
Are local LLMs good enough for coding in 2026? (8 min read)
GitHub Copilot vs Cursor vs Bodega One: honest pick (9 min read)
How to run DeepSeek locally with Bodega One (7 min read)
Air-gapped AI development for regulated industries (7 min read)
LM Studio + Bodega One: complete setup guide (6 min read)
AI coding tools that work completely offline (2026) (8 min read)
Which GPU for local AI? A developer's guide (7 min read)
Best local AI IDEs in 2026: honest developer review (10 min read)
Air-gap mode: 9 layers that guarantee zero network egress (6 min read)
KV cache: how we get 40-70% reuse per LLM session (5 min read)
Setting up Ollama with Bodega One: model guide (8 min read)
How QEL works: AI that proves its own code (6 min read)
The real cost of AI coding subscriptions vs one-time purchase (3-year analysis) (8 min read)
BYOLLM: what it means and why it matters (7 min read)
Why we built a local-first AI IDE (6 min read)