Everything you need to get started.
Bodega One is built to be picked up fast. Install it, connect your LLM, and you're building with AI in minutes. No config files to wrestle with.
Up and running in three steps.
No Docker. No environment setup. No CLI.
Download and install
Grab the installer for your platform: Windows, macOS, or Linux. Double-click, follow the prompts. No PATH setup, no CLI required.
Connect your LLM
Open Settings → LLM Providers. Pick a preset (Ollama for local, OpenAI or Anthropic for cloud) and paste your API key or endpoint URL. Takes about 30 seconds. A quick way to sanity-check your endpoint is sketched just after these steps.
Start building
Open a folder in Code Mode or start a conversation in Chat Mode. The AI has full access to your files and 23 built-in tools from the first message.
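Before you paste an endpoint into step 2, it can help to confirm it actually responds. Here is a minimal sketch using only Python's standard library; the localhost URL is Ollama's usual default and the placeholder key only matters for cloud providers, so adjust both for your setup.

```python
# Minimal endpoint sanity check, assuming your provider exposes an
# OpenAI-compatible API. The URL below is Ollama's typical local default;
# swap in your own endpoint and key before running.
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"   # example: Ollama's default local endpoint
API_KEY = "not-needed-for-local"          # cloud providers need a real key here

req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    models = json.load(resp)

# If this prints a list of model IDs, Bodega One can reach the same endpoint.
print([m["id"] for m in models.get("data", [])])
```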
Beta is live now. Join the waitlist to be first in line.
Watch it in action.
Two short walkthroughs covering the most common ways to bring a local LLM into Bodega One.
Ollama in under a minute
Already running Ollama? Bodega One auto-detects it. New to Ollama? We walk through installing it, pulling your first model, and using it in Code Mode.
Local llama.cpp, no CLI
Managed runtime, GGUF catalog, hot-swap, crash recovery. Click a model and it loads. Pull anything from Hugging Face without touching a config file.
More walkthroughs on the Bodega One YouTube channel.
Four things to understand.
These aren't marketing terms. They're the actual architecture of how Bodega One works.
Code Mode
Full IDE with an AI agent
Monaco editor, file tree, multi-terminal, and an autonomous coding agent in one window. The agent writes real diffs, not suggestions. You review and apply.
Chat Mode
Conversational AI with real tools
Full-screen AI chat with persistent memory and 23 built-in tools. The AI can read files, run shell commands, search the web, and more. All from the conversation.
QEL
Quality Enforcement Layer
Every code change the agent writes passes through 5 verification stages before you see it, including contract extraction, incremental checks, proof gates (tsc, pytest, py_compile), and targeted line-level repair. The AI can't game its own checks.
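To make the proof-gate idea concrete, here is a rough sketch of what one such stage could look like, built from the same checkers named above (py_compile, pytest, tsc). It illustrates the concept only; the proof_gate function and its three gates are hypothetical, not Bodega One's internal code.

```python
# Illustrative sketch of a "proof gate": run real checkers against the files
# an agent touched and reject the change if any gate fails. Conceptual only;
# the function and gate structure here are hypothetical.
import py_compile
import subprocess
from pathlib import Path

def proof_gate(changed_files: list[Path]) -> bool:
    # Gate 1: every touched Python file must at least compile.
    for f in changed_files:
        if f.suffix == ".py":
            try:
                py_compile.compile(str(f), doraise=True)
            except py_compile.PyCompileError:
                return False

    # Gate 2: the project's test suite must pass (pytest exits non-zero on failure).
    if subprocess.run(["pytest", "-q"]).returncode != 0:
        return False

    # Gate 3: TypeScript files must type-check (tsc exits non-zero on errors).
    if any(f.suffix in (".ts", ".tsx") for f in changed_files):
        if subprocess.run(["npx", "tsc", "--noEmit"]).returncode != 0:
            return False

    return True
```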
How QEL works →
BYOLLM
Bring Your Own LLM
Connect any of 15 supported providers. Run Ollama locally for full privacy. Switch to Claude for complex reasoning. Swap models any time. Never locked in.
What BYOLLM means →
Connect any model you want.
15 provider presets built in. Open Settings → LLM Providers, pick a preset, enter your key or local endpoint. Takes about 30 seconds.
Local: runs on your machine
Ollama (recommended)
Best for privacy: runs fully local
LM Studio
Local models with a GUI
vLLM
High-throughput local serving
llama.cpp
Lightweight GGUF inference
LocalAI
OpenAI-compatible local API
KoboldCpp
Flexible GGUF serving
GPT4All
Desktop local models
MLX
Optimized for Apple Silicon
Jan
Local AI desktop client
Cloud: bring your own key
OpenAI
GPT-4o, o3, and other o-series models
Groq
Fast inference for open models
Together AI
Open models at scale
OpenRouter
Multi-provider gateway
Azure OpenAI
Enterprise OpenAI deployment
Custom endpoint
Any OpenAI-compatible API
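If you're pointing Bodega One at one of the local runtimes above, these are the endpoints they typically listen on out of the box. Treat them as starting points, not guarantees; confirm the port in your runtime's own settings before pasting the URL in.

```python
# Typical default endpoints for a few of the local runtimes listed above.
# Defaults can change between versions and installs, so verify yours.
LOCAL_ENDPOINTS = {
    "ollama":    "http://localhost:11434",      # native API; OpenAI-compatible under /v1
    "lm_studio": "http://localhost:1234/v1",    # LM Studio's local server
    "vllm":      "http://localhost:8000/v1",    # vLLM's OpenAI-compatible server
    "llama_cpp": "http://localhost:8080/v1",    # llama.cpp's llama-server
}
```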
Using Ollama? That's the fastest path to full privacy.
Install Ollama, pull a model (ollama pull llama3.2), then set the endpoint to http://localhost:11434 in Bodega One. Nothing leaves your machine.
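If you want to smoke-test Ollama before pointing Bodega One at it, a short script against its native API works. This sketch assumes Ollama is running on its default port and that you've already pulled llama3.2.

```python
# Ask the local model for one short completion via Ollama's native API.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2",
    "prompt": "Say hello in five words.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    print(json.load(resp)["response"])
```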
Pick a model for your hardware.
Not sure what to run? Match your GPU to the right model. These are tested recommendations for agentic coding workloads in Bodega One.
| Your GPU / RAM | Recommended model | Quality |
|---|---|---|
| < 4 GB | SmolLM3-3B Q4 | Basic |
| 6–8 GB | Qwen3.5-9B Q4_K_M | Good |
| 8–12 GB | Qwen3.5-27B Q4 | Strong |
| 12–16 GB | Gemma 4 26B MoE Q4 | Excellent |
| 16–24 GB | Qwen3.6-27B Q4 | Gold |
| 24–32 GB | Qwen3.6-35B-A3B Q4 | Gold |
| 48 GB+ | GLM-5.1 / Llama 4 Scout | Frontier |
| Apple Silicon 16 GB | Qwen3.5-9B MLX Q4 | Good |
| Apple Silicon 64 GB+ | Qwen3.6-27B MLX | Gold |
| Apple Silicon 128 GB+ | GLM-5.1 MLX / Qwen3.6-35B MLX | Frontier |
Not sure how much VRAM you have? Run nvidia-smi on Windows or Linux, or check System Report → Graphics/Displays on a Mac. VRAM calculator →
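The rule of thumb behind recommendations like the table above: a Q4-class quant stores weights at roughly half a byte per parameter, and you want extra headroom for the KV cache and runtime overhead. Here's a back-of-the-envelope sketch; the 4.5 bits-per-weight and 1.2x overhead figures are rough assumptions, not measured constants.

```python
# Rough VRAM estimate for a quantized model: weights plus headroom for the
# KV cache, activations, and runtime overhead.
def estimate_vram_gb(params_billions: float, bits_per_weight: float = 4.5,
                     overhead: float = 1.2) -> float:
    weight_gb = params_billions * bits_per_weight / 8   # GB taken by the weights
    return weight_gb * overhead

# Example: a 9B model at ~Q4_K_M works out to roughly 6 GB, which is why the
# table pairs that size with 6-8 GB GPUs.
print(f"{estimate_vram_gb(9):.1f} GB")
```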
Still have questions?
Discord is the fastest way to get answers from the team and other beta users. Or join the waitlist and we'll walk you through setup on day one.