About RunAIHome
RunAIHome is a practical resource for people who want to run AI on their own hardware. Not enterprise GPUs. Not cloud APIs you can't audit. Just a consumer card, an OS you control, and software you understand.
Why this site exists
The local AI scene moves fast, and most coverage lives either in scattered Reddit threads or in breathless news posts that never get to the actual numbers. We do the slow, boring work: repeatable benchmarks, end-to-end tutorials, and side-by-side comparisons that hold up past the next driver update.
What you'll find here
- GPU benchmarks for AI workloads — Stable Diffusion, ComfyUI, Flux, local LLM inference, training on limited VRAM.
- Step-by-step tutorials — Ollama, llama.cpp, ComfyUI workflows, LoRA training, quantization.
- Honest tool comparisons — Ollama vs LM Studio, ComfyUI vs A1111, cloud vs local for specific workloads.
- Self-hosted AI stack — Open WebUI, n8n automations, privacy-first setups.
Editorial approach
Every benchmark you read here was run on real hardware we own, with the exact configuration documented. Every tutorial was followed end-to-end before publishing. Affiliate links exist where they make sense (you can usually tell), but they never influence which tool we recommend.
Contact
Questions, corrections, or hardware you'd like us to test? Reach out via the contact email listed on our GitHub; a dedicated contact page is coming soon.