# Ollama + Open-WebUI + NVIDIA GPU

A Docker Compose stack for running local LLMs on your own hardware with NVIDIA GPU acceleration, a web UI, and one-command model setup via profiles.

## What's Included
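As a rough sketch of what such a stack can look like, here is a minimal Compose file wiring Ollama and Open WebUI together with NVIDIA GPU access. The service names, port mappings, volume names, and the `pull-llama3` helper service with its `setup` profile are illustrative assumptions, not the exact configuration in this repository:

```yaml
# Minimal sketch: Ollama + Open WebUI with NVIDIA GPU access.
# Names, ports, and the "setup" profile are assumptions for illustration.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama        # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all                  # expose all NVIDIA GPUs to the container
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                       # web UI on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

  # One-shot model download, activated only with: docker compose --profile setup up
  pull-llama3:
    image: ollama/ollama
    profiles: ["setup"]
    entrypoint: ["ollama", "pull", "llama3"]
    environment:
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama-data:
```

With a layout like this, `docker compose up -d` starts the UI and the model server, while `docker compose --profile setup up` additionally runs the one-shot pull service — that is the Compose profiles mechanism the "one-command model setup" description refers to.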