A docker compose file to launch an integrated Frigate / Ollama / Open WebUI service using an NVIDIA GPU for computation.
# This docker compose file deploys an integrated Frigate / Ollama / Open WebUI stack,
# using an NVIDIA GPU for Frigate model inferencing and for the Ollama LLMs.
#
# Installation
#
# * Ensure your Docker deployment is enabled to provide GPU services:
#   - https://docs.docker.com/compose/how-tos/gpu-support/
#
# * Create a directory containing the following sub-directories:
#   - frigate/config
#   - ollama
#   - open-webui
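#
#   For example, the directories can be created in one step (a minimal sketch,
#   assuming a POSIX shell, run from the deployment directory):
#
#       mkdir -p frigate/config ollama open-webui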
#
# * Place this compose.yaml file in the directory containing those sub-directories.
#
# * Identify where you want Frigate to store its history recordings. This location
#   should have several hundred GB, if not several TB, of capacity. Update the
#   "/mnt/storage/frigate" path below to the location you select.
#
# * Add an environment variable file at frigate/frigate.env that contains
#   the environment variables you need in your Frigate config, such as
#   camera passwords.
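#
#   A minimal frigate.env sketch; Frigate substitutes environment variables
#   prefixed with FRIGATE_ into its config, and the value shown is a placeholder:
#
#       FRIGATE_RTSP_PASSWORD=your-camera-password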
#
# * Place your Frigate configuration file in "frigate/config/config.yml". To take
#   advantage of Ollama for generative AI in Frigate, your Frigate config should
#   have this section:
#
#       genai:
#         enabled: true
#         provider: ollama
#         base_url: http://ollama:11434
#         model: <<select a pulled model>>
#
#   Then follow Frigate's instructions on using the generative AI feature:
#   - https://docs.frigate.video/configuration/genai/
#
# * You may reduce the model list in the YOLO_MODELS variable below to just the
#   one you selected in your Frigate config; this saves time on the initial
#   launch of Frigate in this deployment.
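#
#   For example, if your Frigate config uses only yolov7-320 (one of the models
#   in the default list below), the variable reduces to:
#
#       YOLO_MODELS=yolov7-320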
#
# * Launch the service with:
#
#       docker compose up -d
#
#   The first time you do this, Frigate needs to build the YOLO models, which will
#   take a while. You will not be able to view the Frigate website until that is
#   complete. Check the logs with:
#
#       docker compose logs --tail 100 -f
#
#   Note that "docker ps" may report the Frigate container as unhealthy while this
#   model build process is occurring.
#
# * Pull models for Ollama using its running Docker instance with:
#
#       docker exec -it ollama ollama pull <<model name>>
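#
#   For example, to pull a vision-capable model (llava is one of the Ollama models
#   Frigate's GenAI documentation mentions; the model choice is yours):
#
#       docker exec -it ollama ollama pull llava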
#
# * Monitor Ollama's model load (and thus verify whether the GPU is being used) with:
#
#       watch docker exec ollama ollama ps
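#
#   You can also confirm the Ollama API is reachable and list the pulled models
#   over plain HTTP (/api/tags is Ollama's model-listing endpoint):
#
#       curl http://localhost:11434/api/tags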
#
# * You may view the services at the following URLs:
#   - Frigate: http://<<your-server>>:5000/
#   - Open WebUI (Ollama chat UI): http://<<your-server>>:3000/
#
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    pull_policy: always
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    shm_size: "2000mb" # update for your cameras; see the Frigate documentation
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    # Other inferencing devices you might consider
    # devices:
    #   - /dev/bus/usb:/dev/bus/usb # passes the USB Coral; needs to be modified for other versions
    #   - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral; follow the driver instructions at https://coral.ai/docs/m2/get-started/#2a-on-linux
    #   - /dev/dri/renderD128 # for Intel hwaccel; needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./frigate/config:/config
      # change the file path to where you want history to be stored
      - /mnt/storage/frigate:/media/frigate
      - type: tmpfs # Optional: 2GB of memory, reduces SSD wear
        target: /tmp/cache
        tmpfs:
          size: 2000000000
    ports:
      - "5000:5000"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
      - "8971:8971"
    env_file:
      - ./frigate/frigate.env
    environment:
      - YOLO_MODELS=yolov7-320,yolov7-640,yolov7-tiny-416,yolov7x-320,yolov7x-640
  ollama:
    container_name: ollama
    restart: unless-stopped
    pull_policy: always
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - ./ollama:/root/.ollama
    ports:
      - "11434:11434"
  open-webui:
    container_name: open-webui
    pull_policy: always
    restart: unless-stopped
    image: ghcr.io/open-webui/open-webui:main
    volumes:
      - ./open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"