# ms-2k / cuda-rocm-vulkan.Dockerfile (gist by @ms-2k, last active October 20, 2025)
# Dockerfile for llama.cpp built with CUDA (12.8.1), ROCm (7.0.1), and Vulkan backends,
# targeting an RTX 3090 + AMD Instinct MI50 (or any RTX 30-series + gfx906 pairing).
ARG UBUNTU_VERSION=24.04
ARG CUDA_VERSION=12.8.1
# Target CUDA base images
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
ARG BASE_CUDA_RUN_CONTAINER=nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}
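The `ARG` defaults above can be overridden at build time without editing the file. A sketch of such an invocation (the image tag and Dockerfile name are illustrative, and the chosen versions must correspond to a published `nvidia/cuda` image):

```shell
# Build against CUDA 12.6.3 on Ubuntu 22.04 instead of the defaults
docker build \
  --build-arg UBUNTU_VERSION=22.04 \
  --build-arg CUDA_VERSION=12.6.3 \
  -f cuda-rocm-vulkan.Dockerfile \
  -t llama-cpp-cuda-rocm-vulkan .
```

Because the base-image `ARG`s are declared before the first `FROM`, they are resolved when the build starts, so a single file can track multiple CUDA/Ubuntu combinations.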
# Build-time ROCm base (CUDA devel + ROCm)
FROM ${BASE_CUDA_DEV_CONTAINER} AS build-rocm-base
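As the stage comment notes, this image layers the ROCm toolchain on top of the CUDA devel container. A hedged sketch of what that layering could look like, assuming AMD's `repo.radeon.com` apt repository layout and the `rocm-hip-sdk` meta-package (the exact repo path, suite name, and package set are assumptions, not taken from the original file):

```dockerfile
# Hypothetical sketch: install ROCm 7.0.1 into the CUDA devel image via AMD's apt repo.
# The repo URL path and 'noble' suite are assumptions based on repo.radeon.com conventions.
RUN apt-get update && apt-get install -y --no-install-recommends \
        wget gnupg ca-certificates \
    && wget -qO- https://repo.radeon.com/rocm/rocm.gpg.key \
        | gpg --dearmor > /usr/share/keyrings/rocm.gpg \
    && echo 'deb [signed-by=/usr/share/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/7.0.1 noble main' \
        > /etc/apt/sources.list.d/rocm.list \
    && apt-get update && apt-get install -y --no-install-recommends rocm-hip-sdk \
    && rm -rf /var/lib/apt/lists/*
```

Keeping CUDA as the base and adding ROCm on top (rather than the reverse) matches the `BASE_CUDA_DEV_CONTAINER` choice above, since NVIDIA publishes version-pinned devel images while ROCm installs cleanly from apt.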