
@danielrosehill
Created December 6, 2025 21:21

Preventing ComfyUI Custom Nodes from Overwriting PyTorch ROCm

The Problem

When using ComfyUI with an AMD GPU and PyTorch ROCm, installing custom nodes via ComfyUI-Manager (or manually via pip) can silently replace your ROCm-enabled PyTorch with the default CUDA version.

This happens because:

  1. Custom nodes specify torch as a dependency in their requirements.txt
  2. pip resolves this to the default PyPI torch package (CUDA version)
  3. Your ROCm torch gets overwritten without warning
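The failure in step 2 comes down to a single unqualified line in the node's requirements file. A minimal sketch of what such a file looks like (paths are illustrative, not from any real node):

```shell
# Simulate a custom node shipping a bare "torch" requirement.
mkdir -p /tmp/demo_node
printf 'torch\nnumpy\n' > /tmp/demo_node/requirements.txt

# A bare "torch" line carries no +rocm local version tag, so pip
# resolves it against PyPI, where the default wheel is the CUDA build.
grep '^torch$' /tmp/demo_node/requirements.txt
```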

Symptoms

After installing custom nodes, ComfyUI fails with:

RuntimeError: Found no NVIDIA driver on your system.

Or you see in the startup logs:

pytorch version: 2.8.0+cu128  # CUDA version instead of ROCm

When it should show:

pytorch version: 2.5.1+rocm6.2

The Solution

Use pip constraints to pin the ROCm PyTorch versions so they cannot be replaced.

Step 1: Install PyTorch ROCm

# Activate your ComfyUI environment
conda activate comfyui  # or source your venv

# Install PyTorch with ROCm support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2

Step 2: Create a Constraints File

Create a file that pins the exact ROCm versions:

# For conda environments
cat > ~/miniconda3/envs/YOUR_ENV_NAME/pip-constraints.txt << 'EOF'
# Pin PyTorch ROCm - DO NOT let pip replace these
torch==2.5.1+rocm6.2
torchvision==0.20.1+rocm6.2
torchaudio==2.5.1+rocm6.2
EOF

Or for a venv:

cat > /path/to/your/venv/pip-constraints.txt << 'EOF'
torch==2.5.1+rocm6.2
torchvision==0.20.1+rocm6.2
torchaudio==2.5.1+rocm6.2
EOF
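If you prefer not to touch pip's configuration at all, the same constraints file can also be passed per invocation with pip's `-c`/`--constraint` flag. A sketch using a throwaway path:

```shell
# Write the same pins to a throwaway location for demonstration.
cat > /tmp/pip-constraints.txt << 'EOF'
torch==2.5.1+rocm6.2
torchvision==0.20.1+rocm6.2
torchaudio==2.5.1+rocm6.2
EOF

# In the real environment you would then install a node's requirements with:
#   pip install -c /tmp/pip-constraints.txt -r custom_nodes/SOME_NODE/requirements.txt
grep -c '==' /tmp/pip-constraints.txt
```

The drawback is that you must remember the flag every time, which is why the pip.conf approach below is the more robust default.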

Step 3: Configure pip to Use Constraints

Create a pip configuration that automatically applies these constraints:

# For conda
cat > ~/miniconda3/envs/YOUR_ENV_NAME/pip.conf << 'EOF'
[install]
constraint = /home/YOUR_USERNAME/miniconda3/envs/YOUR_ENV_NAME/pip-constraints.txt
EOF

Or for a venv:

cat > /path/to/your/venv/pip.conf << 'EOF'
[install]
constraint = /path/to/your/venv/pip-constraints.txt
EOF

Note: pip reads per-environment ("site") configuration from pip.conf at the environment root ($VIRTUAL_ENV for a venv, the env prefix for conda); it does not search a pip/ subdirectory. Use an absolute path for the constraint value so it resolves regardless of the working directory.

How It Works

With this configuration:

  • Any pip install command in this environment will respect the constraints
  • If a custom node tries to install torch, pip will see the constraint and keep your ROCm version
  • You'll see warnings about version conflicts, but your ROCm torch stays intact
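pip also honors the `PIP_CONSTRAINT` environment variable, which is handy when you launch ComfyUI from a wrapper script and want the pin to follow every pip call in that session. The path below is an example; substitute your real constraints file:

```shell
# Export once, e.g. in the script that activates the environment;
# every subsequent `pip install` in this shell honors the pin.
export PIP_CONSTRAINT="$HOME/miniconda3/envs/comfyui/pip-constraints.txt"
echo "$PIP_CONSTRAINT"
```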

Verifying It Works

After setup, verify your PyTorch is correct:

python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"

Should output:

PyTorch: 2.5.1+rocm6.2
CUDA available: True  # ROCm presents as CUDA
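If you want to classify the version string from the startup log (or from `torch.__version__`) without eyeballing it, the local version segment after the `+` is enough. A small illustrative helper, not part of ComfyUI or PyTorch:

```python
def build_flavor(version: str) -> str:
    """Classify a PyTorch version string by its local version tag."""
    if "+rocm" in version:
        return "rocm"
    if "+cu" in version:
        return "cuda"
    return "cpu"

print(build_flavor("2.5.1+rocm6.2"))  # rocm
print(build_flavor("2.8.0+cu128"))    # cuda
```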

Notes

  • Update the version numbers in the constraints file when you intentionally upgrade PyTorch ROCm
  • Some custom nodes may warn about version conflicts during installation; this is expected and usually harmless
  • The constraints only apply to this specific environment, not system-wide
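When you do upgrade intentionally (first bullet above), the constraints file can be regenerated from whatever is actually installed rather than edited by hand. A sketch using simulated `pip freeze` output so the filtering step can be shown in isolation:

```shell
# Stand-in for `pip freeze`; in the real environment, pipe pip freeze itself.
printf 'numpy==2.1.0\ntorch==2.5.1+rocm6.2\ntorchvision==0.20.1+rocm6.2\ntorchaudio==2.5.1+rocm6.2\n' |
  grep -E '^(torch|torchvision|torchaudio)==' > /tmp/pip-constraints.txt
cat /tmp/pip-constraints.txt
```

Against the live environment: `pip freeze | grep -E '^(torch|torchvision|torchaudio)==' > pip-constraints.txt`.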

Environment Details

Tested on:

  • Ubuntu 25.04
  • AMD Radeon RX 7700 XT (gfx1101)
  • ROCm 6.2
  • ComfyUI 0.3.77

This gist was generated by Claude Code. Please verify any information before relying on it.
