When using ComfyUI with an AMD GPU and PyTorch ROCm, installing custom nodes via ComfyUI-Manager (or manually via pip) can silently replace your ROCm-enabled PyTorch with the default CUDA version.
This happens because:
- Custom nodes specify `torch` as a dependency in their `requirements.txt` (see the illustrative example below)
- pip resolves this to the default PyPI `torch` package (the CUDA build)
- Your ROCm torch gets overwritten without warning
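
For example, a custom node's `requirements.txt` often lists torch unpinned. The exact contents vary per node; this is a hypothetical sketch:

```
# requirements.txt of a hypothetical custom node
torch          # unpinned, so pip resolves it to the default PyPI (CUDA) wheel
opencv-python
numpy
```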
After installing custom nodes, ComfyUI fails with:

```
RuntimeError: Found no NVIDIA driver on your system.
```

Or you see in the startup logs:

```
pytorch version: 2.8.0+cu128   # CUDA version instead of ROCm
```

When it should show:

```
pytorch version: 2.5.1+rocm6.2
```
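
To check which build is currently installed without launching ComfyUI, you can ask torch directly (run this inside the ComfyUI environment):

```bash
python -c "import torch; print(torch.__version__)"
# A ROCm build prints a version like 2.5.1+rocm6.2; a CUDA build prints one like 2.8.0+cu128
```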
Use pip constraints to pin the ROCm PyTorch versions so they cannot be replaced.
First, make sure the ROCm build of PyTorch is installed:

```bash
# Activate your ComfyUI environment
conda activate comfyui  # or source your venv

# Install PyTorch with ROCm support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
```

Create a file that pins the exact ROCm versions:
```bash
# For conda environments
cat > ~/miniconda3/envs/YOUR_ENV_NAME/pip-constraints.txt << 'EOF'
# Pin PyTorch ROCm - DO NOT let pip replace these
torch==2.5.1+rocm6.2
torchvision==0.20.1+rocm6.2
torchaudio==2.5.1+rocm6.2
EOF
```

Or for a venv:
```bash
cat > /path/to/your/venv/pip-constraints.txt << 'EOF'
torch==2.5.1+rocm6.2
torchvision==0.20.1+rocm6.2
torchaudio==2.5.1+rocm6.2
EOF
```

Create a pip configuration that automatically applies these constraints:
```bash
# For conda environments (pip reads the site-level config at the root of the environment)
cat > ~/miniconda3/envs/YOUR_ENV_NAME/pip.conf << 'EOF'
[install]
constraint = /home/YOUR_USERNAME/miniconda3/envs/YOUR_ENV_NAME/pip-constraints.txt
EOF
```

Or for a venv:
```bash
cat > /path/to/your/venv/pip.conf << 'EOF'
[install]
constraint = /path/to/your/venv/pip-constraints.txt
EOF
```

With this configuration:
- Any `pip install` command in this environment will respect the constraints
- If a custom node tries to install `torch`, pip will see the constraint and keep your ROCm version
- You'll see warnings about version conflicts, but your ROCm torch stays intact
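
One way to sanity-check that pip is actually picking up the constraint (the exact output varies by pip version):

```bash
# Show the effective pip configuration, including which config files were read
pip config list -v

# Optional: test dependency resolution without changing the environment (pip >= 22.2)
pip install --dry-run torch
```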
After setup, verify your PyTorch is correct:
python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"Should output:
PyTorch: 2.5.1+rocm6.2
CUDA available: True # ROCm presents as CUDA
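
For a stronger check that the ROCm backend is really in use, you can also inspect the HIP version and the detected GPU (assuming a ROCm wheel, where `torch.version.hip` is set; it is `None` on CUDA builds):

```bash
python -c "import torch; print('HIP:', torch.version.hip); print('GPU:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'none')"
```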
A few additional notes:

- Update the version numbers in the constraints file when you intentionally upgrade PyTorch ROCm (see the sketch after this list)
- Some custom nodes may complain about version conflicts - this is expected and usually harmless
- The constraints only apply to this specific environment, not system-wide
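
When you do upgrade intentionally, edit the constraints file first, otherwise pip will keep holding torch at the pinned version. A sketch of that workflow:

```bash
# 1. Update the pins in pip-constraints.txt to the new ROCm versions you want
# 2. Upgrade from the ROCm wheel index so the request and the constraint agree
pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
```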
Tested on:
- Ubuntu 25.04
- AMD Radeon RX 7700 XT (gfx1101)
- ROCm 6.2
- ComfyUI 0.3.77
This gist was generated by Claude Code. Please verify any information before relying on it.