Hivemind only works on Linux. Thankfully, you can still use WSL (Windows Subsystem for Linux) to run the training.
Follow this guide: https://learn.microsoft.com/es-es/windows/wsl/install
Basically, open a CMD window (the black window with white text), start your WSL shell, and type:
```bash
# Install Python 3.10 from the deadsnakes PPA and create a virtual environment
sudo apt update
sudo apt install software-properties-common -y
sudo add-apt-repository --yes ppa:deadsnakes/ppa
sudo apt update && sudo apt install python3.10 python3.10-venv python3.10-dev -y
python3.10 -m venv env
source env/bin/activate
```
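Note that the `apt`/`python` commands above run inside the WSL (Linux) shell, not in CMD itself. If WSL is not installed yet, the Microsoft guide linked above essentially comes down to running the following from an administrator CMD or PowerShell window (exact steps may differ between Windows versions):

```
wsl --install -d Ubuntu
```

Reboot if prompted, then open the Linux shell by typing `wsl` (or by launching the "Ubuntu" app) and run the commands above.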
```python
# This module is meant for direct use only. For API usage please check SDA-TRAINER.
# Based off NVIDIA's demo
import argparse
import os

import onnx
import torch
from diffusers import UNet2DConditionModel, AutoencoderKL
from transformers import CLIPTextModel

from threads.trt.models import CLIP, UNet, VAE
from threads.trt.utilities import Engine
```
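As a rough illustration of what a conversion module like this does (a generic sketch, not the project's actual code; the model name, tensor shapes, and output filename are assumptions), the diffusers UNet is typically exported to ONNX first, and a TensorRT engine is then built from that ONNX file:

```python
import torch
from diffusers import UNet2DConditionModel

# Wrapper so the traced graph returns a plain tensor instead of a ModelOutput object.
class UNetWrapper(torch.nn.Module):
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]

# Checkpoint name is an assumption; adjust to whatever model you are converting.
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)
wrapper = UNetWrapper(unet).eval()

# Dummy inputs: latents (1x4x64x64 for 512x512 images), timestep, CLIP text embeddings.
sample = torch.randn(1, 4, 64, 64)
timestep = torch.tensor([1.0])
encoder_hidden_states = torch.randn(1, 77, 768)

torch.onnx.export(
    wrapper,
    (sample, timestep, encoder_hidden_states),
    "unet.onnx",  # hypothetical output path
    input_names=["sample", "timestep", "encoder_hidden_states"],
    output_names=["latent"],
    opset_version=16,
)
```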
```
diffusers>=0.5.1
numpy==1.23.4
wandb==0.13.4
torch
torchvision
transformers>=4.21.0
huggingface-hub>=0.10.0
Pillow==9.2.0
tqdm==4.64.1
ftfy==6.1.1
```
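Assuming the list above is saved as `requirements.txt`, it can be installed inside the virtual environment created earlier:

```bash
source env/bin/activate
pip install -r requirements.txt
```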
```python
# Install bitsandbytes:
# `nvcc --version` to get CUDA version.
# `pip install -i https://test.pypi.org/simple/ bitsandbytes-cudaXXX` to install for current CUDA.
# Example Usage:
# Single GPU: torchrun --nproc_per_node=1 trainer/diffusers_trainer.py --model="CompVis/stable-diffusion-v1-4" --run_name="liminal" --dataset="liminal-dataset" --hf_token="hf_blablabla" --bucket_side_min=64 --use_8bit_adam=True --gradient_checkpointing=True --batch_size=1 --fp16=True --image_log_steps=250 --epochs=20 --resolution=768 --use_ema=True
# Multiple GPUs: torchrun --nproc_per_node=N trainer/diffusers_trainer.py --model="CompVis/stable-diffusion-v1-4" --run_name="liminal" --dataset="liminal-dataset" --hf_token="hf_blablabla" --bucket_side_min=64 --use_8bit_adam=True --gradient_checkpointing=True --batch_size=10 --fp16=True --image_log_steps=250 --epochs=20 --resolution=768 --use_ema=True
import argparse
import socket
import sys
```
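For example, if `nvcc --version` reports CUDA 11.3, the `XXX` suffix would be `113` (the CUDA version with the dot removed). The exact package versions available on the index may vary:

```bash
nvcc --version   # check the "release" line, e.g. 11.3
pip install -i https://test.pypi.org/simple/ bitsandbytes-cuda113
```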
Once everything above is installed, you can start a training node from your WSL shell by running:
```bash
torchrun --nproc_per_node=1 \
    train.py \
    --workingdirectory hivemindtemp \
    --wantedimages 500 \
    --datasetserver="DATASET_SERVER_IP" \
    --node="true" \
    --o_port1=LOCAL_TCP_PORT \
    --o_port2=LOCAL_UDP_PORT \
    --ip_is_different="true" \
    --p_ip="PUBLIC_IP"
```
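For reference, a filled-in invocation might look like this; the IP addresses and port numbers below are placeholders, not real endpoints. If you are behind a router/NAT, the chosen TCP and UDP ports presumably need to be forwarded to your machine so other peers can reach you at the public IP:

```bash
torchrun --nproc_per_node=1 \
    train.py \
    --workingdirectory hivemindtemp \
    --wantedimages 500 \
    --datasetserver="203.0.113.10" \
    --node="true" \
    --o_port1=45555 \
    --o_port2=45556 \
    --ip_is_different="true" \
    --p_ip="198.51.100.7"
```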
The following setup script installs the system and Python dependencies on a fresh Ubuntu/Debian machine:
```bash
#!/bin/bash
# Install system dependencies
apt-get update -y
apt-get install htop screen psmisc python3-pip unzip wget gcc g++ nano -y
# Install Python dependencies
wget https://gist.githubusercontent.com/chavinlo/fe8afc02e03d9cc4eb545c4c306c8a73/raw/d9a5ad446fe662dc3e6597163a1f8d5546a8a795/requirements.txt
pip install -r requirements.txt OmegaConf
pip install triton==2.0.0.dev20221120
```
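To use it, save the script (the filename `setup.sh` here is just a placeholder) and run it as root, since it calls `apt-get`:

```bash
sudo bash setup.sh
```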