@hughescr
Created September 12, 2024 04:43
Sample config file for Flux LoRA generation on MPS
---
job: extension
config:
  # this name will be used for the output folder and file names
  name: "crh_lora_v1"
  process:
    - type: 'sd_trainer'
      # root folder to save training sessions/samples/weights
      training_folder: "output"
      # uncomment to see performance stats in the terminal every N steps
      performance_log_every: 500
      device: mps
      # if a trigger word is specified, it will be added to captions of training data if it does not already exist
      # alternatively, in your captions you can add [trigger] and it will be replaced with the trigger word
      # trigger_word: "Craig Hughes"
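      # for example, a caption file could read (illustrative caption, not from the actual training set):
      #   [trigger] wearing a grey hoodie, candid photo, outdoors
      # with trigger_word set, [trigger] would be replaced by "Craig Hughes" at training time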
      network:
        type: "lora"
        linear: 16
        linear_alpha: 16
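        # with standard LoRA scaling, the applied update is scaled by linear_alpha / linear,
        # so 16/16 = 1.0 here; raising linear (the rank) without raising alpha weakens the effective update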
        network_kwargs:
          only_if_contains:
            # https://www.reddit.com/r/StableDiffusion/comments/1fdczqy/flux_fine_tuning_with_specific_layers/ advises
            # using only the proj_out layers from these specific blocks
            - "transformer.single_transformer_blocks.7.proj_out"
            - "transformer.single_transformer_blocks.12.proj_out"
            - "transformer.single_transformer_blocks.16.proj_out"
            - "transformer.single_transformer_blocks.20.proj_out"
            # https://www.reddit.com/r/StableDiffusion/comments/1fdczqy/comment/lmewkbi/ advises some different blocks,
            # but still using only the proj_out layers, and says:
            # "block1 and block19 might not even be needed, but id rather train two extra blocks instead of getting a useless LoRA"
            # - "transformer.single_transformer_blocks.0.proj_out"
            # - "transformer.single_transformer_blocks.1.proj_out"
            # - "transformer.single_transformer_blocks.7.proj_out"
            # - "transformer.single_transformer_blocks.19.proj_out"
            # - "transformer.single_transformer_blocks.20.proj_out"
            # https://www.reddit.com/r/StableDiffusion/comments/1fdczqy/comment/lmgfgyb/ advises a bunch of layers for
            # each block but NOT the proj_out, and doesn't really comment on WHICH blocks...
            # They also have some advice about text layers, but it's not super comprehensible... it sorta sounds like they
            # understand the architecture, but it's entirely possible they're just bragging and don't actually get it
            # - "transformer.single_transformer_blocks.7.proj_mlp"
            # - "transformer.single_transformer_blocks.12.proj_mlp"
            # - "transformer.single_transformer_blocks.16.proj_mlp"
            # - "transformer.single_transformer_blocks.20.proj_mlp"
            # - "transformer.single_transformer_blocks.7.attn.to_q"
            # - "transformer.single_transformer_blocks.12.attn.to_q"
            # - "transformer.single_transformer_blocks.16.attn.to_q"
            # - "transformer.single_transformer_blocks.20.attn.to_q"
            # - "transformer.single_transformer_blocks.7.attn.to_k"
            # - "transformer.single_transformer_blocks.12.attn.to_k"
            # - "transformer.single_transformer_blocks.16.attn.to_k"
            # - "transformer.single_transformer_blocks.20.attn.to_k"
            # - "transformer.single_transformer_blocks.7.attn.to_v"
            # - "transformer.single_transformer_blocks.12.attn.to_v"
            # - "transformer.single_transformer_blocks.16.attn.to_v"
            # - "transformer.single_transformer_blocks.20.attn.to_v"
            # Alternative using the blocks from #2 and the layers from #3
            # - "transformer.single_transformer_blocks.0.proj_mlp"
            # - "transformer.single_transformer_blocks.1.proj_mlp"
            # - "transformer.single_transformer_blocks.7.proj_mlp"
            # - "transformer.single_transformer_blocks.19.proj_mlp"
            # - "transformer.single_transformer_blocks.20.proj_mlp"
            # - "transformer.single_transformer_blocks.0.attn.to_q"
            # - "transformer.single_transformer_blocks.1.attn.to_q"
            # - "transformer.single_transformer_blocks.7.attn.to_q"
            # - "transformer.single_transformer_blocks.19.attn.to_q"
            # - "transformer.single_transformer_blocks.20.attn.to_q"
            # - "transformer.single_transformer_blocks.0.attn.to_k"
            # - "transformer.single_transformer_blocks.1.attn.to_k"
            # - "transformer.single_transformer_blocks.7.attn.to_k"
            # - "transformer.single_transformer_blocks.19.attn.to_k"
            # - "transformer.single_transformer_blocks.20.attn.to_k"
            # - "transformer.single_transformer_blocks.0.attn.to_v"
            # - "transformer.single_transformer_blocks.1.attn.to_v"
            # - "transformer.single_transformer_blocks.7.attn.to_v"
            # - "transformer.single_transformer_blocks.19.attn.to_v"
            # - "transformer.single_transformer_blocks.20.attn.to_v"
      save:
        dtype: float16 # precision to save
        save_every: 250 # save every this many steps
        max_step_saves_to_keep: 12 # how many intermittent saves to keep
        push_to_hub: false # change this to true to push your trained model to Hugging Face.
        # You can either set up an HF_TOKEN env variable or you'll be prompted to log in
        # hf_repo_id: your-username/your-model-slug
        # hf_private: true # whether the repo is private or public
      datasets:
        # datasets are a folder of images. captions need to be txt files with the same name as the image
        # for instance image2.jpg and image2.txt. Only jpg, jpeg, and png are supported currently
        # images will automatically be resized and bucketed into the resolution specified
        # on windows, escape backslashes with another backslash, e.g.
        # "C:\\path\\to\\images\\folder"
        - caption_ext: "txt"
          dataset_path: "/Users/craig/Desktop/crh-training-data/crh"
          caption_dropout_rate: 0.05 # will drop out the caption 5% of the time
          shuffle_tokens: false # shuffle caption order, split by commas
          cache_latents_to_disk: true # leave this true unless you know what you're doing
          resolution: [ 512, 768, 1024 ] # flux enjoys multiple resolutions
          flip_aug: true
          num_workers: 0
      train:
        batch_size: 1
        steps: 4000 # total number of steps to train; 500 - 4000 is a good range
        gradient_accumulation_steps: 1
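        # effective batch size is batch_size * gradient_accumulation_steps = 1 * 1 = 1 per optimizer step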
        train_unet: true
        train_text_encoder: false # probably won't work with flux
        gradient_checkpointing: true # need this on unless you have a ton of vram
        noise_scheduler: "flowmatch" # for training only
        optimizer: "adamw"
        lr: 1.5e-4
        # uncomment this to skip the pre-training sample
        skip_first_sample: true
        # uncomment to completely disable sampling
        # disable_sampling: true
        # uncomment to use new bell-curved weighting. Experimental but may produce better results
        linear_timesteps: true
        # ema will smooth out learning, but could slow it down. Recommended to leave on.
        ema_config:
          use_ema: true
          ema_decay: 0.99
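          # with decay 0.99, each step updates ema_weights = 0.99 * ema_weights + 0.01 * current_weights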
        # will probably need this if gpu supports it for flux, other dtypes may not work correctly
        dtype: bf16
      model:
        # huggingface model name or path
        name_or_path: "black-forest-labs/FLUX.1-dev"
        is_flux: true
        quantize: false # run 8bit mixed precision
        # low_vram: true # uncomment this if the GPU is connected to your monitors. It will use less vram to quantize, but is slower.
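        # quantize is left false here on the assumption that the 8-bit path relies on CUDA-only
        # kernels (e.g. bitsandbytes) that are unavailable on MPS; an assumption, not verified for this toolkit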
      sample:
        sampler: "flowmatch" # must match train.noise_scheduler
        sample_every: 250 # sample every this many steps
        width: 768
        height: 768
        prompts:
          - "Craig Hughes holding a sign that says 'I LOVE PROMPTS!' while a chaotic riot takes place behind him."
          - "A man holding a sign that says 'I LOVE PROMPTS!' while a chaotic riot takes place behind him."
          - "Craig Hughes sitting at a table playing chess with a determined look on his face. He is playing against a Tyrannosaurus who can barely reach the board with his short little arms."
          - "A man sitting at a table playing chess with a determined look on his face. He is playing against a Tyrannosaurus who can barely reach the board with his short little arms."
          - "Craig Hughes standing in a corn field next to a farmer who is riding an old-style cabless tractor. You can see the farmer clearly, and she is wearing a tattered straw hat"
          - "A man standing in a corn field next to a farmer who is riding an old-style cabless tractor. You can see the farmer clearly, and she is wearing a tattered straw hat"
          - "Craig Hughes talking to a pretty woman in a slinky red dress while they both sip cocktails."
          - "A man talking to a pretty woman in a slinky red dress while they both sip cocktails."
        neg: "" # not used on flux
        seed: 42
        walk_seed: true
        guidance_scale: 4
        sample_steps: 20
# you can add any additional meta info here. [name] is replaced with config name at top
meta:
  name: "[name]"
  version: '1.0'
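# To run this config with ai-toolkit (assuming the standard layout, with this file saved as
# config/crh_lora_v1.yaml inside an ai-toolkit checkout):
#   python run.py config/crh_lora_v1.yaml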