Repeats: 20
Epoch: 10
Image Count: 12
Network Rank (Dimension): 16
Network Alpha: 8
Batch size: 1
Clip skip: 1
Training steps: 2400
Resolution: 576x832
Train Resolution: 1024,1024
LR Scheduler: cosine_with_restarts
Optimizer: AdamW8bit
Learning rate: 0.0005
Text Encoder learning rate: 0.00005
Unet learning rate: 0.0005
Captioning: WD14 Captioning (SmilingWolf/wd-v1-4-moat-tagger-v2) + manual caption edits
Recommended LoRA Strength (Weight): 0.7-0.8
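For reference, the sketch below is a minimal, hypothetical Python helper that checks the step arithmetic from the settings above (20 repeats x 12 images x 10 epochs / batch size 1 = 2400 steps) and assembles an equivalent kohya_ss sd-scripts (train_network.py) command line. All paths are placeholders, and the flag names assume current sd-scripts conventions, so verify them against your installed version before running anything.

    # Sketch only: verify the step count and print an sd-scripts style command
    # built from the settings listed above. Paths are placeholders; flag names
    # assume kohya_ss sd-scripts conventions (check your local version).
    repeats, images, epochs, batch_size = 20, 12, 10, 1
    steps = repeats * images * epochs // batch_size
    assert steps == 2400, f"unexpected step count: {steps}"

    cmd = [
        "accelerate", "launch", "train_network.py",
        "--pretrained_model_name_or_path", "/path/to/base_model.safetensors",  # placeholder
        "--train_data_dir", "/path/to/dataset",                                # e.g. a 20_<name> folder for 20 repeats
        "--output_dir", "/path/to/output",                                     # placeholder
        "--network_module", "networks.lora",
        "--network_dim", "16",
        "--network_alpha", "8",
        "--train_batch_size", str(batch_size),
        "--max_train_epochs", str(epochs),
        "--resolution", "1024,1024",
        "--clip_skip", "1",
        "--lr_scheduler", "cosine_with_restarts",
        "--optimizer_type", "AdamW8bit",
        "--learning_rate", "0.0005",
        "--unet_lr", "0.0005",
        "--text_encoder_lr", "0.00005",
    ]
    print(" ".join(cmd))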