Repeats: 10
Epoch: 5
Image Count: 50
Network Rank (Dimension): 8
Network Alpha: 4
Batch size: 1
Clip skip: 1 (None)
Training steps: 2500
Resolution: 1024x768
Train Resolution: 1024,1024
Model: noobaiXLNAIXL_epsilonPred10Version
LR Scheduler: cosine_with_restarts
Optimizer: AdamW8bit
Learning rate: 0.0005
Text Encoder learning rate: 0.00005
Unet learning rate: 0.0005
Captioning: WD14 Captioning (SmilingWolf/wd-v1-4-moat-tagger-v2) + Editing
Recommended LoRA Strength (Weight): 0.8-0.9 (occasionally 1.0), especially when not combined with other LoRAs
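Assuming these settings come from kohya-ss sd-scripts (a common toolchain for WD14-tagged SDXL LoRA training; the paths, folder names, and script choice below are assumptions, not confirmed by the source), they map to a command roughly like this:

```shell
# Hypothetical kohya-ss sd-scripts invocation sketching the settings above.
# Repeats (10) are encoded in the dataset folder name, e.g. train/10_mysubject,
# so 10 repeats x 50 images x 5 epochs / batch 1 = 2500 training steps.
# Clip skip 1 is the default, so no flag is needed for it.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path "noobaiXLNAIXL_epsilonPred10Version.safetensors" \
  --train_data_dir "train" \
  --output_dir "output" \
  --resolution "1024,1024" \
  --network_module networks.lora \
  --network_dim 8 \
  --network_alpha 4 \
  --train_batch_size 1 \
  --max_train_epochs 5 \
  --learning_rate 0.0005 \
  --unet_lr 0.0005 \
  --text_encoder_lr 0.00005 \
  --lr_scheduler cosine_with_restarts \
  --optimizer_type AdamW8bit
```

Setting network alpha to half the network dimension (4 vs. 8) effectively halves the LoRA update scale, which is a common choice to stabilize training at this learning rate.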