There are detailed changes below the next paragraph; you might need to expand this version changes box!
Experimented with quite a few changes to the training settings and training set. A few of them seem to have stuck, leading to a new version that seems mostly better to me than the last. So here you go. Seems I just can't help myself.
Most of the changes (see the LoRA metadata for more details):
- Updated to a much newer version of the Kohya scripts (~3 months newer)
- Updated training images
  - Removed some lower-quality images, added some new ones
  - Switched to keeping original-ish aspect ratios (slightly cropped and resized to compatible SD 1.5 resolutions) where it makes sense, instead of forcing squares
- Added regularization images to make the trigger tag actually work correctly
  - 1 regularization image per training image
  - Generated by the base model at the same resolution, with the same tags (minus the activation tag)
  - Currently weighted at half loss during training, as they had a bit too much influence otherwise
  - With this, the LoRA now reverts much closer to base model knowledge without the activation tag (which is correct!)
  - The position of the activation tag may matter a bit more now, as training used "keep tokens" to keep the activation token at the front when shuffling
- Normalized repeats to 1 (only using different values if a dataset ever needs balancing) and reset learning rates to defaults
  - Compensated with different epoch settings
- Reverted to training at 128 dims followed by a resize down to 32
  - Results were better across the board, and the resizing removes a bit of noise as a bonus
  - Used no dynamic resize method, as results for sv_fro@0.99 and sv_ratio@20 did not seem different from, or better than, a simple resize
- Added a training warmup of 10%
  - Not sure if this had much impact; might remove it again in the future
- Added "scale weight norms" with value 1 during training
  - Supposedly helps against overfitting and might make LoRAs more compatible with others
- After initial release: used the FreeU extension for example image generation to further improve results
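The aspect-ratio handling mentioned above (slight crop/resize to compatible SD 1.5 resolutions) boils down to snapping each image to dimensions that are multiples of 64 with a pixel count near 512x512. Below is a minimal sketch of that bucketing idea; `bucket_resolution` and its parameters are illustrative, not the exact Kohya bucketing logic:

```python
import math

def bucket_resolution(width, height, target_area=512 * 512, step=64):
    """Map an image size to an SD 1.5-compatible training resolution.

    Keeps the aspect ratio roughly intact while snapping both sides to
    multiples of `step` and keeping the pixel count near `target_area`.
    """
    aspect = width / height
    # Ideal (un-snapped) dimensions with the target area and same aspect ratio.
    ideal_w = math.sqrt(target_area * aspect)
    ideal_h = math.sqrt(target_area / aspect)
    # Snap to the nearest multiple of `step` (at least one step).
    bucket_w = max(step, round(ideal_w / step) * step)
    bucket_h = max(step, round(ideal_h / step) * step)
    return bucket_w, bucket_h

# A square image stays square; a landscape image gets a wider bucket.
print(bucket_resolution(1024, 1024))  # -> (512, 512)
print(bucket_resolution(1920, 1080))
```

The actual trainer then crops each image slightly to match its bucket exactly, rather than distorting it.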
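The "half loss" for regularization images corresponds to a prior-loss weight of 0.5 (Kohya's scripts expose this as `--prior_loss_weight`). A simplified sketch of how that weighting plays out when averaging a batch, using plain floats in place of per-image tensor losses:

```python
def combined_loss(per_sample_losses, is_reg, prior_loss_weight=0.5):
    """Average per-sample losses, down-weighting regularization images.

    per_sample_losses: list of floats (e.g. per-image MSE values).
    is_reg: list of bools, True where the sample is a regularization image.
    """
    weighted = [
        loss * (prior_loss_weight if reg else 1.0)
        for loss, reg in zip(per_sample_losses, is_reg)
    ]
    return sum(weighted) / len(weighted)

# Two training images and two regularization images, all with loss 1.0:
print(combined_loss([1.0, 1.0, 1.0, 1.0], [False, False, True, True]))  # -> 0.75
```

With the weight at 0.5, the regularization set pulls the model back toward base knowledge without dominating the training signal.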
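"Scale weight norms" with value 1 caps how large the trained weights can grow: whenever a module's weight norm exceeds 1, it is rescaled back down. A simplified pure-Python sketch of that clamping idea (Kohya's actual option operates on each LoRA module's combined up/down weights, not a raw matrix like this):

```python
import math

def scale_weight_norms(weights, max_norm=1.0):
    """Clamp a 2-D weight matrix's Frobenius norm to `max_norm`.

    weights: list of lists of floats. Returned unchanged if already
    within the norm budget; otherwise uniformly rescaled.
    """
    norm = math.sqrt(sum(w * w for row in weights for w in row))
    if norm <= max_norm:
        return weights
    scale = max_norm / norm
    return [[w * scale for w in row] for row in weights]

# A matrix with norm 5 gets scaled down to norm 1:
print(scale_weight_norms([[3.0, 4.0]]))  # -> [[0.6, 0.8]]
```

Keeping weight norms bounded like this is one reason the option is said to reduce overfitting and make a LoRA play more nicely alongside others.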
Recommended weight: 1
Training model: Anything V3
Example image generation model: AbyssOrangeMix2 - Hardcore
Well, I did not expect to do another version of this. At all. But as it turns out, I do like to experiment with completely useless things sometimes, taking up a lot of my free time for some reason. Maybe some of my other existing LoRAs will follow yet again, now that I have new settings. Or maybe not, since I'd need to update their training sets and don't have quite as much interest in most of the others. We'll see, we'll see!