It turns out that training with a specific version of XFormers produces a LoRA that either doesn't activate or activates at random, which is why the previous versions were a bit iffy. Now, with a new card (3060, 12GB) and a polished set of sample images, I managed to train a LoRA that passes all tests (at least on my end). It was trained on 65 images at 1024px, so it might produce weird results at resolutions below 768.
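If you want to rule out the same issue in your own setup, it helps to record which XFormers build your trainer is actually using. A minimal sketch, assuming XFormers is installed in the training environment (no particular version is implied, since the problematic one isn't named above):

```python
# Print the installed xformers version so you can note which build
# a LoRA was trained with. The version string itself is environment-
# specific; this only reports what is currently installed.
import xformers

print(xformers.__version__)
```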