[HIDREAM] LoRA
Trained with diffusion-pipe on HiDream-I1-Full
1480 steps on an RTX 4090 in ~7 hours
Previews generated with the ComfyUI HiDream dev workflow: ComfyUI_examples/hidream/#hidream-dev-workflow
The LoRA is loaded with the LoraLoaderModelOnly node, on top of the HiDream dev fp8 model: hidream_i1_dev_fp8.safetensors
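The previews above were made in ComfyUI, but the LoRA can also be applied from a script. Below is a minimal, untested diffusers sketch, assuming a recent diffusers release with HiDreamImagePipeline and HiDream LoRA-loading support; the model IDs, LoRA path, prompt, and sampling settings are placeholders, not part of this training run:

import torch
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast
from diffusers import HiDreamImagePipeline

# HiDream uses Llama 3.1 8B Instruct as one of its text encoders.
llama_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model ID
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained(llama_id)
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    llama_id,
    output_hidden_states=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Assumes the pipeline exposes standard LoRA loading; path is a placeholder.
pipe.load_lora_weights("/path/to/this_lora.safetensors")

image = pipe(
    "your prompt here",          # placeholder prompt
    height=1024,
    width=1024,
    num_inference_steps=28,      # the distilled dev model uses fewer steps than Full
    guidance_scale=0.0,          # dev is guidance-distilled
).images[0]
image.save("preview.png")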
config.toml
output_dir = '/mnt/d/hidream/training_output'

# Dataset config file.
dataset = 'dataset-hidream.toml'

# Training settings
epochs = 50
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 4
gradient_clipping = 1.0
warmup_steps = 100
blocks_to_swap = 20

# eval settings
eval_every_n_epochs = 5
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1

# misc settings
save_every_n_epochs = 10
checkpoint_every_n_minutes = 30
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
steps_per_print = 1
video_clip_mode = 'single_beginning'

[model]
type = 'hidream'
diffusers_path = '../hidream-full'
llama3_path = '../llama-3.1'
llama3_4bit = true
dtype = 'bfloat16'
transformer_dtype = 'nf4'
max_llama3_sequence_length = 256
flux_shift = true

[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'

[optimizer]
type = 'adamw_optimi'
lr = 5e-5
betas = [0.9, 0.99]
weight_decay = 0.02
eps = 1e-8
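For reference, a diffusion-pipe run with this config is launched through DeepSpeed from the repo root (single-GPU invocation per the diffusion-pipe README; adjust the config path to wherever you saved it):

deepspeed --num_gpus=1 train.py --deepspeed --config config.toml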
dataset.toml
# Resolution settings.
resolutions = [1024]

# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7

[[directory]] # IMAGES
# Path to the directory containing images and their corresponding caption files.
path = '/mnt/d/huanvideo/training_data/images'
num_repeats = 5
resolutions = [1024]
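Note on dataset layout: diffusion-pipe reads the caption for each image from a .txt file with the same basename in the same directory. An illustrative layout (filenames are examples only):

training_data/images/
  photo_001.png
  photo_001.txt   # caption for photo_001.png
  photo_002.jpg
  photo_002.txt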