These images are straight out of txt2img. No inpainting or editing.
This model is very easy to use, and I have made it very flexible; I have used it to create several different pieces. One caveat: it can be difficult to get it to replicate the same result twice.
I add the LoRA to the prompt as an additional network and then use the trigger phrase "scary alien" in the positive prompt.
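In the AUTOMATIC1111 webui that usually looks something like the line below (the filename "scary_alien" is a placeholder for whatever you saved the LoRA as; adjust the weight to taste):

<lora:scary_alien:1> scary alien, ...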
I usually keep my CFG scale at around 5 and use "hires fix", with the denoising strength at about 0.5.
I get the best results using the epicphotogasm_lastunicorn checkpoint.
I use very simple prompts, e.g. "a scary alien dinosaur with red scales at night, open mouth". I would just start out with "scary alien", see what you get, and go from there. With epicphotogasm_lastunicorn I have occasionally had to use negative prompts such as "nipples, breasts, tits, human", etc., because that checkpoint tends to add human traits to the image. However, if you add a second descriptor such as "scary alien lizard" or "scary alien hiding behind a dumpster", it tends to do much better.
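If you prefer to generate through the API instead of the UI, here is a minimal sketch of the settings above, assuming the AUTOMATIC1111 webui is running locally with the --api flag; field names can differ between versions, and the LoRA filename "scary_alien" is again a placeholder.

```python
import base64
import requests

payload = {
    # LoRA tag plus a simple prompt, as described above
    "prompt": "<lora:scary_alien:1> a scary alien dinosaur with red scales at night, open mouth",
    # only needed when the checkpoint starts adding human traits
    "negative_prompt": "nipples, breasts, tits, human",
    "cfg_scale": 5,               # CFG scale around 5
    "enable_hr": True,            # "hires fix"
    "denoising_strength": 0.5,    # denoising strength at about 0.5
    "override_settings": {
        # checkpoint that gives the best results with this LoRA
        "sd_model_checkpoint": "epicphotogasm_lastunicorn"
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded images; save the first one.
with open("scary_alien.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```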