Stable Diffusion XL
With Auto 1111 SDK, using Stable Diffusion XL is straightforward. We have a dedicated pipeline for it because SDXL requires different flags from the regular Stable Diffusion pipeline.
First, download a Stable Diffusion XL safetensors checkpoint from Civit AI. You can use Auto 1111 SDK's dedicated Civit downloader to do this:
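A minimal sketch of this step using the `civit_download` helper that Auto 1111 SDK exports; the model URL and output filename below are placeholders, so substitute the Civit AI page of the SDXL checkpoint you want:

```python
from auto1111sdk import civit_download

# Placeholder Civit AI model page URL and local output path for the
# .safetensors checkpoint -- replace both with your own values.
civit_url = "https://civitai.com/models/<model-id>"
local_path = "sdxl_model.safetensors"

civit_download(civit_url, local_path)
```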
Import the Stable Diffusion XL pipeline and run the generation:
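A sketch of a basic text-to-image run, assuming the checkpoint path from the previous step; the prompt, resolution, and step count are example values, and the `generate_txt2img` parameter names follow the SDK's text-to-image method, so adjust them to match the version you have installed:

```python
from auto1111sdk import StableDiffusionXLPipeline

# Point the dedicated SDXL pipeline at the local safetensors checkpoint.
pipe = StableDiffusionXLPipeline("sdxl_model.safetensors")

prompt = "a photo of an astronaut riding a horse on mars"  # example prompt
output = pipe.generate_txt2img(prompt=prompt, height=1024, width=1024, steps=20)

# generate_txt2img returns a list of PIL images; save the first one.
output[0].save("image.png")
```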
One problem with these safetensors checkpoints in the original Automatic 1111 Web UI is that SDXL is only natively supported in the float32 data type. This makes generation extremely memory-intensive and slow, even on powerful systems. To address this, Auto 1111 SDK lets you set a custom float16 VAE and pass custom flags. Here's how:
Go to this Hugging Face model page and download the float16 VAE file: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
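If you prefer to fetch the file programmatically, here is a hedged sketch using `huggingface_hub`; the filename `sdxl_vae.safetensors` is an assumption, so verify it against the files listed on the model page:

```python
from huggingface_hub import hf_hub_download

# Download the fp16-fixed SDXL VAE. The filename below is an assumption --
# check the repo's file list on the model page and adjust if needed.
vae_path = hf_hub_download(
    repo_id="madebyollin/sdxl-vae-fp16-fix",
    filename="sdxl_vae.safetensors",
    local_dir=".",
)
print(vae_path)
```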
Define the custom pipe with the following flags:
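A sketch of what this can look like; the keyword name (`default_command_args` here) and the flag string are assumptions about how the SDK mirrors Automatic 1111 Web UI command-line arguments, so confirm both against the SDK reference for your installed version:

```python
from auto1111sdk import StableDiffusionXLPipeline

# Example only: pass Web UI style command-line flags when constructing the
# pipeline. "--medvram" trades some speed for much lower VRAM usage and
# "--opt-sdp-attention" selects a faster attention implementation; the
# keyword name and flags are assumptions -- check the SDK docs.
pipe = StableDiffusionXLPipeline(
    "sdxl_model.safetensors",
    default_command_args="--medvram --opt-sdp-attention",
)
```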
Set the custom VAE file you downloaded:
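A sketch of the VAE swap followed by a re-run of the generation; the method name `set_vae` is an assumption about the SDK's API, so confirm the exact call in your installed version:

```python
# Point the pipeline at the float16 VAE downloaded from Hugging Face.
# The method name is an assumption -- check the SDK reference.
pipe.set_vae("sdxl_vae.safetensors")

prompt = "a photo of an astronaut riding a horse on mars"  # example prompt
output = pipe.generate_txt2img(prompt=prompt, height=1024, width=1024, steps=20)
output[0].save("image.png")
```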
Now, the generation time drops from 1 minute 10 seconds to just 22 seconds, and memory/VRAM usage is significantly lower. The generation parameters set above produced the output below: