Just like the Automatic 1111 Web UI, Auto 1111 SDK lets you pass in custom command-line arguments. When you initialize a pipeline, set the flags you want for that pipeline in the constructor via the `default_command_args` parameter. You can view all the flags that A1111 supports here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings#all-command-line-arguments. These flags are especially useful when using a float16 VAE with SDXL (see the docs for an example). For example, to run generation with `--medvram`:
```python
from auto1111sdk import StableDiffusionPipeline

pipe = StableDiffusionPipeline("model.safetensors", default_command_args="--medvram")
```
You can do this for any of the pipelines in Auto 1111 SDK:
```python
from auto1111sdk import StableDiffusionPipeline, EsrganPipeline, RealEsrganPipeline, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline("model.safetensors", default_command_args="--medvram")
upscaler_1 = EsrganPipeline("upscaler.pth", default_command_args="<any flags you want here>")
upscaler_2 = RealEsrganPipeline("upscaler2.pth", default_command_args="<any flags you want here>")
```
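Once a pipeline is constructed with your flags, you use it exactly as usual. A minimal end-to-end sketch, assuming the `generate_txt2img` method from the SDK's main usage docs and a local `model.safetensors`:

```python
from auto1111sdk import StableDiffusionPipeline

# Flags are fixed at construction time, so pass them before generating.
pipe = StableDiffusionPipeline("model.safetensors", default_command_args="--medvram")

# generate_txt2img returns a list of PIL images (per the SDK's usage docs).
output = pipe.generate_txt2img(prompt="a photo of a cat", height=512, width=512, steps=10)
output[0].save("image.png")
```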
By default, Auto 1111 SDK sets the following flags:
```python
# For StableDiffusionPipeline, EsrganPipeline, RealEsrganPipeline
if default_command_args is None:
    if torch.cuda.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--upcast-sampling --skip-torch-cuda-test --no-half-vae interrogate"
    elif torch.backends.mps.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
    else:
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --no-half-vae --no-half interrogate"
else:
    os.environ['COMMANDLINE_ARGS'] = default_command_args
```
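Because the SDK applies these flags through the `COMMANDLINE_ARGS` environment variable (as shown above), you can read back what was actually set after constructing a pipeline. A small sketch:

```python
import os
from auto1111sdk import StableDiffusionPipeline

# Construct with the library defaults (default_command_args omitted) ...
pipe = StableDiffusionPipeline("model.safetensors")

# ... then inspect the COMMANDLINE_ARGS value the SDK populated.
# On a CUDA machine this should print the CUDA branch shown above.
print(os.environ.get('COMMANDLINE_ARGS'))
```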
And for the `StableDiffusionXLPipeline`, Auto 1111 SDK sets the following flags by default:
```python
if default_command_args is None:
    if torch.cuda.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--no-half-vae --no-half --medvram"
        # os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --medvram"  # makes it faster with an fp16 VAE
    elif torch.backends.mps.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
    else:
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --no-half-vae --no-half interrogate"
else:
    os.environ['COMMANDLINE_ARGS'] = default_command_args
```
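As the commented-out line above suggests, SDXL generation can be faster with a float16 VAE. A sketch of overriding the defaults to that configuration, assuming you have paired the model with an fp16 VAE as described in the docs:

```python
from auto1111sdk import StableDiffusionXLPipeline

# Override the SDXL defaults with the fp16-VAE-friendly flags from the
# commented-out line above (--no-half-vae/--no-half are dropped on purpose).
pipe = StableDiffusionXLPipeline("model.safetensors",
                                 default_command_args="--skip-torch-cuda-test --medvram")
```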