# Passing in custom arguments

Just like the Automatic 1111 Web UI, Auto 1111 SDK lets you pass custom command-line arguments to the SDK. When you initialize a pipeline, set the flags you want for that pipeline in its constructor via the `default_command_args` parameter. You can view all the flags that A1111 supports here: <https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings#all-command-line-arguments>. These flags are especially useful when using a float-16 VAE with SDXL (see the docs for an example). For example, to run generation with `--medvram`, I would do:

```python
from auto1111sdk import StableDiffusionPipeline

pipe = StableDiffusionPipeline("model.safetensors", default_command_args = "--medvram")
```
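Multiple flags go into one space-separated string, just as they would on the Web UI command line. A minimal stdlib-only sketch of assembling such a string (the `build_args` helper is hypothetical, not part of Auto 1111 SDK):

```python
def build_args(*flags: str) -> str:
    """Join individual command-line flags into the single string
    that default_command_args expects (hypothetical helper)."""
    return " ".join(flags)

args = build_args("--medvram", "--no-half-vae")
print(args)  # --medvram --no-half-vae
# pipe = StableDiffusionPipeline("model.safetensors", default_command_args=args)
```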

You can do this for any of the pipelines in Auto 1111 SDK:

```python
from auto1111sdk import StableDiffusionPipeline, EsrganPipeline, RealEsrganPipeline, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline("model.safetensors", default_command_args = "--medvram")
upscaler_1 = EsrganPipeline("upscaler.pth", default_command_args = "<any flags you want here>")
upscaler_2 = RealEsrganPipeline("upscaler2.pth", default_command_args = "<any flags you want here>")
```

By default, Auto 1111 SDK sets the following flags:

```python
## For StableDiffusionPipeline, EsrganPipeline, RealEsrganPipeline
if default_command_args is None:
    if torch.cuda.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--upcast-sampling --skip-torch-cuda-test --no-half-vae interrogate"
    elif torch.backends.mps.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
    else:
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --no-half-vae --no-half interrogate"
else:
    os.environ['COMMANDLINE_ARGS'] = default_command_args
```
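The fallback order above can be sketched as a plain function, which makes the precedence explicit: a user-supplied `default_command_args` always wins, otherwise the SDK picks a default based on the available backend. This is a stdlib-only sketch (the `resolve_command_args` function is hypothetical; the flag strings are copied from the defaults shown above):

```python
import os

def resolve_command_args(default_command_args, cuda_available, mps_available):
    """Mirror of the default-flag selection shown above (a sketch, not SDK code)."""
    if default_command_args is not None:
        return default_command_args  # user-supplied flags always take precedence
    if cuda_available:
        return "--upcast-sampling --skip-torch-cuda-test --no-half-vae interrogate"
    if mps_available:
        return "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
    return "--skip-torch-cuda-test --no-half-vae --no-half interrogate"

# Passing explicit flags overrides every backend-specific default:
os.environ['COMMANDLINE_ARGS'] = resolve_command_args("--medvram", True, False)
print(os.environ['COMMANDLINE_ARGS'])  # --medvram
```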

And for Stable Diffusion XL Pipeline, Auto 1111 SDK sets the following flags by default:

```python
if default_command_args is None:
    if torch.cuda.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--no-half-vae --no-half --medvram"
#       os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --medvram" # makes it faster with fp16 vae
    elif torch.backends.mps.is_available():
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
    else:
        os.environ['COMMANDLINE_ARGS'] = "--skip-torch-cuda-test --no-half-vae --no-half interrogate"
else:
    os.environ['COMMANDLINE_ARGS'] = default_command_args
```

