Enjoy a bunch of stuff that has been working great in dev for a while!
- 2022-11-29

- Diffusers v0.9.0, Stable Diffusion v2.0. Models:

  - `stabilityai/stable-diffusion-2` - trained on 768x768
  - `stabilityai/stable-diffusion-2-base` - trained on 512x512
  - `stabilityai/stable-diffusion-2-inpainting` - untested
  - `stabilityai/stable-diffusion-x4-upscaler` - untested

- `DPMSolverMultistepScheduler`. Docker-diffusers-api is simply a wrapper
  around diffusers. We support all the included schedulers out of the box,
  as long as they can init themselves with default arguments. So, the above
  scheduler was already working, but we didn't mention it before. I'll just
  quote diffusers:

  > `DPMSolverMultistepScheduler` is the 🧨 diffusers implementation of
  > DPM-Solver++, a state-of-the-art scheduler that was contributed by one
  > of the authors of the paper. This scheduler is able to achieve great
  > quality in as few as 20 steps. It's a drop-in replacement for the default
  > Stable Diffusion scheduler, so you can use it to essentially halve
  > generation times.
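As a concrete sketch, a request that selects this scheduler per call might look like the payload below. The `modelInputs` / `callInputs` split and the `SCHEDULER` call input name are assumptions about the request format; check the project docs for the exact names:

```python
import json

# Hypothetical docker-diffusers-api request payload. The field names
# (modelInputs / callInputs / SCHEDULER) are assumptions; consult the
# project docs for the exact request format.
payload = {
    "modelInputs": {
        "prompt": "an astronaut riding a horse on mars",
        # DPM-Solver++ reaches good quality in as few as 20 steps.
        "num_inference_steps": 20,
    },
    "callInputs": {
        "MODEL_ID": "stabilityai/stable-diffusion-2",
        "SCHEDULER": "DPMSolverMultistepScheduler",
    },
}

# This is the JSON body you would POST to the running container.
body = json.dumps(payload)
```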
- Storage Class / S3 support. We now have a generic storage class, which
  allows for special URLs anywhere you can usually specify a URL, e.g.
  `CHECKPOINT_URL`, `dest_url` (after dreambooth training), and the new
  `MODEL_URL` (see below). URLs like `s3:///bucket/filename` will work how
  you expect, but definitely read docs/storage.md to understand the format
  better. Note in particular the triple forward slash (`///`) at the
  beginning to use the default S3 endpoint.
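The triple-slash behaviour falls out of standard URL parsing: in `s3:///bucket/filename`, the authority (endpoint) part between the second and third slash is empty, so the default S3 endpoint is used. Here is a minimal illustrative parser, not the actual storage-class code, with a placeholder default endpoint:

```python
from urllib.parse import urlparse

def parse_s3_url(url, default_endpoint="s3.amazonaws.com"):
    """Split an s3:// URL into (endpoint, bucket, key).

    Illustrative only -- see docs/storage.md for the real format.
    An empty authority (the triple-slash form "s3:///bucket/key")
    falls back to the default endpoint.
    """
    parts = urlparse(url)
    if parts.scheme != "s3":
        raise ValueError("not an s3 URL: " + url)
    endpoint = parts.netloc or default_endpoint
    bucket, _, key = parts.path.lstrip("/").partition("/")
    return endpoint, bucket, key
```

With a custom endpoint, `s3://minio.local:9000/bucket/key` keeps the host, while `s3:///bucket/key` selects the default.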
- Dreambooth training, working but still in development. See
  this forum post for more info.

- `PRECISION` build var, defaults to `"fp16"`; set to `""` to use the model
  defaults (generally fp32).
- `CHECKPOINT_URL` conversion:

  - Crash / stop the build if conversion fails (rather than unclear errors
    later on).
  - Force `cpu` loading even for models that would otherwise default to GPU.
    This fixes certain models that previously crashed in the build stage
    (where no GPU is available).
  - `--extract-ema` on conversion, since these are the more important weights
    for inference.
  - `CHECKPOINT_CONFIG_URL` now lets you specify a specific config file for
    conversion, to use instead of SD's default `v1-inference.yaml`.
- `MODEL_URL`. If your model is already in diffusers format, but you don't
  host it on HuggingFace, you can now have it downloaded at build time. At
  this stage, it should be a `.tar.zst` file. This is an alternative to
  `CHECKPOINT_URL`, which downloads a `.ckpt` file and converts it to
  diffusers.
- `test.py`:

  - New `--banana` arg to run the test on banana. Set environment variables
    `BANANA_API_KEY` and `BANANA_MODEL_KEY` first.
  - You can now add to and override a test's default JSON payload with:
    `--model-arg prompt="hello"` `--call-arg MODEL_ID="my-model"`
  - Support for extra timing data (e.g. dreambooth sends `train` and
    `upload` timings).
  - Quit after inference errors, don't keep looping.
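Conceptually, those flags are key=value pairs merged over the test's default payload. A hypothetical sketch of that merge, not test.py's actual implementation:

```python
def apply_overrides(defaults, model_args=(), call_args=()):
    """Merge --model-arg / --call-arg style "key=value" overrides into
    a copy of a test's default payload. Hypothetical sketch only.
    """
    payload = {
        "modelInputs": dict(defaults.get("modelInputs", {})),
        "callInputs": dict(defaults.get("callInputs", {})),
    }
    for section, args in (("modelInputs", model_args), ("callInputs", call_args)):
        for arg in args:
            # Split on the first "=" and drop surrounding quotes.
            key, _, value = arg.partition("=")
            payload[section][key] = value.strip('"')
    return payload
```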
- Dev: better caching solution. No more unruly `root-cache` directory. See
  CONTRIBUTING.md for more info.
- While the above has been working great for me and others for a while now,
  main branch enjoys wider use, so please do report anything unusual after
  upgrading!