Ah, my bad: it works well with the Euler a scheduler, but not with DDIM.
Oh that’s really interesting!! Thanks for reporting this. Clearly we’ll need to do some more experimentation to work out the best results. AUTOMATIC1111’s repo uses Euler by default I think, right?
Thanks for all your testing and feedback!
I think I found why there’s a difference between the webui and this repo.
When testing without emphasis syntax, the results are the same.
But with emphasis syntax like (best quality:1.3), (masterpiece:1.3), (ultra-detailed:1.2), the results differ. It works well in the webui, but not in this repo.
For example, with my custom model:
the prompt “best quality” gives the same result,
but with the prompt (best quality):
webui result:
this repo result:
I think there’s a difference in how the text is analyzed. Since many people use the webui as the standard, I think it’s better to change this repo so that the same text gives the same result as in the webui. I’m gonna look at how I can change it tomorrow! Thank you
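The parsing difference described above can be sketched in a few lines. This is a hand-rolled illustration of the general idea behind AUTOMATIC1111-style emphasis syntax, not the webui’s actual code (the real parser also handles nesting and escapes, and the function name here is my own):

```python
import re

# Simplified, flat (non-nested) sketch of webui-style emphasis parsing:
#   (text)      -> weight x1.1
#   (text:1.3)  -> explicit weight 1.3
#   [text]      -> weight /1.1
TOKEN = re.compile(
    r"\(([^():]+):([\d.]+)\)"   # (text:weight)
    r"|\(([^()]+)\)"            # (text)
    r"|\[([^\[\]]+)\]"          # [text]
    r"|([^()\[\]]+)"            # plain text
)

def parse_emphasis(prompt):
    """Split a prompt into (text, weight) pairs."""
    pieces = []
    for m in TOKEN.finditer(prompt):
        explicit_text, explicit_weight, up, down, plain = m.groups()
        if explicit_text is not None:
            pieces.append((explicit_text, float(explicit_weight)))
        elif up is not None:
            pieces.append((up, 1.1))
        elif down is not None:
            pieces.append((down, 1 / 1.1))
        else:
            pieces.append((plain, 1.0))
    return pieces
```

In the standard diffusers pipeline the parentheses are passed to CLIP as literal text instead of being parsed like this, which would explain why (best quality) changes the output differently than in the webui.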
Thanks for posting this, @hihihi! It’s really helpful to see with pics.
You’re right, this type of grammar has no meaning in the standard diffusers pipelines. However, there is a diffusers community pipeline that supports it. I have a flight coming up next week, but let’s see if I can try to integrate it before then. I’ve been wanting to do the same for https://kiri.art/ for some time now, so it’s nice to have a push.
@hihihi, something for you to play with tomorrow
P.S., there’s no good way to run AUTOMATIC1111 webui as a serverless process. However, it would be possible to extract parts of it (and / or other SD projects), and it is indeed a path I’ve considered numerous times before. But in the end, diffusers always catches up, and there is much wider development happening there. So I’ve stopped looking into those other solutions and am focusing all my efforts here too, and so far, every time my patience has paid off.
The lpw_stable_diffusion pipeline works well!
If I get the chance, I’ll try other community pipelines too.
It returns slightly different results compared to the webui, but it’s not a big deal.
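For anyone else following along, loading the long-prompt-weighting community pipeline looks roughly like this, assuming the `custom_pipeline` mechanism in current diffusers; the model id is just an example, and this needs a GPU plus a model download, so treat it as a sketch rather than a tested snippet:

```python
import torch
from diffusers import DiffusionPipeline

# Example model id; any SD 1.x checkpoint should work.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",  # long-prompt-weighting community pipeline
    torch_dtype=torch.float16,
).to("cuda")

# webui-style emphasis now actually affects the text embeddings:
image = pipe("(best quality:1.3), (masterpiece:1.3), a cat").images[0]
```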
Maybe the webui uses latent diffusion?
This is the message shown when the webui loads:
Also, the webui has the DPM++ 2M Karras scheduler, which performs great, but Hugging Face diffusers doesn’t have it.
Is there a way to add this scheduler to the repo?
It’s not an important thing, because I can use other schedulers.
Does this repo also work with safetensors instead of ckpt?
It seems safetensors is much faster, so I’m gonna try safetensors with my custom model.
Great news!! Thanks for reporting back (and so quickly!). It’s fun to squeeze in a new feature before bed and wake up to usage feedback already.
Not possible yet without modifying the code (but all you have to do is add the name of the pipeline here in app.py). This is going to change so that instead of initializing all pipelines at load time, they’re only initialized (and cached) when they’re first used. I’ll also have better error handling for misspelled / unavailable pipelines, but that’s a little further down the line.
Looks like it does indeed, but I’m not sure where or for what. I see diffusers has latent diffusion support, but it’s not specific to the stable diffusion pipelines. Maybe you can look into this more and report back.
Unfortunately, adding schedulers is quite difficult… but if you manage, I’m sure the whole diffusers community will love you. I don’t really understand the differences between all the schedulers; however, there’s a nice comparison here:
And also, did you see the DPMSolverMultistepScheduler that’s been in diffusers for about two weeks (and works just fine in docker-diffusers-api)? I’m not sure if or how exactly it’s related to DPM++ 2M Karras, but you get excellent results in just 20 steps!! (Same quality as 50 steps on the older schedulers.)
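Swapping it into a plain diffusers pipeline looks something like the sketch below; the model id is just an example and this needs a GPU plus a model download, so it’s illustrative rather than tested here:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model id
    torch_dtype=torch.float16,
).to("cuda")

# Reuse the existing scheduler config so betas etc. match the model.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# ~20 steps is usually enough with this scheduler.
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```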
Not yet, but I indeed have some stuff planned here! I just wish Banana had on-prem S3-compatible storage. I’m looking forward to seeing how this compares to their current optimization stuff… the only thing is, there’s no GPU in Banana’s build stage (their optimization step transfers the built docker image to different machines to do the optimization), so we’ll have to get creative here… but I’m up to the challenge.
Thank you very much! I will test DPMSolverMultistepScheduler!
By the way, I’m building images on Banana that work well on the GPU server, but optimization hasn’t finished after 6 hours. It seems there’s some problem on Banana’s side right now.
Do you experience the same?
DPMSolverMultistepScheduler also gives cool results in my tests.
Haven’t tried recently, but optimization has been a big and constant pain point for me. I plan to experiment with some homegrown alternatives and, if it works, hope we’ll get a way to opt out of Banana’s optimization completely for faster builds. But do make sure you report it on the Discord if you haven’t already, even if others have too… Also, it can be worth pushing a new dummy commit to trigger a rebuild; sometimes (but not always) it will just start working again on its own (or after they fix something that didn’t affect existing stuck builds).
A post was split to a new topic: Adieyal/sd-dynamic-prompts: A custom script for AUTOMATIC1111
A post was split to a new topic: KeyError: ‘global_step’ in File “[…]convert_[sd]_to_diffusers.py”, line 799, in global_step = checkpoint[“global_step”]
- Dreambooth models now saved with safetensors
- Loading of safetensors models works great
I still have more work planned with safetensors, and will post more next week, hopefully with timing comparisons too.
That’s cool! You’re awesome. Thank you very much!
My pleasure. If you’ve done any speed tests, let us know; I haven’t had a chance yet (but I do have this planned… just working on a few other related fun things).
Also, I missed it before, but in the latest dev commit I’ve set TENSORS_FAST_GPU=1, which should result in even faster loads.
WARNING: Image Optimization Failed - cold boots may be slow
Very short-lived, you were lucky!