I’ve delved deeper into the various methods of finetuning SD lately, which led me to .ckpt merging.
It’s a lot of fun experimenting with it.
Here are some results from merging:
jinofcoolnes/sammod (which is already a merge of three similar models)
The first section shows the base models, each using their trained words (tags).
The second section shows the results of merging with inkpunk as primary, sammod as secondary, and robo as tertiary, using automatic1111’s default settings.
The third section shows the results of merging with sammod as primary, inkpunk as secondary, and robo as tertiary, using automatic1111’s default settings.
It seems the trained words (tags) persist through a merge, which I was surprised and happy about.
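For anyone curious what happens under the hood, below is a minimal sketch of a plain weighted-sum merge of two .ckpt state dicts, which is roughly what automatic1111’s Checkpoint Merger does in its default “Weighted sum” mode (the “Add difference” mode that uses the tertiary model works differently). The file paths, the 0.3 multiplier, and the assumption that the weights live under a “state_dict” key are all illustrative, not taken from these models.

```python
# Minimal weighted-sum merge sketch. Assumptions: the .ckpt files keep their
# weights under a "state_dict" key, and alpha=0.3 mirrors the automatic1111
# default multiplier. This is not the exact code that the UI runs.
import torch

def merge_weighted_sum(primary_path, secondary_path, alpha=0.3, out_path="merged.ckpt"):
    a = torch.load(primary_path, map_location="cpu")["state_dict"]
    b = torch.load(secondary_path, map_location="cpu")["state_dict"]

    merged = {}
    for key, tensor_a in a.items():
        tensor_b = b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape and tensor_a.dtype.is_floating_point:
            # Blend each tensor: (1 - alpha) * primary + alpha * secondary.
            # Every tensor, including the text encoder weights, gets blended,
            # which is presumably why the trained tag words still work after a merge.
            merged[key] = (1.0 - alpha) * tensor_a + alpha * tensor_b
        else:
            # Keys that exist only in the primary model (or mismatch) are kept as-is.
            merged[key] = tensor_a

    torch.save({"state_dict": merged}, out_path)

# Hypothetical usage with placeholder filenames:
# merge_weighted_sum("inkpunk.ckpt", "sammod.ckpt", alpha=0.3, out_path="inkpunk-sammod.ckpt")
```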
I’m working on a basic repo to get a Hugging Face Space powered by docker-diffusers-api up and running to share some finetunes and merges. Hugging Face’s current cheapest GPU plan for a Space is $0.60/hr for the “Small T4” tier, so I think docker-diffusers-api could be very useful to the finetune community.
Awesome, @coffeeorgreentea! Thanks so much for sharing this. I love seeing what people in the community are up to, and especially learning how to make docker-diffusers-api more useful for specific communities.
At the end of the day, docker-diffusers-api is just a fancy wrapper around diffusers. So for “native” checkpoint merging that doesn’t require external apps or conversion to/from .ckpt every time, we need support in the library itself. Your timing is phenomenal, as it looks like there’s great headway happening in this area right now:
I think as soon as their PR comes in, I can add support to docker-diffusers-api quite quickly, time allowing… or you could, if you’re comfortable doing so (at the very least, though, I’d like to design a general way to use community pipelines).
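For reference, my rough understanding is that once that work lands, a merge could be done natively through a diffusers community pipeline along these lines. The custom_pipeline name, the merge() method, its interp/alpha arguments, and the model IDs below are all assumptions about the in-progress work rather than a confirmed API:

```python
# Sketch of "native" checkpoint merging via a diffusers community pipeline,
# with no .ckpt round-tripping. The pipeline name ("checkpoint_merger"), the
# merge() signature, and the interp/alpha options are assumptions, not a
# confirmed API.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",      # placeholder base model
    custom_pipeline="checkpoint_merger",  # assumed community pipeline name
)

# Merge two diffusers-format repos directly.
merged = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",  # assumed interpolation option
    alpha=0.4,
)
merged.save_pretrained("./merged-model")
```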
In any event, please keep us posted with your experiment results. It’s great to keep up with all the cool stuff everyone is doing, and thanks for posting it here on the forums where it’s a little easier to keep track of individual topics.