LoRA fine-tuning

Is it possible to determine from a .safetensors file what version of Stable Diffusion was used? This could be a useful feature when that information is unknown, or a convenience when users are uploading their own files.

Unfortunately not, at least not from a plain .safetensors or .bin file. However, when the LoRA is saved with diffusers, you get an adapter_config.json which stores this as base_model_name_or_path (example); I’m not sure if that’s in the current diffusers release yet or still being worked on. See the sketch after the caveats below.

Caveats:

  • Could contain a directory name instead of a HuggingFace user/repo id.
  • User might choose to send just the safetensors/bin without the other data.
  • If accepting files from users, make sure to only accept safetensors (as the bin files can contain arbitrary code).
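
If the adapter_config.json is there, reading the base model out of it is simple. Here’s a minimal sketch with the caveats above in mind (the helper name is mine, not part of any library):

import json
from pathlib import Path

def base_model_from_lora(lora_dir: str):
    """Best-effort lookup of the base model a LoRA was trained against."""
    config_path = Path(lora_dir) / "adapter_config.json"
    if not config_path.exists():
        return None  # a plain .safetensors/.bin has no metadata to read
    with config_path.open() as f:
        config = json.load(f)
    # Caveat: this may be a local directory name rather than a "user/repo" id
    return config.get("base_model_name_or_path")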

File download code is coming along nicely but still needs a bit more work. I couldn’t publish the current dev because there was no capacity on Lambda to run the prerequisite integration tests :sweat_smile: HTTP downloads work for single files (so the S3 code probably does too); the code to download archives (.tar.zst for the diffusers format, etc.) still needs some work. More updates soon.
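
For reference, unpacking a .tar.zst archive of a diffusers model directory is straightforward. A minimal sketch using the zstandard package (the paths are made up):

import tarfile
import zstandard

def extract_tar_zst(archive_path: str, dest_dir: str) -> None:
    """Stream-decompress a .tar.zst archive into dest_dir."""
    dctx = zstandard.ZstdDecompressor()
    with open(archive_path, "rb") as f:
        with dctx.stream_reader(f) as reader:
            # mode="r|" reads the tar as a non-seekable stream
            with tarfile.open(fileobj=reader, mode="r|") as tar:
                tar.extractall(dest_dir)

extract_tar_zst("model.tar.zst", "./model")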

You might like knowing that Banana is running on 40GB GPUs now. They haven’t officially announced it yet, but I am a special agent who infiltrated their enterprise.
So you might be able to run your tests on Banana.

Iiiinteresting. We are very honoured to have such an esteemed special agent in the forums here! :smiley:

That will open up a good bunch of fun possibilities, I think. But for the automated tests, it’s nice to have a full system where we can quickly relaunch the container with different environment variables, host our own S3-compatible storage, etc. On the whole it works pretty well, but currently I manually specify the GPU type and geographical location in the script; I need to make this more flexible in the future for when the requested system is not available.

In any event:

  1. Bumped diffusers to the latest version.
  2. Fixed a bug with model downloads that showed up in the automated tests.
  3. Tested S3 LoRA downloads locally, and it indeed works as expected (since it goes through our “storage” library anyway).
  4. Skipping the “archive” code for now, since I’m not sure the diffusers team has settled on a final format for it yet. However, when I write the LoRA training code, I’ll of course make sure that we can save/load archives of the data without going through HuggingFace.

Automated tests are running now (should be done in about 10m but I need to go), and assuming those pass as expected, there’ll be a new :dev release that you can experiment with. Hopefully it will work the first time; let me know if you hit any issues or have feedback. Thanks! :raised_hands:

Could you demonstrate for us how exactly to load a LoRA model and do inference?

Yeah, sure. I gave an example using test.py above, but the JSON equivalent would be:

{
   "callInputs": {
     // Typical, common options, but
     // MODEL_ID should match the base model that was fine-tuned with LoRA
     "MODEL_ID": "runwayml/stable-diffusion-v1-5",
     "MODEL_PRECISION": "fp16",
     "MODEL_REVISION": "fp16",
     // Specify the LoRA model
     "attn_procs": "patrickvonplaten/lora_dreambooth_dog_example"
   },
   "modelInputs": {
     "prompt": "A picture of a sks dog in a bucket",
     "seed": 1, // To get the same pictures as above
     // Optional: interpolation of LoRA with base model; 0.0 to 1.0 (default)
     "cross_attention_kwargs": { "scale": 0.5 }
   }
}

The “new” options are the attn_procs callInput, and then the ability to (optionally) tell the model how to use those weights with the cross_attention_kwargs modelInput. It will download the LoRA at runtime (from HuggingFace in the above example, but an http or s3 URL can be given too for a .bin file; .safetensors support in diffusers is coming soon).
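
Under the hood, that maps to roughly the following diffusers calls. This is a minimal sketch, not our exact container code, and assumes the current diffusers API:

import torch
from diffusers import StableDiffusionPipeline

# MODEL_ID / MODEL_REVISION / MODEL_PRECISION from callInputs
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# attn_procs: load the LoRA weights into the UNet's attention processors
pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example")

# cross_attention_kwargs.scale interpolates LoRA vs. base weights (0.0 to 1.0)
image = pipe(
    "A picture of a sks dog in a bucket",
    generator=torch.Generator("cuda").manual_seed(1),  # seed
    cross_attention_kwargs={"scale": 0.5},
).images[0]
image.save("dog.png")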

Don’t hesitate to ask about anything that’s not clear so we can get nice, super clear docs for everyone :smiley: This is still a little new for diffusers too, so things could change, but currently that’s how it all works.

Meh, it has to be a bin file.
What needs to be done to get my self-hosted safetensors to work? Can I help in any way?

We just need the safetensors support from diffusers… it shouldn’t need any more changes in docker-diffusers-api (which can already download the necessary files), and it should “just work” as soon as they have the support on their side and I bump the version.

The PR I linked to previously has the code done and approved, but I think it’s still waiting for final feedback from the team before they merge it. It looks pretty close; I’d guess we’re a few days away, give or take :tada:

I was wrong about the above :frowning:

Well, actually, it depends.

  1. The :dev release has the latest diffusers, with the safetensors support, and a workaround for the regression when loading non-safetensors files, BUT:

  2. I couldn’t get it to work with LoRAs from CivitAI :confused: It seems there are a few different formats a LoRA can be in (even within safetensors), and diffusers can’t load them all. There are some issues open for this, but the direction isn’t clear yet, at least as far as I could see. (See the key-inspection sketch after this list.)

  3. However, I recall your LoRA was trained on a Colab somewhere… so it’s possible it might work, depending on the format. Don’t get your hopes up, but it’s worth a shot.
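
If you want to check which format a given LoRA file is actually in, peeking at the tensor key names is usually enough. A quick diagnostic sketch (the filename is just an example, and the key prefixes are what I’ve seen in the wild, so treat this as a heuristic):

from safetensors.torch import load_file

state_dict = load_file("my_lora.safetensors")
keys = list(state_dict.keys())
print(keys[:5])

# kohya-ss/CivitAI-style LoRAs tend to prefix keys with "lora_unet_" /
# "lora_te_", while diffusers attn_procs use keys like
# "...attn2.processor.to_q_lora.down.weight".
if any(k.startswith(("lora_unet_", "lora_te_")) for k in keys):
    print("kohya-ss style LoRA; diffusers can't load this yet")
elif any(".to_q_lora." in k for k in keys):
    print("diffusers attn_procs style LoRA; should load fine")
else:
    print("unknown format")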

Important note: it will load as safetensors only if the filename in the URL includes ".safetensors"; otherwise, you should specify the callInput { "attn_procs_from_safetensors": true } to force safetensors loading for other filenames.
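
For example, with a made-up URL whose filename gives no hint about the format:

{
   "callInputs": {
     "MODEL_ID": "runwayml/stable-diffusion-v1-5",
     "MODEL_PRECISION": "fp16",
     "MODEL_REVISION": "fp16",
     // Hypothetical URL; no ".safetensors" in the name, so force it:
     "attn_procs": "https://example.com/downloads/my-lora",
     "attn_procs_from_safetensors": true
   }
}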