How to use the tar archive of a trained model in AUTOMATIC1111

I would like to see whether my trained model gives the same results as my other training done in Colab, so I want to use the same interface as I did in Colab.
If I put the tar into the AUTOMATIC1111 models folder, it doesn't load, and if I decompress it, the result of any prompt looks like noise.

Hope someone else can chime in here since I don't use AUTOMATIC1111. Traditionally you had to convert diffusers back to a .ckpt file with convert_diffusers_to_original_stable_diffusion.py, but I think I read somewhere that they support direct use of diffusers models now (and with safetensors!).
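If you do end up needing the conversion route, the invocation looks roughly like this. Just a sketch: all the paths are placeholders for wherever your unpacked diffusers folder and your AUTOMATIC1111 install actually live.

import subprocess

# Rough sketch: convert a diffusers-format folder back into a single .ckpt
# that AUTOMATIC1111 can load. All paths below are placeholders.
subprocess.run(
    [
        "python", "convert_diffusers_to_original_stable_diffusion.py",
        "--model_path", "/path/to/unpacked/holidayai",  # folder with model_index.json, unet/, vae/, ...
        "--checkpoint_path", "/path/to/stable-diffusion-webui/models/Stable-diffusion/holidayai.ckpt",
        "--half",  # optional: save fp16 weights to halve the file size
    ],
    check=True,
)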

I'm wondering if maybe it is loading and there's just something wrong with your fine-tuned model. One easy'ish way to check would be to run it in docker-diffusers-api again and see if the results are the same. What modelInputs and callInputs did you use to fine-tune the model?
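Another way to take AUTOMATIC1111 out of the equation entirely is to load the unpacked folder straight into a bare diffusers pipeline and generate an image. Rough sketch, assuming the tar unpacks to a diffusers-format directory; the path is a placeholder:

import torch
from diffusers import StableDiffusionPipeline

# If this also produces noise, the problem is the fine-tuned model itself,
# not how AUTOMATIC1111 is loading it.
pipe = StableDiffusionPipeline.from_pretrained(
    "/path/to/unpacked/holidayai",  # placeholder: the folder containing model_index.json
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a couple photo of sks", num_inference_steps=40).images[0]
image.save("sanity_check.png")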

Anyway, hopefully someone with more experience with AUTOMATIC1111 can chime in.

AUTOMATIC1111 uses .safetensors or .ckpt, so unarchiving the tar is probably the right idea. What format does the tar unarchive to? The noise results are likely due to misconfiguration. Did you train from a v1.x or v2.x model? If you upload it to Hugging Face and provide a link, I'll be happy to try it out.
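For reference, here's a quick way to check what the tar actually unpacked into (a diffusers-style folder vs. a single checkpoint file); the path is a placeholder. Also worth noting: if the base was a v2.x model, AUTOMATIC1111 generally wants the matching v2 inference .yaml sitting next to the .ckpt/.safetensors, and a missing yaml is a common cause of pure-noise output.

from pathlib import Path

root = Path("/path/to/unpacked/model")  # placeholder

if (root / "model_index.json").exists():
    # diffusers layout: subfolders like unet/, vae/, text_encoder/, scheduler/
    print("diffusers-format folder:", sorted(p.name for p in root.iterdir()))
else:
    # otherwise look for a single-file checkpoint AUTOMATIC1111 can load directly
    print("checkpoint files:", list(root.glob("*.ckpt")) + list(root.glob("*.safetensors")))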


Yes, safetensors works. The point is that when I decompress it, AUTOMATIC1111 finds files it can use in two folders: unet and another one.
Whichever one I try, I end up with just noise, so I believe the problem is the config.
With this same model I get results in the docker.
I wish I were able to share it, but the model is trained on my face.
I'll share the settings here soon.
Model

{
  "seed": -1,
  "width": 512,
  "height": 512,
  "scale_lr": true,
  "adam_beta1": 1,
  "adam_beta2": 1,
  "local_rank": 1,
  "resolution": 512,
  "adam_epsilon": 1,
  "lr_scheduler": "constant",
  "learning_rate": 5000000,
  "max_grad_norm": 1,
  "use_8bit_adam": true,
  "guidance_scale": 2,
  "instance_prompt": "a couple photo of sks",
  "lr_warmup_steps": 50,
  "max_train_steps": 30,
  "mixed_precision": "no",
  "num_class_images": 100,
  "num_train_epochs": 10,
  "adam_weight_decay": 1,
  "prior_loss_weight": 1,
  "train_text_encoder": true,
  "num_inference_steps": 40,
  "gradient_checkpointing": false,
  "with_prior_preservation": true,
  "gradient_accumulation_steps": 20
}

Call

{
  "MODEL_ID": "stable-diffusion-2-1-base",
  "MODEL_URL": "https://pub-bdad4fdd97ac4830945e90ed16298864.r2.dev/diffusers/models--stabilityai--stable-diffusion-2-1-base.tar.zst"
}

And then in inference:
Model

{
  "seed": -1,
  "width": 512,
  "height": 512,
  "prompt": "A gorgeous sks realistic photo, insanely accurate faces, couple photo, romantic, high-quality studio professional photography, elegant Christmas decorated homey background, cinematic lights, detailed perfect eyes, 8 k high definition, insanely detailed, elegant, Santa hats, peach color lips",
  "guidance_scale": 4,
  "negative_prompt": "tiling, multiple photos, red lips, low resolution, watermarks, watermark, text, letters, baby, disfigured, deformed, poorly drawn, extra limbs, blurry, mutated hands, ugly, mutilated, extra fingers, bad anatomy, malformed, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck",
  "num_inference_steps": 140,
  "num_images_per_prompt": 10
}

And in the call:

{
  "MODEL_ID": "holidayai",
  "PIPELINE": "PNDMScheduler",
  "SCHEDULER": "KarrasVePipeline",
  "custom_pipeline_method": "text2img",
  "xformers_memory_efficient_attention": true
}