
Fresh install of beta 280, can't load any LoRA models #351

@jnwebster

Description

I performed a clean install of https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.280
I downloaded the Fantasy_Classes_SD.safetensors model from https://civitai.com/models/287148/fantasy-classes-sd
I confirmed the model type is LoRA and the base model is SD 1.5
I copied the .safetensors file into the lora_models subdirectory
I launched fastsdcpu via ./start-webui.sh
I generated a small test image to initialize the pipeline
I navigated to the "Lora Models" tab in the webui
I confirmed that "Fantasy_Classes_SD" is listed among the available models
I clicked the "Load selected LoRA" button (per the traceback below, this ends up in pipeline.load_lora_weights; a sketch of that call follows this list)
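
For reference, here is a minimal standalone sketch of the call path the button triggers (the base model id and file paths are illustrative assumptions, not necessarily what fastsdcpu has loaded):

from diffusers import DiffusionPipeline

# Assumption: any SD 1.5-style pipeline stands in for whatever model the webui has loaded.
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

# This is the call that on_click_load_lora reaches via src/backend/lora.py.
pipe.load_lora_weights(
    "lora_models",                                 # directory the .safetensors file was copied into
    weight_name="Fantasy_Classes_SD.safetensors",
    adapter_name="Fantasy_Classes_SD",
)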
The console window reports the following errors:

Selected Lora Model :Fantasy_Classes_SD
Lora weight :0.5
LoRA adapter name : Fantasy_Classes_SD
Traceback (most recent call last):
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/diffusers/loaders/peft.py", line 352, in load_lora_adapter
    incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/peft/utils/save_and_load.py", line 158, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2624, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
	size mismatch for down_blocks.0.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for down_blocks.0.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.0.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for down_blocks.0.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for down_blocks.0.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for down_blocks.0.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.0.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for down_blocks.0.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for down_blocks.1.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for down_blocks.1.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.1.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for down_blocks.1.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for down_blocks.1.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for down_blocks.1.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.1.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for down_blocks.1.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for down_blocks.2.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for down_blocks.2.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.2.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for down_blocks.2.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for down_blocks.2.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for down_blocks.2.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for down_blocks.2.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for down_blocks.2.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.2.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.2.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.1.attentions.2.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for up_blocks.1.attentions.2.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for up_blocks.2.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.2.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.2.attentions.2.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.2.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.2.attentions.2.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 640]).
	size mismatch for up_blocks.2.attentions.2.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([640, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 32]).
	size mismatch for up_blocks.3.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for up_blocks.3.attentions.1.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.1.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.1.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.1.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for up_blocks.3.attentions.2.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.2.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for up_blocks.3.attentions.2.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 320]).
	size mismatch for up_blocks.3.attentions.2.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([320, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 32]).
	size mismatch for mid_block.attentions.0.proj_in.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for mid_block.attentions.0.proj_in.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).
	size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
	size mismatch for mid_block.attentions.0.proj_out.lora_A.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([32, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 1280]).
	size mismatch for mid_block.attentions.0.proj_out.lora_B.Fantasy_Classes_SD.weight: copying a param with shape torch.Size([1280, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 32]).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/gradio/blocks.py", line 2015, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/gradio/blocks.py", line 1562, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/anyio/to_thread.py", line 61, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2525, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 986, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/gradio/utils.py", line 865, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/src/frontend/webui/lora_models_ui.py", line 68, in on_click_load_lora
    load_lora_weight(
  File "/home/joel/fastsdcpu/src/backend/lora.py", line 69, in load_lora_weight
    pipeline.load_lora_weights(
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py", line 202, in load_lora_weights
    self.load_lora_into_unet(
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py", line 406, in load_lora_into_unet
    unet.load_lora_adapter(
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/diffusers/loaders/peft.py", line 377, in load_lora_adapter
    module.delete_adapter(adapter_name)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/home/joel/fastsdcpu/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1962, in __getattr__
    raise AttributeError(
AttributeError: 'Linear' object has no attribute 'delete_adapter'
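
The size mismatches above show the running UNet expecting 1024-dim inputs on attn2.to_k / attn2.to_v while the LoRA supplies 768-dim weights (768 is the SD 1.x text-encoder width, 1024 is SD 2.x), so the adapter and the currently loaded pipeline do not appear to match. A quick way to check which base the LoRA file itself targets (a sketch, assuming the file sits in lora_models/) is to read its tensor shapes directly:

from safetensors import safe_open

# Sketch: open the LoRA file and print the cross-attention LoRA shapes
# (path is an assumption based on the steps above).
with safe_open("lora_models/Fantasy_Classes_SD.safetensors", framework="pt") as f:
    for key in f.keys():
        if "attn2" in key and ("to_k" in key or "to_v" in key):
            print(key, tuple(f.get_tensor(key).shape))

If the inner dimension printed there is 768, the file is SD 1.5-style and the mismatch comes from the pipeline side; if it is 1024, the file targets an SD 2.x base despite the listing.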
