runtime error

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
```
pip install accelerate
```
Loading pipeline components...:   0%|          | 0/5 [00:00<?, ?it/s]
Loading pipeline components...:  40%|████      | 2/5 [00:34<00:51, 17.29s/it]
Loading pipeline components...: 100%|██████████| 5/5 [00:36<00:00,  5.98s/it]
Loading pipeline components...: 100%|██████████| 5/5 [00:36<00:00,  7.34s/it]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Traceback (most recent call last):
  File "/home/user/app/app.py", line 13, in <module>
    pipe.load_lora_weights(lora_model)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 106, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
