################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on sduser user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################

Python 3.10.12 (main, Feb  4 2025, 14:57:36) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing xformers
Traceback (most recent call last):
  File "/app/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/app/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/app/stable-diffusion-webui/modules/launch_utils.py", line 402, in prepare_environment
    run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
  File "/app/stable-diffusion-webui/modules/launch_utils.py", line 144, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "/app/stable-diffusion-webui/modules/launch_utils.py", line 116, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install xformers.
Command: "/app/stable-diffusion-webui/venv/bin/python" -m pip install -U -I --no-deps xformers==0.0.23.post1 --prefer-binary
Error code: 1
stdout: Collecting xformers==0.0.23.post1
  Downloading xformers-0.0.23.post1-cp310-cp310-manylinux2014_x86_64.whl.metadata (1.0 kB)
  Downloading xformers-0.0.23.post1-cp310-cp310-manylinux2014_x86_64.whl (213.0 MB)
     0.0/213.0 MB ? eta -:--:--
stderr: WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('_ssl.c:990: The handshake operation timed out'))': /simple/xformers/
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', ConnectionResetError(104, 'Connection reset by peer'))': /packages/f4/89/ce8e936d3e64b3b565c16312dd6446d54f6e485f864130702c6b3b3cbe7c/xformers-0.0.23.post1-cp310-cp310-manylinux2014_x86_64.whl.metadata
WARNING: Connection timed out while downloading.
error: incomplete-download

× Download failed because not enough bytes were received (0 bytes/213.0 MB)
╰─> URL: https://files.pythonhosted.org/packages/f4/89/ce8e936d3e64b3b565c16312dd6446d54f6e485f864130702c6b3b3cbe7c/xformers-0.0.23.post1-cp310-cp310-manylinux2014_x86_64.whl

note: This is an issue with network connectivity, not pip.
hint: Consider using --resume-retries to enable download resumption.
~/stable-diffusion-webui
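The failure above is a flaky proxy connection, not a dependency problem: pip itself says so and hints at `--resume-retries` (available in recent pip releases) to resume the 213 MB wheel download. A minimal retry wrapper, reusing the venv path and pin from the log (the retry count and sleep interval are arbitrary choices), might look like:

```shell
#!/usr/bin/env bash
# Hedged sketch: retry a flaky network command a few times before giving up.
# retry N CMD... runs CMD up to N times, pausing briefly between attempts.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed; retrying..." >&2
    sleep 1
  done
  return 1
}

# Usage against the exact command from the log (commented out; needs network):
# retry 5 "/app/stable-diffusion-webui/venv/bin/python" -m pip install \
#     -U -I --no-deps xformers==0.0.23.post1 --prefer-binary --resume-retries 3
```

Once the wheel is installed into the venv, re-running launch.py skips this step, since `run_pip` only fires when the package is missing.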
---
  0%|          | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(wan01r2cq08hz6h)', <gradio.routes.Request object at 0x7f43ce887850>, '11111', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "/app/modules/call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  ...
  File "/opt/conda/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 4096, 8, 40) (torch.float16)
     key         : shape=(2, 4096, 8, 40) (torch.float16)
     value       : shape=(2, 4096, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`[email protected]` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40
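Every backend in the error above is rejected with "xFormers wasn't build with CUDA support", so the wheel in this venv is a CPU-only build while the model runs in float16 on the GPU. A sketch of diagnosing and replacing it, assuming the venv path from the first traceback and assuming the PyTorch wheel index carries a matching CUDA build of xformers (the URL scheme below is inferred from PyTorch's published cuXYZ indexes, not confirmed by the log):

```shell
#!/usr/bin/env bash
PY="/app/stable-diffusion-webui/venv/bin/python"   # venv path from the install log

# Map a torch CUDA version string ("12.1") to the matching PyTorch wheel index
# URL ("https://download.pytorch.org/whl/cu121") by dropping the dot.
cuda_index_url() {
  echo "https://download.pytorch.org/whl/cu${1//./}"
}

# Usage on the GPU machine (commented out so the sketch stays inert):
# "$PY" -m xformers.info                          # lists which operators were built
# CUDA=$("$PY" -c 'import torch; print(torch.version.cuda)')
# "$PY" -m pip install -U -I --no-deps xformers==0.0.23.post1 \
#     --index-url "$(cuda_index_url "$CUDA")"
```

If reinstalling is not an option, launching the webui without the xformers optimization (for example with its `--opt-sdp-attention` flag, which uses PyTorch's built-in scaled-dot-product attention) sidesteps the broken wheel entirely.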
To create a public link, set `share=True` in `launch()`.
Startup time: 50.9s (prepare environment: 41.7s, import torch: 3.8s, import gradio: 1.2s, setup paths: 1.6s, initialize shared: 0.5s, other imports: 0.5s, load scripts: 0.7s, create ui: 0.7s, gradio launch: 0.1s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa2
Loading weights [6ce0161689] from /app/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /app/configs/v1-inference.yaml
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: xformers... done.
Model loaded in 17.7s (calculate hash: 12.5s, create model: 2.8s, apply weights to model: 1.9s, calculate empty prompt: 0.2s).
100%|██████████| 20/20 [00:01<00:00, 12.15it/s]
Total progress: 100%|██████████| 20/20 [00:01<00:00, 15.26it/s]
Total progress: 100%|██████████| 20/20 [00:01<00:00, 18.42it/s]