In Finder select that file, then go to the menu and select File > Get Info. In the window that appears, select the filename and change it to the filename of the model, except with the file extension .yaml instead of .ckpt; press return on the keyboard (confirm changing the file extension if prompted), and place it in the same folder as the model (e.g. if you downloaded the 768-v-ema.ckpt model, rename it to 768-v-ema.yaml and put it in stable-diffusion-webui/models/Stable-diffusion along with the model).

Also available is a Stable Diffusion 2.0 depth model (512-depth-ema.ckpt). Download the v2-midas-inference.yaml configuration file by holding down option on the keyboard and clicking here, then rename it with the .yaml extension in the same way as mentioned above and put it in stable-diffusion-webui/models/Stable-diffusion along with the model. Note that this model works at image dimensions of 512 width/height or higher instead of 768.

If you encounter errors when trying to start the Web UI with ./webui.sh, try deleting the repositories and venv folders from your stable-diffusion-webui folder, then update the web UI with git pull before running ./webui.sh again.

Poor Performance: Currently, GPU acceleration on macOS uses a lot of memory. If performance is poor (if it takes more than a minute to generate a 512x512 image with 20 steps with any sampler), first try starting with the --opt-split-attention-v1 command line option (i.e. ./webui.sh --opt-split-attention-v1) and see if that helps.
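If you would rather rename the configuration file in Terminal than in Finder, the steps above look roughly like this. This is only a sketch: it runs in a temporary sandbox directory, and the download filename (downloaded-config.yaml) is a placeholder; substitute your real download location, webui folder, and model name.

```shell
# Sandbox demo with placeholder paths; swap in your actual Downloads folder
# and stable-diffusion-webui location when doing this for real.
tmp=$(mktemp -d)
mkdir -p "$tmp/stable-diffusion-webui/models/Stable-diffusion"
touch "$tmp/downloaded-config.yaml"   # stands in for the config file you downloaded
# Rename the config to match the model's filename, with .yaml instead of .ckpt
# (e.g. 768-v-ema.ckpt -> 768-v-ema.yaml), and put it next to the model.
mv "$tmp/downloaded-config.yaml" \
   "$tmp/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml"
ls "$tmp/stable-diffusion-webui/models/Stable-diffusion"
```

Doing it with mv sidesteps Finder's "confirm changing the file extension" prompt entirely, since the shell does not second-guess extension changes.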