Score:5

Installing and running Stable Diffusion on 64-bit Ubuntu 22.04 LTS


https://github.com/CompVis/stable-diffusion/

Does anyone have this working on 64-bit Ubuntu 22.04 LTS? Could you share the steps to get it working, or link to a known tested/working guide for the same?

Have you tried following their instructions? That is: clone the repository, install the Conda dependencies, and create the Conda environment per the git repository's readme.
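For reference, those steps boil down to something like this (the ldm environment name comes from the repo's environment.yaml):

git clone https://github.com/CompVis/stable-diffusion/
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
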
Adam Monsen:
Yes, for an hour or so a day for the last few days. :-) I was kinda looking for the "easy button". This is a daunting install. I'm not even sure if I have a GPU that works. I saw mention of CPU-only usage, but, seriously, this is way harder to just get going than your average `apt install`. Or maybe it's just that the docs are not well-written? I'll keep trying and report back...
Invention1:
So far for me, the Conda environment coughs up an error that I can't debug, and I'm kinda done right there before even getting started on Stable Diffusion. This installation is not for the faint-of-terminal.
Score:6

Got it. I'll write it up in case it helps another. This will initially only cover CPU "sampling" (generating an image) until I get GPU sampling working. Sampling should run entirely offline.

install with pip

  1. pip install --upgrade diffusers transformers scipy torch
  2. sudo apt install git-lfs
  3. clone the git repository at https://huggingface.co/runwayml/stable-diffusion-v1-5 (you have to log in or sign up first and accept their license agreement); one possible command sequence is sketched just below
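
For step 3, assuming you have already accepted the license on huggingface.co, the clone boils down to something like this (git-lfs is needed because the model weights are stored in Git LFS, and you may be prompted for your Hugging Face credentials):

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
cd stable-diffusion-v1-5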

Then you can create a small Python script (inside your local working copy of the cloned git repo above) and run it to try sampling for yourself:

from diffusers import StableDiffusionPipeline

# load the model weights from the current directory (the cloned repo)
pipe = StableDiffusionPipeline.from_pretrained('.')

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")

easier and better method

https://github.com/invoke-ai/InvokeAI#installation

This provides a really nice web GUI, too.

onboard GPU note

My GPU shows up as Intel CometLake-S GT2 [UHD Graphics 630] in the output of `lspci | grep VGA` or `neofetch`; `screenfetch` calls it Mesa Intel(R) UHD Graphics 630 (CML GT2). Either way, I don't know how to use this GPU for sampling (or whether it is even possible).
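
For what it's worth, a quick way to check whether PyTorch can see a CUDA-capable device (it won't for an onboard Intel chip like this, since PyTorch's "cuda" device effectively means an NVIDIA card):

import torch
print(torch.cuda.is_available())  # False here, so pipe.to("cuda") would fail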

Score:3

From your comment:

I was kinda looking for the "easy button"

A fairly large portion (probably a majority) of Stable Diffusion users currently use a local installation of the AUTOMATIC1111 web-UI. There's an installation script that also serves as the primary launch mechanism (performs Git updates on each launch):

sudo apt install wget git python3 python3-venv # system dependencies
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
# Download model file(s) (e.g. Hugging Face account, which does require an account and login) and install into `models/Stable-Diffusion` subdirectory
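
If the launch succeeds, the web UI should be served locally, at http://127.0.0.1:7860 by default.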

Full dependencies and optional dependencies are on this page.

This particular repository seems to move at a lightning pace, and has quickly added a number of well-documented features.

Snowcrash:
There's never an 'easy' button when it comes to Python. E.g., the above gets to `################################################################ Launching launch.py... ################################################################ Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] Commit hash: 828438b4a190759807f9054932cae3a8b880ddf1 Installing torch and torchvision` and fails with `RuntimeError: Error running command.`.
Score:0
jan

Late to the party, but Stable Diffusion 2.1 (base) is just as simple:

from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch

prompt = "Smurf village in summer"
iterations = 100  # number of denoising steps

model_id = "stabilityai/stable-diffusion-2-1-base"

print("Loading model")
# pull the scheduler config from the model repo's scheduler/ subfolder
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
# float16 halves memory use, but requires a CUDA-capable (NVIDIA) GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler,
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(prompt, guidance_scale=9, num_inference_steps=iterations).images[0]
image.save("output.jpg", 'JPEG', quality=70)
print("Image saved as output.jpg")

Works like a charm for me; give it a try. The requirements are much the same as stated above (transformers is needed for the text encoder):

pip install pillow diffusers transformers torch

I wrote a little prose around this here, but essentially this is what you need to run.

Hope this helps!


Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding. In Alison’s studies, for example, though people could accurately recall how many questions had been asked in their conversations, they didn’t intuit the link between questions and liking. Across four studies, in which participants were engaged in conversations themselves or read transcripts of others’ conversations, people tended not to realize that question asking would influence—or had influenced—the level of amity between the conversationalists.