Stable Diffusion
Stable Diffusion is an open-source diffusion model for generating images from textual descriptions. Note: as of writing there is rapid development both on the software and user side. Take everything you read here with a grain of salt.
How to Use
Usage instructions for both online and local use.
Getting started
- beta.dreamstudio.ai: official web service
- Official Github page
- ULTIMATE GUI RETARD GUIDE: step-by-step instructions for running Stable Diffusion on Windows with the newest features.
- basujindal fork: fork that uses less VRAM at the cost of speed
- waifu-diffusion fork: fork fine-tuned on anime-style images
gradio
gradio is a graphical user interface for generating images locally with Stable Diffusion. A short explanation of what the options for txt2img do:
- Prompt: textual description of what you want to generate.
- Sampling Steps: diffusion algorithms work by making small steps from random noise towards an image that fits the prompt. This is how many such steps should be done. Diminishing returns.
- Sampler: which sampling algorithm to use; use k-diffusion if you're unsure.
- Skip sample save: when ticked, do not save individual images to disk.
- Skip grid save: when ticked, do not save a grid of all images at the end.
- Increment seed: when ticked, explicitly set and increment the seed for each generation iteration. This makes it possible to recreate a specific image that you encounter in a larger run.
- DDIM ETA: amount of randomness when using DDIM.
- Sampling Iterations: how often to generate a set of images.
- Samples Per Iteration: how many images to generate at the same time. Increasing this value can improve performance but also requires more VRAM. The total number of images is this value multiplied by Sampling Iterations.
- Classifier-free Guidance Scale: how strongly the images match your prompt. Increasing this value results in images that resemble your prompt more closely (according to the model) but degrades image quality after a certain point.
- Seed: starting point for RNG. Keep this the same to generate the same (or almost the same) images multiple times.
- Width: width of individual images in pixels. Increasing this value requires more VRAM, and image coherence on large scales becomes worse as the resolution increases.
- Height: same as Width but for individual image height. The aspect ratio influences the content of generated images; if height is greater than width you get, for example, more portraits, while a greater width yields more landscapes.
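The interplay of Sampling Iterations, Samples Per Iteration, and Increment seed can be sketched in a few lines of Python (the function name is hypothetical, not part of gradio; this only models the bookkeeping, not the generation itself):

```python
def plan_run(seed, iterations, samples_per_iteration, increment_seed=True):
    """Return the seed used for each iteration and the total image count.

    With "Increment seed" ticked, each iteration gets its own seed, so any
    single image from a large run can later be recreated from its seed.
    """
    if increment_seed:
        seeds = [seed + i for i in range(iterations)]
    else:
        seeds = [seed] * iterations
    total_images = iterations * samples_per_iteration
    return seeds, total_images

seeds, total = plan_run(seed=42, iterations=3, samples_per_iteration=4)
# 3 iterations of 4 images each -> 12 images, one seed per iteration
```

To recreate one image from such a run, set Seed to the seed of its iteration and keep all other settings identical.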
Example Prompts
Baseline prompt for a photorealistic drawing of the face of a conventionally attractive woman:
thick lips, black hair, fantasy background, cinematic lighting, highly detailed, sharp focus, digital painting, art by junji ito and WLOP, professional photoshoot, instagram
Prompt Design
Guidelines for creating better prompts.
What To Write
Write text that would be likely to accompany the image you want. Typically this means that the text should simply describe the image. But this is only half of the process because a description is determined not just by the image but also the person writing the description.
Imagine for a moment that you were Chinese and had to describe the image of a person. Your word of choice would likely no longer be "person" because your native language would be Chinese and that is not how you would describe a person in Chinese. You wouldn't even use Latin characters to describe the image because the Chinese writing system is completely different. At the same time, the images of people that you would be likely to see would be categorically different; if you were Chinese you would primarily see images of other Chinese people. In this way the language, the way something is said, is connected to the content of images. Two terms that theoretically describe the same thing can be associated with very different images and any model trained on these images will implicitly learn these associations. This is very typical of natural language where there are many synonymous terms with very different nuances; just consider that "feces" and "shit" are very different terms even though they technically describe the same thing.
TLDR: when choosing your prompt, think not just about what's in the image but also who would say something like this.
Prompt Length
Be descriptive. The model does better if you give it longer, more detailed descriptions of what you want. Use redundant descriptions for parts of the prompt that you care about.
Note, however, that there is a hard limit on prompt length. Everything after a certain point - 75 or 76 CLIP tokens, depending on how you count - is simply cut off. It is therefore preferable to use keywords that describe what you want concisely and to avoid keywords unrelated to the image you want. Words that use Unicode characters (for example Japanese characters) require more tokens than words that use ASCII characters.
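The hard cutoff can be illustrated with a toy tokenizer. This is only a sketch: real CLIP tokenization is subword-based, so actual token counts differ (and are much higher for Japanese text), but the truncation behavior is the same idea.

```python
MAX_TOKENS = 75  # everything past this point is silently dropped

def truncate_prompt(prompt):
    """Toy model of the prompt limit: one token per whitespace-separated
    word. Real CLIP tokens are subwords, so real counts are higher."""
    tokens = prompt.split()
    return " ".join(tokens[:MAX_TOKENS])

long_prompt = " ".join(f"keyword{i}" for i in range(100))
kept = truncate_prompt(long_prompt)
# only the first 75 "words" survive; the rest have no effect on the image
```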
Punctuation
Use it. Separating keywords by commas, periods, or even null characters ("\0") improves image quality. It's not yet clear which type of punctuation or which combination works best - when in doubt just do it in a way that makes the prompt more readable to you.
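A simple way to get consistent punctuation is to keep keywords in a list and join them, as in this sketch (the helper is hypothetical, not part of any fork):

```python
def build_prompt(keywords):
    """Join keywords with comma separators, one of the punctuation styles
    reported to improve image quality."""
    return ", ".join(k.strip() for k in keywords)

build_prompt(["thick lips", "black hair", "cinematic lighting"])
# -> "thick lips, black hair, cinematic lighting"
```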
Emphasis
The common wisdom is that putting a keyword in square brackets or appending an exclamation mark increases its effect while putting a keyword in round brackets decreases its effect; using more brackets or exclamation marks results in a stronger change. However, this effect could not be observed when tested with simple prompts. Specifically, short test prompts specifying two different things were used to check how the image changes when one thing is strengthened with [] while the other is weakened with (). The test cases were flowers being red or blue and a woman being a doctor or a vampire. The specific prompts and samples are in the samples archive linked below.
Repeating a keyword, on the other hand, did work to increase its effect.
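Since repetition was the only emphasis mechanism that held up in testing, a prompt builder might emphasize keywords like this (a hypothetical helper, not part of any fork):

```python
def emphasize(keyword, strength=2):
    """Repeat a keyword to increase its effect, per the test results above.
    Keep the token limit in mind: every repetition costs tokens."""
    return ", ".join([keyword] * strength)

prompt = ", ".join(["portrait of a woman", emphasize("red flowers", 3), "oil painting"])
```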
Specificity
The model has essentially learned the distribution of images conditional on a certain prompt. For the training of neural networks the quality of features is important: the stronger the connection between the inputs and the outputs is, the easier it is for a neural network to learn the connection. In other words, if a keyword has a very specific meaning it is much easier to learn how it connects to images than if a keyword has a very broad meaning. In this way, even keywords that are used very rarely like "Zettai Ryouiki" can produce very good results because it's only ever used in very specific circumstances. On the other hand, "anime" does not produce very good results even though it's a relatively common word, presumably because it is used in many different circumstances even if no literal anime is present.
Choosing specific keywords is especially important if you want to control the content of your images. Also: the less abstract your wording is the better. If at all possible, avoid wording that leaves room for interpretation or that requires an "understanding" of something that is not part of the image. Even concepts like "big" or "small" are problematic because they are indistinguishable from objects being close or far from the camera. Ideally use wording that has a high likelihood to appear verbatim on a caption of the image you want.
Movement and Poses
If possible, choose prompts that are associated with only a small number of poses. A pose in this context means a physical configuration of something: the position and rotation of the image subject relative to the camera, the angles of the joints of humans/robots, the way a block of jello is being compressed, etc. The less variance there is in the thing that you're trying to specify the easier it is for the model to learn. Because movement by its very definition involves a dramatic change in the pose of the subject, prompts that are associated with movement frequently result in body horror like duplicate limbs.
TLDR: good image of human standing/sitting is easy, good image of human jumping/running is hard.
Miscellaneous
- Unicode characters (e.g. Japanese characters) work.
- Capitalization does not matter.
- At least some Unicode characters that are alternative versions of Latin characters get mapped to regular Latin characters. Full-width Latin characters as they're used in Japanese (e.g. ＡＢＣ) are confirmed to be converted.
- Extra spaces at the beginning and end of your prompt are simply discarded.
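The observed full-width-to-ASCII mapping and whitespace stripping behave like Unicode NFKC normalization followed by strip(), which can be checked directly in Python. This mimics the observed behavior; it is not Stable Diffusion's actual preprocessing code.

```python
import unicodedata

def normalize_like_sd(prompt):
    """Sketch of the observed preprocessing: full-width Latin characters are
    folded to their ASCII equivalents (NFKC compatibility folding does this)
    and surrounding whitespace is discarded."""
    return unicodedata.normalize("NFKC", prompt).strip()

normalize_like_sd("  ＡＢＣ  ")  # -> "ABC"
```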
Keywords
The most reliable way to find good keywords is to look at the keywords that are used to generate images similar to what you want. Alternatively, there are multiple websites that let you explore various art styles and other modifiers (see links below). Below are some (unconventional) known good keywords, as determined by using keywords as prompts on their own or in very short and simple prompts. The underlying assumption is that these keywords will also be good as part of large prompts; if they are not, please provide feedback. When the list tells you to avoid keywords, the reason is that they simply produce garbage. Keywords that produce unexpected unsafe outputs have an explicit warning. An archive with the samples used to judge these keywords is linked below.
Weebshit
Anime and other Japanese things:
- "anime": generic, mediocre anime-style images, looks somewhat like the 2000s. Since "anime" is associated with many low-quality/unrelated images a common strategy is to just specify a drawing and use Japanese words in your prompt to associate your prompt with what a Japanese person would be likely to draw (i.e. anime). For style variations try "アニメ" (Japanese way to write anime, looks more modern), "chibi", "Kyoto Animation", "light novel illustration", "shonen", "Studio Ghibli", "visual novel CG", or "Yusuke Murata" (artist of the One-Punch Man manga). Avoid "manga", "tankobon", and "waifu". Order of keywords is simply alphabetical.
- "cosplay": pictures of western people cosplaying. "コスプレ" is pictures of Japanese people cosplaying.
- "hentai": bad. "エロアニメ", "エロゲ", "エロ同人", and "エロ漫画" less bad. "エロゲ" and "エロ同人" also produce 3D.
- "ikemen": handsome Japanese men. Avoid "イケ面" (Japanese spelling).
- "Gothic Lolita": frilly black dresses.
- "manga", "tankobon": generic anime-style images, artifacts from text and paneling, also associated with pictures of physical copies. "漫画" and "マンガ" look better but also have artifacts. "漫画" seems to be more associated with manga for adults while "マンガ" is more associated with manga for children.
- "nendoroid": brand of plastic figures for characters from anime, manga, and video games. Avoid "ねんどろいど".
- "oneshota": cute anime boys.
- "Sweet Lolita": frilly pink dresses.
- "to-love-ru": characters from the franchise. Avoid "toraburu" and "To LOVEる".
- "Touhou", "Touhou Project": characters from the franchise. Avoid "東方".
- "waifu": modern Japanese women.
- "Zettai Ryouiki": short skirt in combination with stockings or socks, visible thighs. Avoid "絶対領域" (kanji spelling).
- "アイドル", "aidoru": Japanese idols. "アイドル" is mostly 3D, "aidoru" is mostly 2D.
- "ガンプラ", "gunpla": Gundam plastic models. Avoid "ganpura".
- "イラストレーション", "イラスト": illustrations in Japanese style (I think, definitely not "anime" style), "イラスト" looks more abstract than "イラストレーション".
- "美女", "美人": Japanese women, classical beauty standard.
- "男性": literal meaning is just man/male gender but the result is Japanese gay porn.
- "彼女", "kanojo": Japanese women, kanojo also contains 2D and pictures of couples.
- "可愛い", "かわいい": pronounced "kawaii", cute things. On its own "可愛い" produces pictures of birds, "かわいい" general cute things. However, good results were achieved with "可愛い" in long prompts so testing the single keywords may be inaccurate. Avoid "kawaii" and "カワイイ".
- "女": Chinese/Japanese women.
- "巨乳", "爆乳", "おっぱい": Japanese women with large breasts, either topless or wearing a bra.
Subreddits
Stable Diffusion has learned which kind of image gets posted to which subreddit. Unfortunately for most subreddits it has learned incomprehensible garbage, typically because the images contain a lot of text. Subreddits that are essentially just image dumps work pretty well though:
- "r/aww", "r/awwducational": cute images of cats and dogs. Avoid "aww".
- "r/battlestations", "battlestations": pictures of desktop PCs.
- "r/creepy": creepy images, mostly drawings of faces. Avoid "creepy".
- "r/EarthPorn", "EarthPorn": landscape photography.
- "r/evilbuildings", "evilbuildings": buildings that look like they're owned by a super villain or evil corporation. "evil buildings" is random skyscrapers.
- "r/eyes": bright blue eyes + conventionally attractive faces.
- "r/Fitness": muscular women. "Fitness" is just pictures of women working out. "Reddit fitness" seems to be interpreted similarly to "r/Fitness".
- "r/gardening": pictures of home gardens. "gardening" is pictures of garden work.
- "r/interestingasfuck": can give you cool textures but can also fuck up your images.
- "r/InternetIsBeautiful": abstract colorful images.
- "r/OldSchoolCool": vintage photographs, has more varied and interesting subjects compared to "vintage photograph".
- "r/SkyPorn", "SkyPorn": pictures of the sky.
All of the 100 largest subreddits were tested. The ones not listed here produced either garbage or unremarkable results.
Note that for some subreddits it has been confirmed that "/<subreddit>" and "<random letter>/<subreddit>" produce nearly identical results. These may be adversarial examples: in the training data there are presumably many images associated with the string "r/<subreddit>" and basically none with other letters. Instead of learning the meaning of "r/<subreddit>" SD may therefore have simply learned a meaning for "/<subreddit>" because with the training data the two terms were virtually interchangeable.
Miscellaneous
- "bobs and vagene", "Mr. Dr. Durga sir", "please do the needful": do not redeem the prompt
- "hodl": memecoins. "diamond hands" and "paper hands" are taken literally.
- "E=mc2": Albert Einstein.
Useful Links
- GFPGAN: Tool for fixing faces
- krea.ai: Website that lets you explore keywords
- promptoMANIA prompt builder
- clip-retrieval: Project that lets you determine the relationship between images and keywords, works in either direction. Online version here
- Archive of samples produced by individual keywords
- Google Arts & Culture: can be used to discover artists, art movements, mediums, etc.