Stable Diffusion is an artificial intelligence (AI) model that creates images. It works similarly to other generative AI models like ChatGPT. When provided with a text prompt, Stable Diffusion creates images based on its training data.
Stable Diffusion is a computer program that creates images when provided with text prompts. For example, the prompt "apple" would produce an image of an apple. It can also handle more complicated prompts, such as rendering an apple in a specific artistic style.
In addition to generating images, it can replace parts of an existing image and extend images to make them bigger. Adding or replacing elements within an image is called inpainting, and extending an image to make it bigger is called outpainting. These processes can alter any image, whether the original image was made with AI or not.
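As a rough sketch of the inpainting idea (not Stable Diffusion's actual algorithm), a mask marks which pixels should be regenerated while everything outside the mask is preserved. In the toy NumPy example below, a constant placeholder array stands in for the content a real model would generate for the masked region:

```python
import numpy as np

# Toy illustration of inpainting: replace only the masked region of an
# existing image, leaving the rest untouched.
image = np.arange(16, dtype=float).reshape(4, 4)  # stand-in "existing image"

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                # the region the user wants replaced

generated = np.full((4, 4), -1.0)    # stand-in for model-generated content

# Keep original pixels where mask is False, use generated pixels where True.
inpainted = np.where(mask, generated, image)
```

Outpainting works the same way conceptually, except the mask covers new canvas added around the original image's borders.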
The Stable Diffusion model is open source, so anyone can use it.
AI can generate images in several different ways, but Stable Diffusion uses something that's known as a latent diffusion model (LDM). It starts with random noise that resembles an analog television's static. From that initial static, it goes through many steps to remove noise from the picture until it matches the text prompt. This is possible because the model was trained by adding noise to existing images, so it's essentially just reversing that process.
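The denoising loop described above can be sketched in a few lines of toy Python. This is purely illustrative: the stand-in "noise predictor" below peeks at a known target image, whereas the real model estimates the noise with a neural network conditioned on the text prompt:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))          # pretend this is the image the prompt describes
image = rng.standard_normal((8, 8))  # start from pure random noise ("TV static")

STEPS = 50
for step in range(STEPS):
    # A real model *predicts* the noise from its training; this toy
    # version cheats by comparing against the known target.
    predicted_noise = image - target
    # Remove a fraction of the predicted noise at each step.
    image = image - predicted_noise / (STEPS - step)

error = np.abs(image - target).mean()  # near zero after the final step
```

The key point the sketch captures is that generation is many small denoising steps, each removing a bit of the noise, rather than a single leap from static to finished picture.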
Stable Diffusion was trained on many images from the internet, primarily from websites like Pinterest, DeviantArt, and Flickr. Each image was captioned with text, which is how the model knows what different things look like, can reproduce various art styles, and can take a text prompt and turn it into an image. The model can also be fine-tuned using other sets of images to produce different results.
Stable Diffusion is used to generate images based on text prompts and to alter existing images using the processes of inpainting and outpainting. For example, it can create an entire image based on a vivid text description, or it can replace a small portion of an existing image.
Stable Diffusion can create photorealistic images that are difficult to differentiate from the real thing and images that are tough to tell apart from hand-drawn or painted artwork. It can also turn out images that are clearly fake depending on the prompts and other factors.
One way to spot AI-generated art is to look at the hands, as Stable Diffusion and other models have a lot of trouble in that area. If the subject of an image is conspicuously hiding their hands, that's a tip-off that someone used some clever prompt engineering to get around the shortcomings of the AI model. Keep in mind, however, that AI models are changing incredibly fast, so these shortcomings are likely to be short-lived.
Images generated by Stable Diffusion can theoretically be used for any purpose, but there are a number of pitfalls related to AI-generated content.
Because AI image generators have to learn about objects from somewhere, their developers scraped the internet for art with descriptive metadata. They did so without permission from the original artists, which raises copyright issues.
This issue is particularly thorny because Stable Diffusion doesn't create its images from scratch; it cobbles them together from ones it has studied. So in both training and generation, it uses other artists' work whether they've granted permission or not. Sites like DeviantArt have only avoided mass exits by letting users opt out of having their art used to train AI systems.
The subject of copyrighting works that were created in part by AI is also murky, as copyright applications for works that included AI-generated elements have been refused. Despite that, as AI-driven image generation becomes more prevalent, it threatens the livelihoods of traditional artists, who stand to lose work to this cheaper, "easier" method.
FAQ

"AI art" is a blanket term for Stable Diffusion, Midjourney, DALL-E, and other natural-language image generators. Each may use different methods to train and create pictures, but they all fall under the "AI art" description.
AI art has trouble with both hands and teeth. This is because while generators "know," generally, what these body parts are, they don't understand how many fingers or teeth human beings typically have.