Stable Diffusion on Hugging Face
For more information, you can check out the official blog post.
Stable Diffusion is a latent text-to-image diffusion model capable of generating photorealistic images from any text input. For more detailed instructions, use cases, and examples in JAX, follow the instructions here. Model Description: This is a model that can be used to generate and modify images based on text prompts. Resources for more information: GitHub Repository, Paper. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people.
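As a rough illustration of the text-to-image use, a minimal sketch with the Diffusers library might look like the following; the checkpoint name and prompt are assumptions chosen for the example, not values taken from this page.

```python
# Minimal text-to-image sketch with Diffusers (checkpoint and prompt are assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed v1 checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]          # the pipeline returns PIL images
image.save("astronaut.png")
```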
This model card focuses on the model associated with the Stable Diffusion v2 model, available here. This stable-diffusion-2 model is resumed from the stable-diffusion-2-base (512-base-ema) checkpoint and resumed for additional steps on 768x768 images. Model Description: This is a model that can be used to generate and modify images based on text prompts. Resources for more information: GitHub Repository.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes. The model was not trained to produce factual or true representations of people or events, and therefore using it to generate such content is out of scope for the abilities of this model. Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

Running the pipeline: if you don't swap the scheduler, it will run with the default DDIM; in the sketch below it is swapped for EulerDiscreteScheduler.
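A hedged sketch of that scheduler swap for the stable-diffusion-2 checkpoint might look like this (model id and prompt are assumptions based on the public checkpoint naming):

```python
# Sketch: swapping the default scheduler for EulerDiscreteScheduler.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2"  # assumed checkpoint name
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```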
Training Data: The model developers used the following dataset for training the model:
Latent diffusion applies the diffusion process over a lower-dimensional latent space to reduce memory and compute complexity. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1 on GitHub, and checkpoints from several organizations on the Hugging Face Hub; explore these organizations to find the best checkpoint for your use case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo. To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability; they are applicable to all Stable Diffusion pipelines.
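As a concrete illustration of the latent-space point above, the sketch below encodes an image-sized tensor with the pipeline's VAE and prints the much smaller latent it produces; the checkpoint name and sizes are assumptions for illustration.

```python
# Sketch: the VAE maps a 3x512x512 image into a 4x64x64 latent, and the UNet
# denoises in that smaller space (checkpoint name and sizes are assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

fake_image = torch.randn(1, 3, 512, 512)   # stand-in for a normalized RGB image
with torch.no_grad():
    latents = pipe.vae.encode(fake_image).latent_dist.sample()

print(fake_image.numel(), latents.numel())  # 786432 vs 16384: ~48x fewer elements
print(latents.shape)                        # torch.Size([1, 4, 64, 64])
```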
This repository contains Stable Diffusion models trained from scratch and will be continuously updated with new checkpoints. The following list provides an overview of all currently available models, with more coming soon; instructions are available here. New stable diffusion model: Stable Diffusion 2.0, with the same number of parameters in the U-Net as version 1.5. The above model is finetuned from SD 2.0.
The Stable Diffusion 2 release updates the original Stable Diffusion. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image; a hedged example of one of these tasks (inpainting) is sketched below. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
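The following inpainting sketch is one hedged example of those tasks; the checkpoint name and image URLs are assumptions shown only to illustrate the call signature.

```python
# Sketch: Stable Diffusion 2 inpainting (checkpoint name and URLs are assumed).
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/room.png")       # assumed input image
mask_image = load_image("https://example.com/room_mask.png")  # white = region to repaint

prompt = "a modern armchair in the corner of a living room"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("inpainted.png")
```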
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for, and the model's ability to generate content with non-English prompts is significantly worse than with English-language prompts. Intended uses include research on generative models; intentionally promoting or propagating discriminatory content or harmful stereotypes is a misuse. We currently provide several checkpoints, including a base-ema variant. See the documentation on reproducibility here for more information; a minimal seeded-generation sketch follows.
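The sketch below shows seeded, reproducible generation under an assumed checkpoint name: passing a torch.Generator with a fixed seed makes repeated calls return the same image.

```python
# Sketch: reproducible generation via a seeded torch.Generator (checkpoint assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a small cabin in a snowy forest", generator=generator).images[0]
# Re-running with the same seed and settings yields the same image.
```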
Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. This tutorial walks you through how to generate faster and better with the DiffusionPipeline.
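A hedged sketch of the usual speed and quality knobs, assuming a v1.5 checkpoint: half precision, a faster multistep scheduler with fewer steps, and guidance_scale.

```python
# Sketch: common knobs for faster/better generation (checkpoint and values assumed).
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # half precision halves memory
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "portrait photo of an old warrior chief",
    num_inference_steps=25,   # multistep solvers need far fewer steps than the default
    guidance_scale=7.5,       # how strongly the image follows the prompt
).images[0]
```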
To quickly try out the model, you can use the Stable Diffusion Space. For more details on how the whole Stable Diffusion pipeline works, please have a look at this blog post. Misuse and Malicious Use: excluded uses are described below. Intended research uses include probing and understanding the limitations and biases of generative models. The safety checker works by checking model outputs against known hard-coded NSFW concepts. We observe some degree of memorization for images that are duplicated in the training data. Currently, six Stable Diffusion checkpoints are provided, which were trained as follows. The most obvious step to take to improve quality is to use better checkpoints. The upscaler model was trained on crops of size 512x512 and is a text-guided latent upscaling diffusion model; a sketch of running it follows.
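For the text-guided latent upscaler mentioned above, a minimal sketch might look like the following; the checkpoint name and input image URL are assumptions.

```python
# Sketch: text-guided 4x latent upscaling (checkpoint name and input URL are assumed).
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = load_image("https://example.com/low_res_cat.png").resize((128, 128))
image = pipe(prompt="a white cat", image=low_res).images[0]  # ~4x larger output
image.save("upscaled_cat.png")
```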