A Coding Guide to Compare Three Stability AI Diffusion Models (v1.5, v2-Base & SD3-Medium) Side-by-Side in Google Colab Using Gradio

In this hands-on tutorial, we unlock the creative potential of three of the industry's leading image-generation models: Stable Diffusion v1.5, Stability AI's v2-base, and the cutting-edge Stable Diffusion 3 Medium. Running entirely in Google Colab with a Gradio interface, we compare the three powerful pipelines side by side, with rapid iteration and smooth GPU-accelerated inference. Whether you are a marketer looking to elevate your brand's visual storytelling or a developer eager to prototype AI-driven content workflows, this tutorial shows how Stability AI's open-source models can be deployed instantly and at no infrastructure cost, letting you focus on storytelling, engagement, and driving real-world results.
!pip install huggingface_hub
from huggingface_hub import notebook_login
notebook_login()
We install the huggingface_hub package and call notebook_login(), which prompts us for a Hugging Face access token so that gated model weights (such as Stable Diffusion 3 Medium) can be downloaded later in the notebook.
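If you prefer not to use the interactive widget, huggingface_hub also supports a non-interactive login with a token string; a minimal sketch is shown below (the token value is a placeholder you would replace with your own, created under Settings → Access Tokens on huggingface.co):
# Alternative: authenticate non-interactively with a Hugging Face access token.
from huggingface_hub import login

login(token="hf_xxx_your_token_here")  # placeholder, not a real credential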
!pip uninstall -y torchvision
!pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu118
!pip install --upgrade diffusers transformers accelerate safetensors gradio pillow
We first uninstall any existing torchvision to clear potential conflicts, then reinstall torch and torchvision from the CUDA 11.8-compatible PyTorch wheel index, and finally upgrade the core libraries — diffusers, transformers, accelerate, safetensors, gradio, and pillow — to ensure we have the latest, GPU-ready builds for constructing and running the image-generation pipelines.
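After the reinstall (and a runtime restart if Colab asks for one), a quick optional check confirms that the CUDA-enabled wheels were actually picked up before we load any pipelines; this small sanity snippet is our own addition:
# Sanity check: report library versions and whether the CUDA build of PyTorch is active.
import torch, torchvision, diffusers
print(torch.__version__, torchvision.__version__, diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())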
import torch
from diffusers import StableDiffusionPipeline, StableDiffusion3Pipeline
import gradio as gr
device = "cuda" if torch.cuda.is_available() else "cpu"
We import PyTorch alongside both the Stable Diffusion v1 and v3 pipelines from the diffusers library, as well as Gradio for building the interactive demo. The code then checks for CUDA availability and sets the device variable to "cuda" if a GPU is present; otherwise, it falls back to "cpu", ensuring the models run on the best available hardware.
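Optionally, we can also print which accelerator was detected; on a Colab GPU runtime this shows the card name and the total VRAM that the three fp16 pipelines will share (an illustrative check of our own, not required by the tutorial):
# Report the selected device; on a GPU runtime this prints the card name and total memory.
if device == "cuda":
    props = torch.cuda.get_device_properties(0)
    print(f"Using GPU: {props.name} ({props.total_memory / 1024**3:.1f} GB VRAM)")
else:
    print("Using CPU: image generation will be very slow")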
pipe1 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe1.enable_attention_slicing()
We download the Stable Diffusion v1.5 model in half precision (float16) without the built-in safety checker, transfer it to the selected device (GPU, if available), and then enable attention slicing to reduce peak VRAM usage during image generation.
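If attention slicing alone is not enough on a smaller GPU, diffusers also offers model CPU offload (backed by accelerate, which we installed above). The sketch below is an optional alternative of our own, not part of the original flow; it trades some speed for a lower VRAM peak:
# Optional alternative: offload pipeline submodules to the CPU between forward passes,
# moving each to the GPU only while it runs. Flip the flag on a low-memory GPU.
USE_CPU_OFFLOAD = False
if USE_CPU_OFFLOAD and device == "cuda":
    pipe1.enable_model_cpu_offload()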
pipe2 = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe2.enable_attention_slicing()
We download the Stable Diffusion v2 "base" model in 16-bit precision without the default safety filter, transfer it to the selected device, and enable attention slicing to optimize memory usage during inference.
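Before wiring everything into a UI, it can be useful to run a quick smoke test on one of the loaded pipelines; this short optional snippet (the prompt is just an example of ours) renders and displays a single image directly in the Colab cell:
# Optional smoke test: generate one image with the v2-base pipeline and show it inline.
from IPython.display import display

test_image = pipe2("a watercolor lighthouse at dawn",
                   num_inference_steps=25, guidance_scale=7.5).images[0]
display(test_image)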
pipe3 = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe3.enable_attention_slicing()
We pull the Stable Diffusion 3 "medium" checkpoint in 16-bit precision (skipping the built-in safety checker), transfer it to the selected device, and enable attention slicing to reduce GPU memory usage during generation.
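Keeping all three fp16 pipelines resident on a single free-tier Colab GPU uses a large share of its memory; if you run into out-of-memory errors, a simple housekeeping step (our own addition, not from the original code) is to reclaim cached memory after loading:
# Optional: release Python-level references and cached CUDA memory left over from model loading.
import gc

gc.collect()
if device == "cuda":
    torch.cuda.empty_cache()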
def generate(prompt, steps, scale):
    img1 = pipe1(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    img2 = pipe2(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    img3 = pipe3(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    return img1, img2, img3
This function runs the same text prompt through all three loaded pipelines (pipe1, pipe2, pipe3) using the specified number of inference steps and guidance scale, then returns the first image from each, making it ideal for comparing outputs across SD v1.5, v2-base, and SD3-Medium.
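Because each pipeline otherwise samples its own random noise, a fairer side-by-side comparison fixes the seed. The variant below is a hedged sketch of ours (the seed value and function name are arbitrary) that passes an identically seeded generator to every model so runs are reproducible:
# Variant of generate() that seeds each pipeline identically so outputs are reproducible.
def generate_seeded(prompt, steps, scale, seed=42):
    images = []
    for pipe in (pipe1, pipe2, pipe3):
        gen = torch.Generator(device=device).manual_seed(seed)
        images.append(pipe(prompt, num_inference_steps=steps,
                           guidance_scale=scale, generator=gen).images[0])
    return tuple(images)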
def choose(selection):
    return f"✅ You selected: **{selection}**"
with gr.Blocks() as demo:
    gr.Markdown("## AI Social-Post Generator with 3 Models")
    with gr.Row():
        prompt = gr.Textbox(label="Prompt", placeholder="A vibrant beach sunset…")
        steps = gr.Slider(1, 100, value=50, step=1, label="Inference Steps")
        scale = gr.Slider(1.0, 20.0, value=7.5, step=0.1, label="Guidance Scale")
    btn = gr.Button("Generate Images")
    with gr.Row():
        out1 = gr.Image(label="Model 1: SD v1.5")
        out2 = gr.Image(label="Model 2: SD v2-base")
        out3 = gr.Image(label="Model 3: SD v3-medium")
    sel = gr.Radio(
        ["Model 1: SD v1.5", "Model 2: SD v2-base", "Model 3: SD v3-medium"],
        label="Select your favorite"
    )
    txt = gr.Markdown()
    btn.click(fn=generate, inputs=[prompt, steps, scale], outputs=[out1, out2, out3])
    sel.change(fn=choose, inputs=sel, outputs=txt)
demo.launch(share=True)
Finally, the Gradio app builds a three-column user interface where you can enter a text prompt, adjust the inference steps and guidance scale, then generate and view images from SD v1.5, v2-base, and SD3-Medium side by side. It also features a radio selector, allowing you to pick your preferred model, and displays a simple confirmation message when a choice is made.
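To turn the favorite-model selection into actual A/B-test data, the choose callback could also append each vote to a CSV. This is an optional extension of ours (the file name, format, and helper name are our own choices, not part of the original app):
# Optional extension: log each selection with a timestamp so preferences can be tallied later.
import csv
from datetime import datetime

def choose_and_log(selection):
    with open("model_votes.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), selection])
    return f"✅ You selected: **{selection}**"

# To use it, point the radio callback at the new function inside the gr.Blocks context:
# sel.change(fn=choose_and_log, inputs=sel, outputs=txt)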
In conclusion, by integrating Stability AI's state-of-the-art diffusion models into an easy-to-use Gradio app, you have seen how quickly you can prototype, compare, and publish striking images that resonate on today's platforms. From A/B-testing creative directions to automating campaign assets at scale, Stability AI's open models provide the performance, flexibility, and vibrant community support to transform your content pipeline.

Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in material science, he explores new advancements and creates opportunities to contribute.