OptiPix

AI Image Generator — Free, Private, Runs in Your Browser

Generate images using Stable Diffusion and LCM models that run entirely on your device. No uploads, no server, no account needed.


Choose a model

Model license

SD Turbo weights are distributed under the Stability AI Non-Commercial Research Community License. Generated images are yours to use, but the underlying model is restricted to research and non-commercial use. OptiPix.art is a free tool funded by voluntary donations, which qualifies as non-commercial use. If you plan to use generated images in a paid product, check the license first.

All models we've evaluated

| Model | Size | Speed | Quality | Status |
| --- | --- | --- | --- | --- |
| SD Turbo (fp16) | ~2.4 GB | 2–10 s per image | ★★★★☆ | Recommended |
| LCM Dreamshaper v7 | ~800 MB | 3–10 s | ★★★☆☆ | Coming soon |
| Stable Diffusion 1.5 | ~1.7 GB | 30–90 s | ★★★★☆ | Coming soon |
| SDXL Turbo | ~2.1 GB | 4–8 s | ★★★★☆ | Coming soon |
| FLUX.1 schnell | ~12 GB | n/a | ★★★★★ | Too large |
| Google Imagen 3 | Server | n/a | ★★★★★ | Server-only |

Device requirements

| Tier | Hardware | Expected experience |
| --- | --- | --- |
| Recommended | 16 GB+ RAM, dedicated GPU (RTX 3060 / M2 Pro+), Chrome/Edge 137+ | 2–5 s per image after initial download |
| Minimum | 8 GB unified memory (M1 Mac / M2 MacBook Air), WebGPU + fp16 | 8–30 s per image, occasional OOM |
| Not supported | Phones, tablets, Safari, older laptops, GPUs without shader-f16 | Cannot run SD Turbo |
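The tier cutoffs above can be expressed as a small decision function. Here is a sketch in TypeScript; the probe-result shape and function names are our own illustrative assumptions, not OptiPix's actual code. In the browser, the inputs would come from `navigator.gpu.requestAdapter()` and the adapter's `features` set.

```typescript
// Illustrative sketch of the device-tier check described in the table above.
type DeviceTier = "Recommended" | "Minimum" | "Not supported";

interface ProbeResult {
  hasWebGPU: boolean;    // navigator.gpu exists and an adapter was returned
  hasShaderF16: boolean; // adapter.features.has("shader-f16")
  memoryGB: number;      // approximate device/unified memory
  dedicatedGPU: boolean; // heuristic, e.g. derived from adapter info
}

function classifyTier(p: ProbeResult): DeviceTier {
  if (!p.hasWebGPU || !p.hasShaderF16) return "Not supported";
  if (p.memoryGB >= 16 && p.dedicatedGPU) return "Recommended";
  if (p.memoryGB >= 8) return "Minimum";
  return "Not supported";
}

// In the browser the probe itself might look like:
//   const adapter = await navigator.gpu?.requestAdapter();
//   const hasShaderF16 = adapter?.features.has("shader-f16") ?? false;
```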

How browser-native AI image generation works

Traditional AI image generators like Midjourney, DALL·E, and Adobe Firefly run on remote servers. You type a prompt, it travels across the internet to a data center, a GPU cluster produces an image, and the result is sent back to your browser. That is fast and convenient, but it means your prompts (and, in some cases, any reference images you upload) leave your device. It also means you cannot work offline, and you usually need a paid account to continue using the service.

Browser-native AI image generation is different. A modern browser exposes your GPU directly to JavaScript through an API called WebGPU. Using libraries built on top of ONNX Runtime Web or the Transformers.js runtime, a diffusion model — the same class of model that powers Stable Diffusion and FLUX — can be downloaded once from HuggingFace and then run entirely on your machine. Your prompt never leaves your browser. There is no account, no API key, no cost per image, and no usage quota. The only limits are your GPU's memory and speed.
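A rough sketch of that download-once flow, assuming the standard Hugging Face `resolve` URL pattern for raw files; the repo and file names below are placeholders, and the commented session call shows how a library like ONNX Runtime Web would select the WebGPU backend (a sketch, not a verified integration):

```typescript
// Hugging Face serves raw repo files at /{repo}/resolve/{revision}/{path}.
function hfFileUrl(repo: string, path: string, revision = "main"): string {
  return `https://huggingface.co/${repo}/resolve/${revision}/${path}`;
}

// Placeholder repo and file names for illustration only.
const unetUrl = hfFileUrl("example-org/sd-turbo-onnx", "unet/model.onnx");

// In the browser, a runtime such as onnxruntime-web can then execute the
// downloaded graph on the GPU via the WebGPU execution provider:
//   const session = await ort.InferenceSession.create(unetUrl, {
//     executionProviders: ["webgpu"],
//   });
// The browser caches the weights, so later visits skip the download.
```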

We ship the tool on OptiPix.art with full transparency: before you generate anything, we probe your GPU and tell you whether it can handle the model you've selected. We show every model we evaluated — including the ones that are too large for browsers or too experimental to depend on. We show the exact number of diffusion steps, the elapsed time, and the seed used for every image, so you can reproduce any result. And because nothing leaves your machine, you keep all the rights, all the privacy, and all the speed of a local workflow — without installing anything.
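The reproducibility claim rests on seeding: if the initial latent noise is derived deterministically from the displayed seed, the same seed plus the same prompt and step count yields the same image. A minimal sketch using mulberry32, a common tiny PRNG, with Gaussian noise via the Box-Muller transform (function names are ours, for illustration):

```typescript
// Deterministic PRNG: the same 32-bit seed always yields the same stream.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Seeded Gaussian latent noise via the Box-Muller transform.
function seededLatents(seed: number, n: number): Float32Array {
  const rand = mulberry32(seed);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i += 2) {
    const u = Math.max(rand(), 1e-12); // avoid log(0)
    const v = rand();
    const r = Math.sqrt(-2 * Math.log(u));
    out[i] = r * Math.cos(2 * Math.PI * v);
    if (i + 1 < n) out[i + 1] = r * Math.sin(2 * Math.PI * v);
  }
  return out;
}
```

Because the noise is a pure function of the seed, re-entering a displayed seed regenerates identical starting latents, which is what makes a result reproducible.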

Frequently asked questions

What models are available?

We list six in the transparency table. SD Turbo (recommended, ~2.4 GB) is the browser-native model you can run today. LCM Dreamshaper v7 (~800 MB), Stable Diffusion 1.5 (~1.7 GB), and SDXL Turbo (~2.1 GB) are tracked as coming soon. FLUX.1 schnell (~12 GB) is too large for browser deployment, and Google Imagen 3 is server-only and incompatible with our privacy-first approach.

Why does it need WebGPU?

Diffusion models do billions of matrix operations per image. Running that on CPU takes minutes per image. WebGPU is a modern browser API that gives JavaScript direct access to your GPU — the same hardware that renders games and video. Without WebGPU, generation is impractical. WebGPU is stable in Chrome 113+ and Edge 113+ on desktop.

Will it work on my phone?

No. Smartphones and tablets don't have the GPU memory needed (minimum ~4 GB VRAM for 512×512 generation). Mobile Chrome also doesn't expose WebGPU in most configurations. For mobile users we recommend our other image tools, all of which run on any device.

How long does generation take?

It depends entirely on your GPU. On an RTX 3060 or Apple M2 Pro: SD Turbo produces a 512×512 image in 2–5 seconds, LCM Dreamshaper in 5–15 seconds, and SD 1.5 in 30–60 seconds. On integrated graphics it's several minutes per image. We show the elapsed time live so you always know where you stand.

Is my prompt sent to a server?

No. Everything — the model, the prompt, the generated image — stays on your device. The model is downloaded once from HuggingFace's CDN on first use, then cached in your browser. After that, the page works offline. We have no server, no API, no logging, no account system. Your prompts are private.

☕ Love this tool? Support the developer.

OptiPix.art is 100% free — no ads, no limits, no data collection. Your support keeps every tool free for everyone.


🔒 Secure payment via Stripe · No account needed
