bizurk

Topical cluster

Local AI Image Generation

The pipeline behind a solo operator's image stack. A Mac writes bespoke prompts and reviews outputs, a Windows box with the dGPU runs the diffusion models, a feedback ledger turns hundreds of reviews into a prompt taxonomy. Notes on Flux Schnell vs Z-Image Turbo vs Qwen, swap-pressure safety, cluster-uniform visual systems, and the tooling that keeps the loop from eating the week.

4 posts · For: Creative technologists and solo operators generating production imagery locally

Go deeper

Z-Image Turbo vs Flux Schnell is not a drop-in swap

IMAGE AI·APR 24·11 MIN

Z-Image Turbo vs Flux Schnell: same prompt, different brain. Rich layered prompts that land in Flux collapse in Z-Image. Empirical evidence across rounds.

READ →

mflux ate my Mac: the safety wrapper I wish I'd written first

IMAGE AI·APR 24·14 MIN

Running mflux in a loop crashed my Mac mid-batch. Here is the mflux memory safety wrapper I wrote after: swap gate, cooldowns, timeouts, pkill escape.

READ →
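The wrapper's moving parts — swap gate, timeout, pkill escape, cooldown — can be sketched roughly like this. The thresholds and the `mflux-generate` process match are illustrative assumptions, not the post's actual values:

```python
import re
import subprocess
import time

# Illustrative thresholds, not the post's actual numbers.
SWAP_LIMIT_MB = 2048      # refuse to generate above this much swap in use
COOLDOWN_S = 30           # pause between generations to let memory settle
GEN_TIMEOUT_S = 600       # hard cap on a single generation

def parse_swap_used_mb(sysctl_output: str) -> float:
    """Pull the 'used' figure out of `sysctl -n vm.swapusage` output, e.g.
    'total = 2048.00M  used = 512.25M  free = 1535.75M  (encrypted)'."""
    m = re.search(r"used = ([\d.]+)M", sysctl_output)
    return float(m.group(1)) if m else 0.0

def swap_used_mb() -> float:
    """Current macOS swap usage in MB."""
    out = subprocess.run(["sysctl", "-n", "vm.swapusage"],
                         capture_output=True, text=True, check=True).stdout
    return parse_swap_used_mb(out)

def safe_generate(cmd: list[str]) -> bool:
    """Swap gate, then run with a timeout, then cooldown. Returns False
    if the round was skipped or killed."""
    if swap_used_mb() > SWAP_LIMIT_MB:
        return False                      # gate: skip this round entirely
    try:
        subprocess.run(cmd, timeout=GEN_TIMEOUT_S, check=True)
    except subprocess.TimeoutExpired:
        # escape hatch: kill anything matching the generator's process name
        subprocess.run(["pkill", "-f", "mflux-generate"])
        return False
    time.sleep(COOLDOWN_S)                # cooldown before the next image
    return True
```

The full write-up is in the post; this is only a sketch of the shape.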

The image feedback ledger that became a prompt taxonomy

IMAGE AI·APR 24·12 MIN

Across 21 review rounds and ~500 AI image generations, a small JSON ledger became an AI image prompt taxonomy. Here is what it caught and missed.

READ →
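A minimal version of that ledger loop might look like this — field names, filename, and the keep/kill verdict scheme are assumptions for illustration, not the post's actual schema:

```python
import json
from collections import Counter
from pathlib import Path

LEDGER = Path("image_ledger.jsonl")  # hypothetical filename

def log_review(image_id: str, verdict: str, tags: list[str]) -> None:
    """Append one review as a JSON line: image id, keep/kill verdict,
    and the failure tags noticed on review."""
    entry = {"image": image_id, "verdict": verdict, "tags": tags}
    with LEDGER.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def taxonomy(min_count: int = 3) -> dict[str, int]:
    """Tags that recur across rejected images become taxonomy candidates:
    recurring failure modes worth encoding back into the prompts."""
    counts: Counter[str] = Counter()
    for line in LEDGER.read_text().splitlines():
        entry = json.loads(line)
        if entry["verdict"] == "kill":
            counts.update(entry["tags"])
    return {tag: n for tag, n in counts.items() if n >= min_count}
```

The point of the JSONL shape is that hundreds of one-line appends are cheap during review, and the taxonomy falls out of a single pass afterward.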

More on this cluster

Why this matters.

Running diffusion models locally became a real option around mid-2025 once Flux Schnell and Z-Image Turbo landed with open weights. By 2026 the math is straightforward for a creative-tech operator: a one-time GPU investment beats a monthly cloud-API bill within the first quarter, and the privacy and iteration speed of a local pipeline put it in a different category of working entirely. The catch is the surrounding tooling. Prompt engineering, output review, ledgering, taxonomy work, and the orchestration between the prompt machine and the GPU box make or break the actual velocity.
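The break-even claim is easy to sanity-check with toy numbers. Every figure below is a placeholder assumption, not a quote from any provider:

```python
# Hypothetical figures: a used dGPU box vs. a per-image cloud API.
gpu_cost = 1500.00          # one-time hardware spend, USD (assumed)
cloud_per_image = 0.04      # assumed API price per generation, USD
images_per_month = 15_000   # a heavy solo iteration loop (assumed)

monthly_cloud_bill = cloud_per_image * images_per_month
breakeven_months = gpu_cost / monthly_cloud_bill

print(f"cloud bill: ${monthly_cloud_bill:.0f}/mo, "
      f"break-even in {breakeven_months:.1f} months")
```

At those placeholder rates the hardware pays for itself in two and a half months — inside the first quarter, as claimed. Swap in your own volume and API pricing; the shape of the math is the point.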

This cluster is the journal of building that pipeline as a one-person studio. Mac as the prompt and review surface. Windows or Linux box with a dGPU as the diffusion runtime. A feedback ledger that turns hundreds of reviews into a documented prompt taxonomy. Decisions between Flux Schnell, Z-Image Turbo, and Qwen for different use cases. Swap-pressure safety so a model upgrade does not silently regress the visual system. Cluster-uniform image styles for editorial use.

If you generate production imagery and the cloud-API bill is creeping past your iteration budget, start with the local-pipeline hub piece before the next render queue.

Put this to work

Running Flux, Z-Image, and Qwen locally without the cloud-API bill.

> See the Operator's Stack

Let’s fix some problems.

Three short steps below. I read all of these myself; it’s just me on the inbox. You’ll usually get a real reply within a day, sometimes the same day if I’m at the desk.

Or email directly: hello@michaeldishmon.com

Step 1 of 3: What you need


What’s slowing you down right now?

Pick anything that applies. Multiple is normal.

$ cat lead.json | mail -s 'new signal' michael