Flux for UX: a new design paradigm?

I think we might be on the cusp of a new UX design paradigm, powered by the latest gen AI image models.

Three days ago Flux was released. It is a new text-to-image model, similar to Midjourney, but with two incredible upgrades: 1) amazing “prompt adherence”, and 2) setting a new benchmark in text generation.

What does this mean? Look at the experiment images below to find out!


You can now use text-to-image models to design complex layouts that include text, such as web pages and mobile apps. You can see this demonstrated in the images below. Then, by modifying the prompt, I was able to iterate on the design using just text instructions, as in the sketch below.
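To make the workflow concrete, here is a minimal sketch of prompt-driven iteration using the Replicate Python client to call a hosted Flux model. The model slug, input parameters, and prompts are illustrative assumptions; check the provider’s documentation for the exact API.

```python
# pip install replicate, and set REPLICATE_API_TOKEN in your environment.
import replicate

# Assumed model slug; check Replicate for the current Flux listings.
MODEL = "black-forest-labs/flux-pro"

# Describe the layout you want, including the on-screen text.
base_prompt = (
    "Landing page for a boutique coffee roastery: hero photo of coffee beans, "
    "headline 'Freshly Roasted Daily', three pricing cards below the fold, "
    "warm earthy colour palette, clean modern typography"
)

# First draft of the design.
draft = replicate.run(MODEL, input={"prompt": base_prompt})

# Iterate purely with text: tweak the prompt and regenerate.
revised_prompt = base_prompt + ", dark mode, headline centred above the hero image"
revision = replicate.run(MODEL, input={"prompt": revised_prompt})

# Each call returns the generated image (typically a URL or file output).
print(draft)
print(revision)
```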

No, the results right now are not perfect, but these models keep improving all the time. There is no technical reason to think that text generation and prompt adherence can’t be perfect within 12 months.

Previously, image generation models (Stable Diffusion, Midjourney, DALL-E, etc.) were terrible at rendering text, and they could not understand and interpret long, complicated prompts. With Flux, both of these are now possible.

I ran some tests of Leonardo.AI’s new Phoenix model (released in June) vs Midjourney 6.1 (released last week) vs Flux.1 Pro (released on Friday), and Flux came out on top, with Leonardo a close second. The experiment below uses this new Flux model throughout. I posted the results of my tests on LinkedIn.

What could the future of UX design look like? For professional designers who know how to use Figma, Photoshop, etc., this might not apply, but for non-professionals this could be an alternative to template (Canva) and no-code / low-code (Wix, Squarespace) products. Or perhaps it could be incorporated into those products.

Right now, the most obvious use is brainstorming: quick mock-ups and revisions of concept designs, which you would then take into one of the existing tools to create the final designs.

It would be easy to have this accept voice instructions and run on your mobile. If you need to redesign your website on the tram on the way to work, you soon can 🙂

I expect that someone will create an awesome new UX design ideation app with diffusion models at the core. If anyone is working on this please let me know!
