How Are You Leveraging LLMs for Better UI/UX Design?

I’m struggling to achieve a satisfying level of detail in my UI design work using current LLM tools. I’ve noticed that most coding-oriented LLMs don’t excel in reasoning from a UI/UX perspective.

Recently, I came across discussions suggesting that using LLM agents specifically dedicated to UI and UX might significantly improve results. I also saw something on the roadmap related to agents, but I’m not sure if the plan is to specialize them by domain in the future.

In the meantime, has anyone managed to get good results in terms of UI/UX design? If so, I’d love to hear about your approach, tools, or tips!

Thanks in advance!

This is something I’m interested in, and I ‘think’ I’ve managed to get the most out of Bolt in terms of UX/UI. Being really descriptive and sharing screenshots of colour schemes seems to help. I’ve got several projects on Netlify links I can share as examples. I do notice that a lot of one-shot projects I see end up looking the same, so I think pushing the UX/UI is key to creating something unique.

There’s a neat tool called screenshot-to-code. You can take a screenshot of something you like and generate an artifact (a single-page, monolithic HTML blob) from it. If you want to collage things together, use Figma. Get on huggingface.co and you can access image-generation playgrounds and ask them to generate UI mockups. big-AGI is a framework for, basically… well, you’ll see.
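
If you’d rather wire up the screenshot-to-code idea yourself, here’s a minimal sketch using the OpenAI Python SDK with a vision-capable model. The model name and the prompt are my own assumptions, not what the screenshot-to-code project actually uses, so treat it as an illustration only:

```python
# Minimal screenshot-to-HTML sketch using the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in the environment, and the model
# name below is a placeholder for whatever vision-capable model you use.
import base64
from openai import OpenAI

client = OpenAI()

def screenshot_to_html(image_path: str) -> str:
    # Encode the screenshot as a base64 data URL so it can be sent inline.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in your own vision model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Recreate this UI as a single self-contained HTML "
                         "file with inline CSS. Return only the HTML."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(screenshot_to_html("landing_page.png"))
```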

Then there’s this project, Cofounder. Cofounder is special. I’d pay for API credits on Anthropic and OpenAI, just the minimum, and run it once. I was hit with about $30 in credit usage in roughly 45 minutes because I’m an idiot and didn’t have their (composer) API key entered correctly, and the Puppeteer step relies on that key to stop itself. Thanks, Google Sheets auto-capitalization :sweat_smile: But it takes a very comprehensive approach to generating both the UX perspective and the backend.
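
If you try it, it may be worth wrapping your API calls in a hard spend cap so a misconfigured run stops itself instead of burning credits like mine did. Here’s a minimal sketch around the Anthropic Python SDK; the model name, per-token prices, and budget are all assumptions you’d replace with your own:

```python
# Minimal spend-cap sketch around the Anthropic Python SDK.
# Assumptions: ANTHROPIC_API_KEY is set, the model name is a placeholder,
# and PRICE_IN / PRICE_OUT are illustrative per-token prices in USD.
import anthropic

PRICE_IN = 3.00 / 1_000_000    # assumed $/input token
PRICE_OUT = 15.00 / 1_000_000  # assumed $/output token
BUDGET_USD = 5.00              # hard cap for the whole run

client = anthropic.Anthropic()
spent = 0.0

def guarded_call(prompt: str) -> str:
    global spent
    if spent >= BUDGET_USD:
        raise RuntimeError(f"Budget of ${BUDGET_USD:.2f} exhausted")
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: use your own model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Track actual usage reported by the API and convert to dollars.
    usage = response.usage
    spent += usage.input_tokens * PRICE_IN + usage.output_tokens * PRICE_OUT
    return response.content[0].text
```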

You know…use that for whatever :sunglasses: