
Designers teach AI models to generate better UI in new Apple study


Apple continues to explore how generative AI can improve app development pipelines. Here’s what they’re looking at.

A bit of background

A few months ago, a team of Apple researchers published an interesting study on training AI to generate functional UI code.

The study focused less on design quality and more on making sure the AI-generated code actually compiled and roughly matched the user's prompt in terms of what the interface should do and look like.

The result was UICoder, a family of open-source models.
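That training loop leaned on automated checks rather than human labels: generate code, verify it, and keep only what passes. As a rough illustration of the compile-and-filter idea, here is a minimal Python sketch; the generate_swiftui() call is a hypothetical stand-in for the model, and it assumes swiftc is available on the PATH (neither detail is from the paper):

```python
import subprocess
import tempfile
from pathlib import Path

def compiles(swift_source: str) -> bool:
    """Return True if the Swift source parses and type-checks with swiftc."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "Generated.swift"
        src.write_text(swift_source)
        # -typecheck skips producing a binary; we only care about validity.
        result = subprocess.run(
            ["swiftc", "-typecheck", str(src)],
            capture_output=True,
        )
        return result.returncode == 0

def filter_samples(prompts, generate_swiftui):
    """Keep only (prompt, code) pairs whose code actually compiles.

    generate_swiftui is a placeholder for the model being trained;
    the surviving pairs would be fed back in as fine-tuning data.
    """
    kept = []
    for prompt in prompts:
        code = generate_swiftui(prompt)
        if compiles(code):
            kept.append((prompt, code))
    return kept
```

The appeal of this kind of loop is that the compiler acts as a free, always-available judge, which is exactly what the new study argues isn't enough on its own for design quality.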

The new study

Now, part of the team responsible for UICoder has released a new paper titled “Improving User Interface Generation Models from Designer Feedback.”

In it, the researchers argue that existing Reinforcement Learning from Human Feedback (RLHF) methods aren’t well suited to training LLMs to reliably generate well-designed UIs, since they “are not well-aligned with designers’ workflows and ignore the rich rationale used to critique and improve UI designs.”

To tackle this problem, they proposed a different route. They had professional designers directly critique and improve model-generated UIs using comments, sketches, and even hands-on edits, then converted those before-and-after changes into data used to fine-tune the model.

This allowed them to train a reward model on concrete design improvements, effectively teaching the UI generator to prefer layouts and components that better reflected real-world design judgment.
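In broad strokes, this kind of setup can be framed as pairwise preference learning: the designer-improved version of a UI should score higher than the model's original. Here's a toy PyTorch sketch of that idea; the scoring network, the placeholder features, and the Bradley-Terry loss are illustrative assumptions, not the paper's actual pipeline:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a UI representation; higher means 'better designed'."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def pairwise_loss(reward_before, reward_after):
    # Bradley-Terry objective: the designer-edited "after" UI
    # should score higher than the original "before" UI.
    return -torch.nn.functional.logsigmoid(reward_after - reward_before).mean()

# Placeholder features: in practice these would be embeddings of the
# generated UI code or rendered screenshots, before and after the edit.
dim = 64
model = RewardModel(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

before = torch.randn(32, dim)                 # model-generated UIs
after = before + 0.1 * torch.randn(32, dim)   # designer-improved versions

for step in range(100):
    loss = pairwise_loss(model(before), model(after))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, a reward model like this can steer the UI generator toward outputs that resemble the designer-edited side of each pair, which is the "real-world design judgment" the researchers set out to capture.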
