Three LLMs, One Dashboard Card: A Reality Check on AI-Generated UI

A comparison of Vercel v0, Figma Make, and Framer Workshop shows that LLMs can crank out functional wireframes fast, but they still need human designers for polish, nuance, and real-world usability.

Jason Spidle

Jul 28, 2025

Generative AI is barging into interface design, and a recent live-streamed “AI Design Challenge” pitted three emerging UI generators against the same deceptively simple brief: build a dashboard card that can flip between chart and table views, let users add rich-text annotations, and show per-bar tooltips plus a total N count. Two follow-up prompts were allowed for each tool. What happened next was equal parts impressive and humbling—for the models and for anyone betting on a designer-less future.
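
To make the brief concrete, here is a minimal sketch of the kind of component the tools were asked to produce. It is not any tool's actual output: the component name, data shape, and plain-button toggle are assumptions, and the chart rendering, per-bar tooltips, and rich-text editor are left as stubs.

```tsx
import { useMemo, useState } from "react";

// Hypothetical data shape; the brief did not specify a schema.
type BarDatum = { label: string; value: number };

export function DashboardCard({ bars }: { bars: BarDatum[] }) {
  const [view, setView] = useState<"chart" | "table">("chart");
  const [annotations, setAnnotations] = useState<string[]>([]);

  // The "total N" from the brief: the sum of every bar's value.
  const totalN = useMemo(() => bars.reduce((sum, b) => sum + b.value, 0), [bars]);

  return (
    <section aria-label="Dashboard card">
      <button onClick={() => setView(view === "chart" ? "table" : "chart")}>
        Switch to {view === "chart" ? "table" : "chart"} view
      </button>

      {view === "chart" ? (
        // Chart (and per-bar tooltips) omitted; any charting library could slot in here.
        <div role="img" aria-label={`Bar chart of ${bars.length} bars`} />
      ) : (
        <table>
          <tbody>
            {bars.map((b) => (
              <tr key={b.label}>
                <td>{b.label}</td>
                <td>{b.value}</td>
              </tr>
            ))}
          </tbody>
        </table>
      )}

      <p>Total N: {totalN}</p>

      {/* Annotations are plain strings here; the brief asked for rich text. */}
      <ul>
        {annotations.map((note, i) => (
          <li key={i}>{note}</li>
        ))}
      </ul>
      <button onClick={() => setAnnotations([...annotations, "New note"])}>
        Add Annotation
      </button>
    </section>
  );
}
```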

Vercel v0

Mature Code, Rocky Start

The oldest entrant, Vercel v0, immediately reminded the audience that “oldest” doesn’t mean infallible. Its first try broke with a familiar React error (“Tabs content must be within Tabs”) and simply printed raw JSON instead of a chart. One follow-up prompt to “fix the error” got the bars rendering and unlocked basic functionality: a Chart/Table toggle, an “Add Annotation” modal, and color-coded bars. Still, v0 never surfaced individual bar tooltips or the requested total N. Most of the allotted prompts were spent just getting the component to appear, evidence that even a year’s head start doesn’t guarantee reliability.
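
The “Tabs content must be within Tabs” message is the kind of error Radix-based tab primitives raise when a TabsContent panel is rendered outside its Tabs root, and v0 typically scaffolds shadcn/ui components that wrap exactly those primitives. The stream never showed v0's broken markup, so the snippet below is a hedged reconstruction of what the corrected nesting usually looks like, not its actual code.

```tsx
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs";

// Hypothetical reconstruction: every TabsContent panel must sit inside the Tabs root,
// otherwise the underlying context lookup fails with a "must be within Tabs" error.
export function ChartTableTabs() {
  return (
    <Tabs defaultValue="chart">
      <TabsList>
        <TabsTrigger value="chart">Chart</TabsTrigger>
        <TabsTrigger value="table">Table</TabsTrigger>
      </TabsList>
      <TabsContent value="chart">{/* chart goes here */}</TabsContent>
      <TabsContent value="table">{/* table goes here */}</TabsContent>
    </Tabs>
  );
}
```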

Figma Make

Prompt Fidelity Wins the Day

Figma’s brand-new generator also stumbled out of the gate, producing compile errors and asking, almost sheepishly, “Fix these errors?” Once patched, however, it delivered the most on-brief output of the event. The total N sat neatly beneath the chart, the modal kept the data visible while notes were typed, and a proper rich-text toolbar replaced markdown shortcuts. A consolidated tooltip appeared: useful, though not exactly what the brief demanded. A second prompt requesting per-segment tooltips half-worked (hovering any bar surfaced only the red segment’s data), showing that Figma Make still needs hand-holding. Even so, it was the only model that addressed every checklist item without cosmetic chaos.
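
The stream didn't reveal which charting library Figma Make reached for, but the half-working tooltip is a recognizable failure mode: a custom tooltip that renders one hard-coded series instead of everything in the hovered payload. Purely as an illustration (assuming a Recharts-style stacked bar, with invented data, names, and colors), a per-segment tooltip looks roughly like this:

```tsx
import { Bar, BarChart, Tooltip, TooltipProps, XAxis } from "recharts";

// Invented sample data: each bar stacks two segments.
const data = [
  { month: "Jan", approved: 12, flagged: 3 },
  { month: "Feb", approved: 9, flagged: 5 },
];

// A per-segment tooltip iterates over the hovered payload rather than reading
// a single fixed series (the "only the red segment" behavior described above).
function SegmentTooltip({ active, payload, label }: TooltipProps<number, string>) {
  if (!active || !payload?.length) return null;
  return (
    <div>
      <strong>{label}</strong>
      {payload.map((entry) => (
        <div key={String(entry.dataKey)}>
          {entry.name}: {entry.value}
        </div>
      ))}
    </div>
  );
}

export function StackedBars() {
  return (
    <BarChart width={320} height={200} data={data}>
      <XAxis dataKey="month" />
      <Tooltip content={<SegmentTooltip />} />
      <Bar dataKey="approved" stackId="a" fill="#4f46e5" />
      <Bar dataKey="flagged" stackId="a" fill="#dc2626" />
    </BarChart>
  );
}
```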

Framer Workshop

Configurable, but Cramped

Framer’s Workshop took a component-first approach, spitting out a self-contained widget with sidebar controls for bar count and labels. Novelty points aside, its layout was visibly broken: axis labels overlapped the bars, the annotation stack pushed content off the card, and enlarging the component to 600 × 480 px via prompt merely stretched the mess. One elegant touch did stand out: the “Add Annotation” button sat exactly where the annotation would later appear, a small but thoughtful interaction choice. Unfortunately, two extra prompts couldn’t rescue the visual clutter.
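
Framer code components typically surface sidebar controls through addPropertyControls, so Workshop's “bar count” and “labels” knobs probably map to something like the registration below. This is a guess at the shape, not Workshop's generated code; the component name, styling, and defaults are invented.

```tsx
import { addPropertyControls, ControlType } from "framer";

type BarCardProps = {
  barCount?: number;
  labels?: string[];
};

// Hypothetical stand-in for the generated widget.
export default function BarCard({ barCount = 4, labels = ["A", "B", "C", "D"] }: BarCardProps) {
  return (
    <div style={{ display: "flex", gap: 8, alignItems: "flex-end", height: 160 }}>
      {Array.from({ length: barCount }, (_, i) => (
        <div key={i} style={{ flex: 1 }}>
          <div style={{ height: 40 + i * 20, background: "#4f46e5" }} />
          <span>{labels[i] ?? `Bar ${i + 1}`}</span>
        </div>
      ))}
    </div>
  );
}

// Registering property controls is what exposes "bar count" and "labels"
// as editable fields in Framer's sidebar when the component is selected.
addPropertyControls(BarCard, {
  barCount: { type: ControlType.Number, defaultValue: 4, min: 1, max: 12, step: 1 },
  labels: {
    type: ControlType.Array,
    control: { type: ControlType.String },
    defaultValue: ["A", "B", "C", "D"],
  },
});
```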

The Human Element

Where LLMs Still Need Us

Across all three tools, the same pattern emerged. The models handled scaffolding (HTML, state toggles, basic modals) astonishingly fast, effectively generating interactive wireframes. But they failed at polish and nuance: hierarchy, spacing, tooltip logic, brand voice, and accessibility weren’t just off; they were unconsidered. Human designers bring context (“Why should annotations stay visible while users type?”), empathy (“Can every color-blind user read these bars?”), and creative intent (turning a raw chart into a narrative). In the post-demo comparison, a manually refined version of the card showcased these subtleties: centralized controls, side-by-side editing, and on-hover notes appearing exactly where they were written, elements no model produced unassisted.

The challenge underlined a clear takeaway for the design community: today’s UI generators are remarkable accelerators for early-stage wireframes, yet they’re miles away from delivering production-ready experiences. Designers who learn to orchestrate these tools, using machine speed to rough in flows and human judgment to perfect them, will outpace both pure manual work and pure automation. The future isn’t AI versus designers; it’s AI plus designers, each doing what they do best.

The Full Video

Disclaimer: This article was at least partially generated by a large language model, but every fact, example, and opinion was verified and edited by Jason Spidle.
