LLMs have enabled us to solve a new class of problems with more flexibility than ever, but as language models they are inherently text-powered, which has led to AI-based UI being incredibly text-heavy.
As someone who has been creating experiences with web technology my entire life, I’m not satisfied with so much UI being replaced with text. At Vetted, we have been building a shopping research assistant, and shopping is an inherently visual, UI-heavy space. Products need to display images, and the UI needs to present structured data so users can navigate between products and compare them.
Over the years we have experimented with new ways to incorporate rich UI components into our LLM responses. We’ve done this by supplementing our text payloads with structured product data that can be rendered as product cards and research components, but we weren’t happy with these elements being split off from the core text answer. So I set out to find a way for the LLM output to incorporate UI components directly into our markdown.
MDX Enhanced Dynamic Markdown
Check out react-markdown-with-mdx. It’s an HOC (Higher-Order Component) wrapper around react-markdown that extends its markdown processing to support JSX component tags. You register a whitelist of React components with it, ensuring only a safe subset of JSX component tags can be rendered. The library also ships an optional validation helper, mdxComponent, which pairs each React component with a zod schema that validates its JSX attributes.
This enables you to prompt your LLM to generate JSX tags that can be consumed safely, with a clean and easy integration. Here’s what it looks like in action in our prototype UI at Vetted:
The code looks something like this:
```tsx
import React from "react"
import ReactMarkdown from "react-markdown"
import { withMdx, mdxComponent, type MdxComponents } from "react-markdown-with-mdx"
import { z } from "zod"

const MdxReactMarkdown = withMdx(ReactMarkdown)

interface MdxMarkdownRendererProps {
  markdown: string
}

const MdxMarkdownRenderer: React.FC<MdxMarkdownRendererProps> = ({
  markdown,
}) => {
  return (
    <MdxReactMarkdown components={components}>
      {markdown}
    </MdxReactMarkdown>
  )
}

// MdxCardCarousel, MdxEditorialCard and MdxProductCard are our own React
// components (imports omitted); each tag is whitelisted together with a zod
// schema describing the attributes the LLM is allowed to emit.
const components: MdxComponents = {
  "card-carousel": mdxComponent(
    MdxCardCarousel,
    z.object({ children: z.any() })
  ),
  "editorial-card": mdxComponent(
    MdxEditorialCard,
    z.object({
      id: z.string(),
      award: z.string().optional(),
      rating: z.string().optional(),
      ranking: z.string().optional(),
      children: z.any(),
    })
  ),
  "product-card": mdxComponent(
    MdxProductCard,
    z.object({ name: z.string() })
  ),
}
```
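The response payload consumed by this renderer is just markdown with those whitelisted tags mixed in. The snippet below is an illustrative stand-in for model output (the tag names match the components registered above, but the prose and product names are made up), not our actual prompt or data:

```tsx
// Hypothetical example of LLM output: plain markdown interleaved with the
// whitelisted component tags registered above.
const llmMarkdown = `
Here are two strong picks in this category:

<card-carousel>
  <product-card name="Example Travel Tripod" />
  <product-card name="Example Carbon Tripod" />
</card-carousel>

Both fold down small enough for carry-on luggage.
`

// The response is then rendered like any other markdown string.
export const Example = () => <MdxMarkdownRenderer markdown={llmMarkdown} />
```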
Unlike projects such as MCP-UI, these components aren’t loaded externally via an iframe that requires window message passing to integrate, and they aren’t relegated to a separate message apart from the main generated text response. They are processed into framework-native React components that are colocated and embedded directly within the LLM-generated text. It essentially enables HTML-component-like behavior in React and other JSX frameworks, letting you extend markdown with any UI component you can dream of!
To power the response streaming shown in the video, I also needed to balance the HTML tag tree and truncate incomplete tags, so that the MDX parser is always handed valid HTML and partial tag tokens are held back until they’re complete. For that I created html-balancer-stream, which can auto-close and balance unclosed tags, or strip them out until they’re complete, enabling true streaming support. It provides both streaming and non-streaming APIs.
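To give a feel for what that balancing step does, here is a minimal, non-streaming sketch of the idea, assuming well-nested tags and ignoring void elements. It is only an illustration of the technique, not html-balancer-stream’s actual API:

```ts
// Illustrative sketch only — not the html-balancer-stream API.
// Given a partial chunk of HTML/JSX, drop any half-emitted tag token at the
// end and auto-close whatever is still open so the snapshot parses cleanly.
const TAG_RE = /<(\/?)([a-zA-Z][\w-]*)((?:"[^"]*"|'[^']*'|[^"'>])*?)(\/?)>/g

function balancePartialHtml(partial: string): string {
  // Hold back a trailing incomplete tag token such as `<product-ca`.
  const lastOpen = partial.lastIndexOf("<")
  if (lastOpen !== -1 && !partial.slice(lastOpen).includes(">")) {
    partial = partial.slice(0, lastOpen)
  }

  // Track tags that have been opened but not yet closed.
  const open: string[] = []
  for (const match of partial.matchAll(TAG_RE)) {
    const [, closing, name, , selfClosing] = match
    if (selfClosing) continue
    if (closing) {
      // Assumes well-nested input: a closing tag pops the matching open tag.
      if (open[open.length - 1] === name) open.pop()
    } else {
      open.push(name)
    }
  }

  // Auto-close anything still open, innermost first.
  return partial + open.reverse().map((name) => `</${name}>`).join("")
}

// e.g. balancePartialHtml('<card-carousel><product-card name="Tri')
//   -> '<card-carousel></card-carousel>'
```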