Prototyping the future of websites with RAG and generative UI
... and AI maps to learn anything!
Join in with what I’m building and learning in creative AI ⚡ in this post:
🚀 SmartNav: Browse websites with RAG + generative UI
💡 How I used ChatGPT to prep and succeed in coding interviews
🌀 How might we map and navigate unknowns with LLMs?
These past weeks were busy wrapping up my work with Upstatement and also looking into what I’ll jump into next. That second part was quite a job in itself: a mixture of fulfilling RFPs for art installations and going through the interview process at multiple companies (coding interviews!). Both were new challenges and pretty awesome.
I’m excited to have some time now to work on personal projects and inquiries before my next gig. I also have some more time for exploratory conversations. If you’re interested in working together on creative AI projects — I’m currently open for projects, consulting, and grabbing coffee.
🚀 SmartNav: Using LLMs to improve navigation and browsing on websites
This past month I completed my culminating project with Upstatement. I’m happy with the problems we explored and where we arrived at the end. Here I’ll share a high level recap of what we built, and in the next issue you’ll find more detailed documentation of the work.
What excites me most about working with Upstatement is their craft and skill in the design and development of websites. They build websites that are creative, tackle difficult UI challenges, and generally feel high-quality. In the space of AI-powered products, there is enormous opportunity for this level of design expertise. What do AI-powered features mean for the future of websites?
When I arrived at Upstatement, I facilitated brainstorms and conversations across the company. I learned about the company’s practice areas, clients, and the problems to solve for them. Together with leadership, we determined the highest-impact opportunities. From there we homed in on AI-powered navigation for the top-tier higher ed niche as our focus.
The opportunity
A key challenge on websites is that people struggle to find what they’re looking for. This is crucial to address, especially for site owners who have several information-dense websites across their brand, where browsing and navigation can become complex. Visitors have to work harder to figure out how the sites work, and valuable information may be overlooked.
What if instead of browsing and searching across a site, there was a way to gather all of the relevant information and bring it to you? Sites are no longer static documents; they’re dynamic and always updating. What if they could be further generated based on what you’re looking for?
Our goal was to create a prototype to implement these ideas in a real context. More specifically, on MIT pages relevant to new and prospective students. After a design exploration, we settled on two key features: (1) given a query, search across key documents to synthesize a response, and (2) dynamically generate an interface to render the information. These content UIs are called SmartBlocks.
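To make that concrete, here’s a rough sketch, in TypeScript, of what a generated block spec could look like. The field names and layout values are illustrative assumptions, not the exact schema we shipped.

```typescript
// Hypothetical shape of a generated content block ("SmartBlock").
// The layout names and fields are illustrative, not the production schema.
type SmartBlockLayout = "hero" | "card-grid" | "faq" | "stat-row";

interface SmartBlock {
  layout: SmartBlockLayout; // which UI component renders this block
  heading: string;          // short title synthesized from the retrieved pages
  body: string;             // answer text grounded in the retrieved content
  sources: string[];        // URLs of the pages the content came from
}

// A generated page is just an ordered list of blocks the frontend maps over.
type SmartPage = SmartBlock[];
```

The idea in this sketch is that the model only produces structured data; the frontend picks a component per layout and handles the actual rendering.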
What we built
I connected key MIT pages across domains to an LLM via retrieval-augmented generation. Given queries like “What do I do if I get locked out?” or “Where can I get food?” or “What’s the makeup of the student body?”, the application generates blocks of content, each with a layout suited to what it contains, on a webpage that becomes an individual’s personalized resource.
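For a sense of how the retrieve-then-synthesize step fits together, here’s a minimal sketch assuming the OpenAI Node SDK and a small in-memory index of pre-embedded page chunks. The chunk shape, model choices, and top-5 cutoff are all assumptions for the example; the real indexing details are coming in the next issue.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// A pre-indexed page chunk: text plus its stored embedding.
interface Chunk {
  url: string;
  text: string;
  embedding: number[];
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function answerQuery(query: string, index: Chunk[]): Promise<string> {
  // 1. Embed the query.
  const embedRes = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: query,
  });
  const queryEmbedding = embedRes.data[0].embedding;

  // 2. Retrieve the most relevant chunks from the indexed pages.
  const topChunks = [...index]
    .sort(
      (a, b) =>
        cosine(b.embedding, queryEmbedding) - cosine(a.embedding, queryEmbedding)
    )
    .slice(0, 5);

  // 3. Ask the model to synthesize an answer grounded in those chunks.
  const context = topChunks.map((c) => `[${c.url}]\n${c.text}`).join("\n\n");
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: "Answer using only the provided context, and cite source URLs.",
      },
      { role: "user", content: `Context:\n${context}\n\nQuestion: ${query}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```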
When I found information more easily with this prototype than through existing search, I knew we were on to something. In some cases this would even help surface student blog posts that would otherwise be hidden.
At the beginning I wasn’t entirely sure if the generative UI would even work, but this was an opportunity to explore something new and compelling with AI and UI. I iterated on prompts, figuring out what information was useful to share and what was best to leave open-ended for the LLM. My target was designs that felt thoughtful and opinionated, with enough variety to make the generated page enjoyable to browse.
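For a flavor of that prompt engineering, here’s an illustrative (not verbatim) system prompt. The trade-off it shows: constrain the output to a known set of layouts the frontend can render, but leave the composition and emphasis open-ended for the model.

```typescript
// Illustrative system prompt for the generative UI step. The layout names
// match the hypothetical SmartBlock schema sketched above; the wording is
// not the exact prompt from the prototype.
const GENERATIVE_UI_PROMPT = `
You are generating a page of content blocks for a prospective MIT student.
Return a JSON array of blocks. Each block has:
  - "layout": one of "hero", "card-grid", "faq", "stat-row"
  - "heading": a short, specific title
  - "body": one to three sentences grounded in the provided context
  - "sources": the URLs the content came from
Lead with the block that most directly answers the question.
Vary the layouts so the page is enjoyable to browse.
Do not invent information that is not in the context.
`;
```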
Getting generative UI working and hooking it up to RAG was super gratifying. I’m proud of the prototype we made. In the next issue I’ll go into more detail on the website indexing and RAG, as well as prompt engineering for generative UI.
💡 Prepping for technical interviews with ChatGPT
This past month I had my first technical interviews involving live coding. One was LeetCode-style, diving into algorithms and data structures, and the second was live React dev. I’ve heard so many horror stories of technical interviews, so I was both nervous and excited.
I often use GPT-4 for programming, so I’m confident in its knowledge of technical programming topics. To study, I created a custom GPT, JSCoach GPT, to build a syllabus for me and then run through each part step by step. The result was a mixture of overviews of concepts and algorithms, with example questions. As I went through it, I’d ask clarifying questions to go deeper where I needed to, or find YouTube videos covering the concept in depth. ChatGPT gave me a map to learn from: an overview, and then a path through it.
I asked it for a “JS cheatsheet” which I could review every few days, leveraging spaced-repetition. I’d go through the concepts in-depth in advance, and use the overview to familiarize myself. I also used it to refresh myself on common React patterns. In the end, I felt comfortable and prepared during the coding interviews.
What I enjoyed about this process was how active it felt. Sometimes I get restless going through tutorials or book chapters. In this case, if a response was too long-winded, I’d ask GPT to break it down step by step so I could give feedback before moving on to the next concept. This makes the learning process far more engaging. And it made me think about how I could expand my awareness of other topics too, leading to the following inquiry on mapping unknowns.
🌀 wonder zone 🌀 Mapping unknowns
Around this time last year I became obsessed with finding problems to work on as I built LLM prototypes. I found that the challenge isn’t the technology; it’s problem discovery. To that end, I wanted to build bridges across industries as a way to identify valuable problems:
The biggest value will come from collaborating with folks outside the software space who are AI-curious. There are problems out there that these tools are well suited for, but the connections haven’t been made yet.
How do you get outside your bubble and grow an awareness of the wide variety of industries and problems out there? You can do market research and read a ton of articles, finding common threads. Curious how you’ve approached this! If you’ve started a company, what was the path by which you found the problem to focus on? Reply with ideas, stories, musings, all of the above.
I followed this up with a recap after a trip to SF where I threw an AI show & tell. At the event, I asked founders: How did you find the problem you’re working on now? How did you know it was valuable? In short, folks found their problems through (1) Consulting and talking to customers, (2) Joining a company with someone who knows the space, and (3) Just building and launching. You can read more in “Finding people who are self directed and curious and passionate.”
Now, a year later, I have more experience building LLM-powered apps, and specifically building UIs beyond chat. I’ve used LLMs as a conversation partner for exploring ideas, which has proved useful since they carry a rough picture of a huge amount of information on the web, including Wikipedia, through their training. However, because of hallucinations, these conversations are musings rather than truth. I use them as directions to explore, not sources of truth.
Through chat, I’ve been able to build a rough idea of a space and then get more detailed — like in the programming tutorial I shared earlier. Sometimes I’ll synthesize these ideas in a doc which I’ll use for further research. After working on a few projects using embeddings to visualize concepts and relationships between them, I think there’s an opportunity here to create a map of a given research area. This way I can retain information I’ve found, zoom in on particular areas to go into more detail, or prompt for new areas of the map I haven’t explored. I’m imagining spatially arranged nodes, perhaps nested in granularity, where the LLM responses are saved and synthesized.
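As a loose sketch of what a node on such a map might look like, and how the LLM could expand it, here’s one possible shape, again using the OpenAI SDK. The node structure and prompt are hypothetical; they’re just meant to show the save-and-expand loop, with the model’s output treated as a direction to explore rather than a fact.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// One node on the concept map: a topic, the saved LLM notes about it,
// and finer-grained child topics revealed when you zoom in.
interface ConceptNode {
  label: string;
  notes: string;
  children: ConceptNode[];
}

// Ask the model to propose subtopics for a node. The result is saved onto
// the map so it can be revisited, edited, or expanded further later.
async function expandNode(node: ConceptNode): Promise<ConceptNode> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "user",
        content:
          `List five subtopics of "${node.label}" as a JSON array of ` +
          `{"label": string, "notes": string} objects, one sentence of notes each.`,
      },
    ],
  });
  // Assumes the model returns valid JSON; a real version would validate this.
  const subtopics: { label: string; notes: string }[] = JSON.parse(
    completion.choices[0].message.content ?? "[]"
  );
  return {
    ...node,
    children: subtopics.map((s) => ({ ...s, children: [] })),
  };
}
```

Embeddings of each node’s label and notes could then place related topics near each other on the canvas, giving the spatial arrangement described above.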
I’d like to apply this UI concept map to my problem discovery exploration. Based on my past experience, I have a good idea of problems in the tech and media space, most specifically consumer media or developer tools. I want to go beyond this. You can imagine this tool being useful for learning all sorts of topics. Use the LLM to build a general map, and then go more specific, organizing the information in a spatially relevant way.
Thanks for reading!
🙌 Follow what I’m up to by subscribing here and see my AI projects here. If you know anyone who would find this post interesting, I’d really appreciate it if you forwarded it to them! And if you’d like to jam more on any of this, you can reply here or on Twitter.
📚 Check out my AI Resources list. I made this list for myself to stay up to date on AI things and organize resources I find helpful.
🤭 I made a book. It’s called “Feeling Great About My Butt,” and is a book of illustrations and words that find ways to make space for feelings of whimsy, devastation, and growth.
📞 Book an unoffice hours conversation: We could talk about something you’re working on, jam on possibilities for collaboration, share past experiences and stories, draw together / make a zine, or meditate.