March 3rd, 2025

The art of the vibe

This is a write-up about the making of Vibe HN, a daily LLM-powered editorialization of Hacker News.

I’m currently working on rewiring my information consumption habits—moving away from a stream-based approach toward something that better protects my focus. I’ve already cleaned up my inbox, removed impersonal emails, and funneled the newsletters that actually interest me into an RSS reader that I check maybe once a day—on my own terms.

After all this, and having reacquainted myself with an RSS reader, I tentatively added some RSS feeds as well. But this quickly felt like a regression: I was reintroducing the very pattern I had sought to eliminate—an overwhelming stream of notifications.

This is, of course, the nature of RSS feeds as they’re conventionally designed (even for this site). The same goes for the Hacker News feed. But what I wanted was a daily version—something that distilled the top stories and their best discussions. Since I couldn’t find anything quite like that, I decided to build it myself.
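As a concrete starting point, the raw material for such a digest is easy to get: Hacker News exposes an official Firebase API with `topstories` and `item` endpoints. Here is a minimal sketch of fetching the day's top stories (the `n` cutoff and the injectable `fetch` parameter are my own choices for illustration, not necessarily how Vibe HN does it):

```python
import json
from urllib.request import urlopen

HN_API = "https://hacker-news.firebaseio.com/v0"

def fetch_json(path: str):
    """Fetch one endpoint of the official Hacker News Firebase API."""
    with urlopen(f"{HN_API}/{path}.json") as resp:
        return json.load(resp)

def top_stories(n: int = 10, fetch=fetch_json):
    """Return the top-n story items (id, title, score, comment ids, ...)."""
    ids = fetch("topstories")[:n]
    return [fetch(f"item/{story_id}") for story_id in ids]
```

Making `fetch` injectable keeps the function testable without network access; the editorial layer would then feed each story's title and top comments to the model.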

By the way: I have a hypothesis that emerging web x.0 media could benefit significantly from better packaging. The "information ergonomics" of traditional formats like magazines and newspapers are underappreciated and might have outsized potential in the age of dynamic sources and LLMs—in my opinion.

This side project started taking shape as a small bet in that direction. At the same time, like all good side projects, it aligned with other motivations:

  • Learning more about LLM APIs.
  • Experimenting with RSS feeds for personal use.
  • Testing Google Sheets as a lightweight database for prototypes.

I didn't anticipate uncovering any major insights about LLMs while building this automated news digest—but I found this one noteworthy:

When employing LLMs for creative writing, chaining completions across different tasks yields better results than any single-shot prompt.

Interestingly, this aligns with conventional advice on creative writing: Separate writing from editing. First, generate freely. Then, edit ruthlessly.
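The two-pass chain can be sketched in a few lines. Everything here is illustrative: `complete` is a stand-in for whatever LLM API call you use, and the prompt wording is invented for the example, not taken from Vibe HN:

```python
# Two-pass prompt chain: a free "generate" step followed by a
# ruthless "edit" step, each with its own narrow prompt.
# `complete` is a hypothetical callable wrapping an LLM API.

DRAFT_PROMPT = (
    "Write a short, lively summary of this Hacker News story "
    "and its best comments:\n\n{story}"
)
EDIT_PROMPT = (
    "You are a ruthless copy editor. Tighten this draft without "
    "adding any new claims:\n\n{draft}"
)

def chain(story: str, complete) -> str:
    """Run the draft prompt, then feed its output into the edit prompt."""
    draft = complete(DRAFT_PROMPT.format(story=story))
    return complete(EDIT_PROMPT.format(draft=draft))
```

Because each step has a single job, you can tune the drafting prompt and the editing prompt independently instead of squeezing the balloon of one monolithic prompt.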

I arrived at this realization by accident. Initially, I kept refining a single prompt, trying to force a better output. But I ran into the same issue I’ve often had with Midjourney: fixing one part only created new problems elsewhere, like pressing on a balloon, where squeezing one spot makes another bulge out.

It makes me wonder—what other well-established heuristics for optimizing human cognition might apply to working with LLMs?