How I Organize Email with HEY

18 Apr, 2025 by Graham Marlow

Six months after swapping back to HEY for email feels like the right time to check in on how it's going. Here are the ways I use HEY to organize my email: what works and what doesn't.

My workflow

I read every email that finds its way into my inbox. I hate unread emails, and I especially hate the # unread counter that most other email platforms surface within their tab titles. It unnerves me to an unhealthy degree.

That doesn't mean that I categorize every email into a special folder or label to get it out of my inbox. HEY doesn't even support this workflow; it lacks the notion of folders entirely. Instead, read emails that I don't immediately delete simply pile up in the inbox and are covered with an image.

HEY claims that their email client is "countless"[1], in that there are no numbers telling you how many emails are in your inbox or how far you're behind in your organizational duties. And for the most part, that's true, except for one glaring counter that tells you how many unscreened emails are awaiting your approval:

HEY Screener counter

Not exactly "countless," but at least the Screener only applies to emails from unrecognized senders.

Back on the topic of emails flowing into my inbox, most transactional emails find their way into the Paper Trail automatically. Receipts of this kind are bundled up and kept out of sight, out of mind.

Other emails that I want to draw temporary importance to reside in one of the two inbox drawers, Set Aside or Reply Later. I use Set Aside for shipping notifications, reservations, and other emails that are only relevant for a short period of time. Reply Later is self-evident. The system is very simple and works the way HEY intends.

My favorite HEY feature is easily The Feed, which aggregates newsletters into a single page. In a world where Substack has convinced every blogger that newsletters are the correct way to distribute their thoughts, The Feed is a great platform for aggregation. Shout-out to JavaScript Weekly and Ruby Weekly.

The Feed, Paper Trail, Set Aside, and Reply Later make up the bulk of my daily workflow in HEY. I'm very happy with these tools. While you could largely recreate them with filters, labels, and rules in other email systems, I find the experience in HEY to be an improvement thanks to its purpose-built client and UI.

A few other HEY tools fit into more niche use-cases.

Collections are essentially threads of threads. They're similar to labels, but have the added benefit of aggregating attachments to the top of the page. I tend to use them for travel plans because they provide easy access to boarding passes or receipts.

HEY Collections

On the topic of travel, Clips are amazing for Airbnb door codes, addresses, or other key information that often finds itself buried in email marketing fluff. Instead of keeping the email in the Set Aside drawer and digging into it every time you need to retrieve a bit of information, simply highlight the relevant text and save it to a Clip.

HEY for Domains, while severely limited in its lack of support for multiple custom domains, at least allows for email extensions. I use a +reimburse extension on my address to automatically tag incoming email with the "reimburse" label so I can later retrieve it for my company's reimbursement systems.

Important missing features

HEY is missing a couple of crucial features that I replace with free alternatives.

The first is support for multiple custom domains, a Fastmail feature that I dearly miss. I have a few side projects that live on separate domains, and I would prefer those projects to have email contacts matching each domain. If I wanted to achieve this with HEY, I'd have to pay an additional $12/mo per domain, which is prohibitively expensive[2].

Instead of creating multiple HEY accounts for multiple domains, I use email forwarding to point my other custom domains at my single HEY account. Forward Email is one such service; its free tier works by publishing your forwarding configuration in plain-text DNS records (you pay extra to keep them private). Another option I haven't investigated is Cloudflare Email Routing, which may be more convenient if Cloudflare doubles as your domain registrar.

It's a bummer that I can't configure email forwarding for custom domains within HEY itself, as I can with Fastmail.

The other big missing feature of HEY is masked email.

Fastmail partners with 1Password to offer randomly generated email addresses that point to a generic @fastmail domain instead of your personal domain. This is such a useful (and critical) feature for keeping a clean inbox, since many newsletter sign-ups and point-of-sale devices (looking at you, Toast) that collect your email have a tendency to spam you without consent. With masked email, you have the guarantee that if a masked address gets out in the wild, it can be trivially destroyed with no link back to your other addresses.

Luckily, DuckDuckGo has their own masked email service and it’s totally free: DuckDuckGo Email Protection. The trade-off is a one-time download of the DuckDuckGo browser extension that you can remove afterwards.

Both of these features make me wish that HEY was more invested in privacy and security. They have a couple of great features that already veer in that direction, like tracking-pixel elimination and the entire concept of the Screener, but they haven't added any new privacy features since the platform launched.

Problem areas

Generally speaking, the Screener is one of the killer features of HEY. Preventing unknown senders from dropping email directly into your inbox is really nice. It does come with a couple of trade-offs, however.

For one, joining a mailing list means constant triage of Screener requests: the personal address of every participant on that list must be screened in manually. HEY created the Speakeasy code as a pseudo-workaround, but it doesn't solve the mailing list issue because it requires a special code in the subject line of an email.

The second problem with the Screener is pollution of your contact list. When you screen an email into your inbox, you add that email address to your contacts. That means your contact list export (which you may create if you migrate email platforms) is cluttered with truckloads of no-reply email addresses, since many services use no-reply senders for OTP or transactional emails.

When I originally migrated off of HEY to Fastmail a few years ago (before coming back), I wrote a script that ran through my contacts archive and removed no-reply domains with regular expressions. I wish instead that allowed senders were simply stored somewhere separate from my email contacts.
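If you find yourself in the same spot, a minimal sketch of that kind of cleanup might look like the following. This is a hypothetical Node script, not my original one; the file names and regular expression are illustrative, and it assumes the contacts archive was exported as a CSV with one contact per line.

// cleanup-contacts.ts (hypothetical sketch, not the original script)
import { readFileSync, writeFileSync } from 'node:fs'

// Match common automated-sender patterns: no-reply@, noreply@, donotreply@.
const NO_REPLY = /\bno-?reply\b|\bdonotreply\b/i

const lines = readFileSync('contacts.csv', 'utf8').split('\n')

// Keep the header row plus any contact that doesn't look like an
// automated no-reply sender.
const kept = lines.filter((line, i) => i === 0 || !NO_REPLY.test(line))

writeFileSync('contacts-clean.csv', kept.join('\n'))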

The other pain point is around the HEY pricing structure. HEY is divided into two products: HEY for You, which provides an @hey.com email address, and HEY for Domains, which allows a single custom domain and some extra features. The problem is that these two products are mutually exclusive.

By using HEY for Domains, I do not have access to an @hey.com email address, a HEY World blog, or the ability to take personal notes on email threads. If I wanted these features in addition to a custom domain, I'd need to pay for both HEY products and manage two separate accounts in my inbox (I want to do neither).

The split in pricing is made even worse because the extra features offered by HEY for Domains all revolve around team accounts, e.g. multi-user companies. For a single HEY user, the HEY for You features are more appealing.

This creates an awkward pricing dynamic for a single-user HEY experience. The product that I actually want is HEY for You with a single custom domain, mapping both addresses to a single account. The @hey.com email address should be a freebie for HEY for Domains users, as it is with alternative email providers.

I still like it though

Since the last two sections have been dwelling a bit on the negatives, I'll end by saying that I still think HEY is a good product. Not every feature is going to resonate with every individual (there's a good amount of fluff), but the features that do resonate make HEY feel like personally-crafted software.


  1. HEY talks about their general philosophy here. ↩︎

  2. It's worth noting that the HEY for Domains pricing scheme is intended for multiple users. HEY for Domains used to be branded as "HEY for Work", if that's any indication of where the pricing awkwardness comes from. ↩︎

Visualizing Bracket City Puzzles

11 Apr, 2025 by Graham Marlow in puzzles, javascript

Lately I've been addicted to a new daily word puzzle game called Bracket City. It's unique among competitors because the game isn't about rearranging letters or hunting for hidden information, but rather solving hand-written, crossword-style clues.

I recommend giving the daily puzzle a shot before reading the rest of this article since it will help with visualizing the puzzle format. But as a quick rules summary:

  • A Bracket City solution is a short phrase
  • Certain words are substituted with clues, indicated via a pair of square brackets
  • Clues can nest other clues
  • You must solve the inner-most clues before you can solve the outer-most

Since Bracket City is basically a recursive crossword, the structure of a puzzle is easily mapped to a tree. And so, in classic programmer-brain fashion, I built a little app that turns a Bracket City puzzle into an interactive tree.

How it works

I had a couple of realizations while working on this little project.

The first was recognizing how brilliant the Bracket City puzzle structure is. Not only does it spin the age-old crossword in a compelling way that feels fresh, but the actual mechanics for constructing a Bracket City puzzle are super simple. It's a win in all categories, excellence in design.[1]

The second realization was how easy it is to parse Bracket City puzzles into trees and render them via Svelte components. I haven't done much work with Svelte, but the ability to recursively render a component by simply self-referencing that component is incredibly expressive.

If you're unfamiliar with Svelte, don't worry! There's really not that much special Svelte stuff going on in my solution. Most of it is plain old JavaScript.

First things first: a class for nodes in our tree:

class Node {
  constructor(text = '', children = []) {
    this.text = text
    this.children = children
  }
}

Next, the parsing algorithm.

The basic strategy has a function read through the input string one character at a time. When a "[" is encountered, a new node is created. A couple of variables track our position in the resulting tree:

  • currentNode points to the most recent node
  • stack holds a list of nodes in order

With currentNode, we can easily append new child nodes to our position in the tree. With stack, we can exit the currentNode and navigate upwards in the tree to the node's parent.

Here's the algorithm in full:

const parsePuzzle = (raw) => {
  // Initial output takes the form of a single node.
  const root = new Node()
  let currentNode = root
  let stack = [root]

  for (let i = 0; i < raw.length; i++) {
    const char = raw[i]

    if (char === '[') {
      // Substitutions are marked with ??.
      currentNode.text += '??'
      const node = new Node()
      currentNode.children.push(node)
      stack.push(node)
      // Update our currentNode context so that future nodes
      // are appended to the most recent one.
      currentNode = node
    } else if (char === ']') {
      if (stack.length > 1) {
        // Closing bracket encountered, so we can bump the
        // currentNode context up the tree by a single node.
        stack.pop()
        currentNode = stack[stack.length - 1]
      }
    } else {
      currentNode.text += char
    }
  }

  // If we have any elements left over, there's a missing closing
  // bracket in the input.
  if (stack.length > 1) {
    return [false, root]
  }

  return [true, root]
}

The function returns a pair: a boolean indicating whether parsing succeeded, followed by the resulting tree. It's a simple form of error handling.

In Svelte, we can tie this algorithm together with an HTML textarea in a component like so:

<script>
  import parsePuzzle from '$lib/parsePuzzle.js'

  let puzzle = $state('')
  let [_, tree] = $derived(parsePuzzle(puzzle))
  $inspect(tree)
</script>

<textarea bind:value="{puzzle}"></textarea>

And using the tutorial puzzle as an example,

# raw input:
[where [opposite of clean] dishes pile up] or [exercise in a [game played with a cue ball]]

# tree:
Node(
  "?? or ??",
  [
    Node(
      "where ?? dishes pile up",
      [
        Node("opposite of clean", [])
      ]
    ),
    Node(
      "exercise in a ??",
      [
        Node("game played with a cue ball", [])
      ]
    )
  ]
)

As the textarea is updated, $inspect logs the resulting tree. We haven't yet rendered the tree in the actual UI. Let's change that.

First, update the original component to include a new component named Tree:

<script>
  import parsePuzzle from '$lib/parsePuzzle.js'
  import Tree from '$lib/components/Tree.svelte'

  let puzzle = $state('')
  let [success, tree] = $derived(parsePuzzle(puzzle))
</script>

<textarea bind:value="{puzzle}"></textarea>

{#if success}
<Tree nodes="{[tree]}" />
{:else}
<p>Error: brackets are unbalanced</p>
{/if}

Creating a new component to handle rendering the puzzle tree isn't just about tidying up the code; it enables a bit of fancy self-referential Svelte behavior. Intro CS courses have taught us that tree structures map nicely onto recursive algorithms, and it's no different when we think about UI components in Svelte. Svelte allows components to import themselves as a form of recursive rendering.

Here's the Tree component in full:

<script>
  import Self from './Tree.svelte'

  const { nodes } = $props()
</script>

{#each nodes as node}
<div>
  <div>{node.text}</div>

  <div class="ml-4">
    {#if node.children.length > 0}
    <Self nodes="{node.children}" />
    {/if}
  </div>
</div>
{/each}

How about that? A Svelte component can render itself by simply importing itself as a regular old Svelte file. In the template content of the component, we simply map over our list of nodes and render their text content. If a given node has children, we use a Self reference to repeat the same process from the viewpoint of the children.

ml-4 applies a left margin to each group of child nodes, enabling stair-like nesting throughout the tree. We never need to increase the margin at deeper levels because the box model handles the hard work for us: each margin is relative to its container, which itself uses the same indentation.

That about wraps it up! I added a couple extra features to the final version, namely the ability to show/hide individual nodes in the tree. I'll leave that as an exercise for the reader.


  1. Well, there is one thing that is maybe questionable about the design of Bracket City. The layout of the puzzle makes you really want to solve the outer-most clue before the inner-most, if you know the answer. However, the puzzle forces you to solve the inner-most clues first. This is a surprisingly controversial design choice! ↩︎

Onboarding a new Mac

05 Apr, 2025 by Graham Marlow in til

My process for onboarding a new Mac:

  1. Remove all of the apps from the default dock. Move the dock to the righthand side and set to minimize automatically.
  2. Rebind Caps Lock as Control via Settings->Keyboard->Modifier Keys.
  3. Install the usual software.
  4. Install git by opening Alacritty, attempting to call git, and accepting the xcode-select tool installation.
  5. Install must-have brew formulae:
    • brew install helix tmux ripgrep npm rbenv
  6. Configure a GitHub SSH key
  7. Bring over dotfiles for Alacritty, Helix, tmux, git, etc. I don't have a good workflow for this yet but I'm investigating GNU Stow.

I probably forgot a thing or two, but this list accounts for some 90% of the tools I use in the day-to-day.

Ruby and RSS feeds

30 Mar, 2025 by Graham Marlow in til

I've been digging into Ruby's stdlib RSS parser for a side project and am very impressed by the overall experience. Here's how easy it is to get started:

require "open-uri"
require "rss"

feed = URI.open("https://jvns.ca/atom.xml") do |raw|
  RSS::Parser.parse(raw)
end

That said, doing something interesting with the resulting feed is not quite so simple.

For one, you can't just support RSS. Atom is a more recent standard used by many blogs (though, as far as I can tell, it's irrelevant in the world of podcasts, which run on RSS). There's about a 50% split between RSS and Atom in the tiny list of feeds I follow, so a feed reader must handle both formats.

Adding Atom support introduces an extra branch to our snippet:

URI.open("https://jvns.ca/atom.xml") do |raw|
  feed = RSS::Parser.parse(raw)

  title = case feed
  when RSS::Rss
    feed.channel.title
  when RSS::Atom::Feed
    feed.title.content
  end
end

The need to handle both standards independently is kind of frustrating.

That said, it does make sense from a library perspective. The RSS gem is principally concerned with parsing XML per the RSS and Atom standards, returning objects that correspond one-to-one. Any conveniences for general feed reading are left to the application.

Wrapping the RSS gem in another class helps encapsulate differences in standards:

class FeedReader
  attr_reader :title

  def initialize(url)
    @url = url
  end

  def fetch
    feed = URI.open(@url) { |r| RSS::Parser.parse(r) }

    case feed
    when RSS::Rss
      @title = feed.channel.title
    when RSS::Atom::Feed
      @title = feed.title.content
    end
  end
end

Worse than dealing with competing standards is the fact that not everyone publishes the content of an article as part of their feed. Many bloggers only use RSS as a link aggregator that points subscribers to their webpage, omitting the content entirely:

<rss version="2.0">
  <channel>
    <title>Redacted Blog</title>
    <link>https://www.redacted.io</link>
    <description>This is my blog</description>
    <item>
      <title>Article title goes here</title>
      <link>https://www.redacted.io/this-is-my-blog</link>
      <pubDate>Thu, 25 Jul 2024 00:00:00 GMT</pubDate>
      <!-- No content! -->
    </item>
  </channel>
</rss>

How do RSS readers handle this situation? The solution varies based on the app.

The two I've tested, NetNewsWire and Readwise Reader, manage to include the entire article content in the app, despite the RSS feed omitting it (assuming no paywalls). My guess is these services make an HTTP request to the source, scraping the resulting HTML for the article content and ignoring everything else.

Firefox users are likely familiar with a feature called Reader View that transforms a webpage into its bare-minimum content. All of the layout elements are removed in favor of highlighting the text of the page. The JS library that Firefox uses is open source on their GitHub: mozilla/readability.

On the Ruby side of things there's a handy port called ruby-readability that we can use to extract omitted article content directly from the associated website:

require "ruby-readability"

URI.open("https://jvns.ca/atom.xml") do |raw|
  feed = RSS::Parser.parse(raw)

  url = case feed
  when RSS::Rss
    feed.items.first.link
  when RSS::Atom::Feed
    feed.entries.first.link.href
  end

  # Raw HTML content
  source = URI.parse(url).read
  # Just the article HTML content
  article_content = Readability::Document.new(source).content
end

So far the results are good, but I haven't tested it on many blogs.

Reminiscing on Flow

01 Mar, 2025 by Graham Marlow in javascript

(The type-checker, not the state of deep work)

React's recent sunsetting of Create React App has me feeling nostalgic.

My first experience with a production web application was a React ecommerce site built with Create React App. I came into the team with zero React experience, hot off of some Angular 2 work and eager to dive into a less-opinionated framework. The year was 2018 and the team (on the frontend, just two of us) was handed the keys to a brand new project that we could scaffold using whatever tools we thought best fit the job.

We knew we wanted to build something with React, but debated two alternative starting templates:

  1. Create React App (then, newly released) with Flow

  2. One of the many community-maintained templates with TypeScript

You might be surprised that Create React App didn't originally come bundled with TypeScript[1], but the ecosystem was in a very different place back in 2018. Instead, the default type-checker for React applications was Flow, Facebook's own type-checking framework.

After a couple prototypes, we chose Flow. It felt like a safer bet, since it was built by the same company as the JavaScript framework that powered our app. Flow also handled some React-isms more gracefully than TypeScript, particularly higher-order components where integrations with third-party libraries (e.g. React Router, Redux) led to very complicated scenarios with generics.

Of all of our stack choices at the start of this project in 2018, choosing Flow is the one that aged the worst. Today, TypeScript is so ubiquitous that removing it from your open source project incites a community outrage[2]. Why is TypeScript widely accepted as the de facto way to write JavaScript apps, whereas Flow never took off?

npmtrends: Flow vs. TypeScript

I chalk it up to a few different reasons:

  • TypeScript being a superset of JavaScript allowed early adopters to take advantage of JavaScript class features (and advanced proposals, like decorators). In a pre-hooks era, both Angular and React required class syntax for components and the community seemed to widely support using TypeScript as a language superset as opposed to just a type-checker.

  • Full adoption by Angular 2 led to lots of community-driven support for TypeScript types accompanying major libraries via DefinitelyTyped. Meanwhile nobody really used Flow outside of React.

  • Flow alienated users by shipping broad, wide-sweeping breaking changes on a regular cadence. Maintaining a Flow application felt like being subject to Facebook's whims. Whatever large refactor project was going on at Facebook at the time felt like it directly impacted your app.

  • VSCode has become the standard text editor for new developers and it ships with built-in support for TypeScript.

TypeScript as a language superset

Philosophically, in 2018 the goals of Flow and TypeScript were quite different. TypeScript wasn't afraid to impose a runtime cost on your application to achieve certain features, like enums and decorators. These features required that your build pipeline either use the TypeScript compiler (which was, and is, incredibly slow) or cobble together a heaping handful of Babel plugins.

On the other hand, Flow promised to be just JavaScript with types, never making its way into your actual production JavaScript bundle. Since Flow wasn't a superset of JavaScript, it was simple to set up with existing build pipelines. Just strip the types from the code and you're good to go.

Back when JavaScript frameworks were class-based (riding on the hype from ES2015), I think developers were more receptive towards bundling in additional language features as part of the normal build pipeline. It was not uncommon to have a handful of polyfills and experimental language features in every large JavaScript project. TypeScript embraced this methodology, simplifying the bundling process by offering support in the TypeScript compiler proper.

Nowadays the two tools have swapped stances. The adoption of alternative bundlers that cannot use the TypeScript compiler (esbuild, SWC, and so on) means that JavaScript developers are much less likely to reach for TypeScript-specific features, and people seem generally less receptive to those features (e.g. enums) when they're easily replaced by a zero-cost alternative (union types). Meanwhile, recent Flow releases added support for enums and React-specific component syntax[3]. What a reversal!
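To make that trade-off concrete, here's a rough TypeScript sketch (the names are illustrative, not from any particular codebase): the enum survives compilation as a real JavaScript object, while the union of string literals erases completely at build time.

// A TypeScript enum is a language feature with a runtime footprint:
// it compiles to a JavaScript object that ships in your bundle.
enum Theme {
  Light = 'light',
  Dark = 'dark',
}

const current = Theme.Dark // reads from the generated object at runtime

// The zero-cost alternative: a union of string literals. The type is
// erased at build time, leaving only plain strings behind.
type ThemeName = 'light' | 'dark'

function applyTheme(theme: ThemeName): void {
  document.body.dataset.theme = theme
}

applyTheme('dark') // checked at compile time, free at runtime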

Community library support

As TypeScript gathered mindshare among JavaScript developers, DefinitelyTyped crushed FlowTyped in terms of open source contribution. By the tail end of 2021, our small team had to maintain quite a few of our own forks of FlowTyped files for many common React libraries (including React Router and Redux)[4]. Flow definitely felt like an afterthought for open source library developers.

While TypeScript type definitions standardized on npm under the @types namespace, FlowTyped still required a separate CLI. It's not easy to compete when the alternative makes installing types as easy as npm install @types/my-package.

Breaking things

I remember distinctly that upgrading Flow to new releases was such a drag. Not only that, but it was a regular occurrence. New Flow releases brought wide-sweeping changes, often with new syntax and many deprecations. This problem was so well-known in the community that Flow actually released a blog post on the subject in 2019: Upgrading Flow Codebases.

For the most part, I don't mind if improvements to Flow mean new violations in my existing codebase that point to legitimate issues. What I do mind is that many of these problematic Flow releases felt more like Flow rearchitecting itself around fundamental issues, with the fallout propagating down to users as new syntax requirements. The cost to upgrade rarely felt like it matched the benefit to my codebase.

A couple examples that I still remember nearly 6 years later:

LSP, tooling, and the rise of VSCode

In the early days, the Flow language server was on par with TypeScript's. Both tools were newly emerging and often ran into issues that required restarting the language server to re-index your codebase.

VSCode was not as ubiquitous in those days as it is today, though it was definitely an emerging star. Facebook was actually working on its own IDE at the time, built on top of Atom. Nuclide promised deep integration with Flow and React, and gathered a ton of excitement from our team. Too bad it was retired in December of 2018.

As time went on and adoption of VSCode skyrocketed, Flow support lagged behind. The TypeScript language server made huge improvements in consistency and stability and came pre-installed with every VSCode installation. Meanwhile, the Flow language server crashed whenever dependencies changed, and installing the Flow extension meant digging into your built-in VSCode settings to disable the JavaScript/TypeScript language support.

Towards TypeScript

As our Flow application grew from 3-month unicorn to 3-year grizzled veteran, Flow really started to wear developers on our team down. It was a constant onboarding pain as developers struggled to set up VSCode and cope with some of the Flow language server idiosyncrasies. Refactoring to TypeScript was an inevitable conversation repeated with every new hire.

The point of this blog post is not to bag on Flow. I still have a ton of respect for the project and its original goal of simplicity: "JavaScript with types". Although that goal lives on via JSDoc, Flow is an important milestone to remember as type annotations are formally discussed by TC39.

Before leaving the company, I remember tasking out a large project detailing the entire process of converting our Flow codebase to TypeScript. I wonder if it was ever finished.


  1. TypeScript support was added in 2019 with the v2 release. ↩︎

  2. For another example, see Svelte's move from TypeScript to JSDoc. ↩︎

  3. The move away from "JavaScript with types" is documented in this blog post: Clarity on Flow’s Direction and Open Source Engagement. ↩︎

  4. If you've never looked at one of the type files for some of your favorite libraries, they can be rather cryptic. ↩︎