Rat Traps

Sheon Han

Does the rationalist blogosphere need to update?

There’s a life cycle to a genre. A classic version follows a predictable path: invention by a few early practitioners, followed by maturity, death, and, if lucky, rebirth. If the literary scholar Franco Moretti is to be believed — and he had the pretty graphs to show it 1 — this happens every 25 to 30 years. Alternatively, French film theorist Christian Metz offered four stages: experimental, classic, parody, and deconstruction. (Consider what Pulp Fiction did to film noir.)

I’d say there’s a more inglorious cycle: birth, promise, imitation, and decline. Especially vulnerable are genres that allow easy entry and whose appeal partially hinges on cleverness. Soon after their inception, eager fans submit works that aren’t so much mimesis as mimicry, unwitting self-parodies that rely on the genre’s tics and tropes. With each repetition, the initial charm erodes, and a sort of genre fatigue sets in. Next, critics pounce. Autofiction, cli-fi, and the personal essay, for example, have recently been scrutinized with prosecutorial glee. But to be fair, the relationship isn’t purely adversarial. Whether it’s a literary trend or a concrete slurry, occasional stirring is necessary for it to be any good.

In the mid to late 2000s, a new genre of blogs, which I’ll call rationalist writing, emerged on the periphery of the internet media landscape. Despite its outsized influence in Silicon Valley, the genre has escaped its long-overdue assessment. What cycle — and which stage — is it in?

The origin of rationalist writing is commonly traced back to the comments section of Overcoming Bias, a group blog about cognitive biases and related topics that is now the personal blog of economist Robin Hanson. Eliezer Yudkowsky, a prominent contributor, spun off an online forum called LessWrong in 2009, dedicated to the practice of “applied rationality.” Within a few years, its top writers — for example, Scott Alexander, Katja Grace, Luke Muehlhauser — had launched their own blogs, forming what became the rationalist blogosphere.

The internet of the early aughts was a warm petri dish for blogging. Compared to their contemporaries, early rationalist writings were like Crooked Timber but more left-brained, Marginal Revolution but more subcultural, and 3 Quarks Daily but weirder. Topically, there was a resemblance to Aaron Swartz’s blog archive Raw Thought — select a random article and you might find anything from a diary entry to a policy memo, a technical specification, or a manifesto.

On LessWrong, their de facto common room, rationalists theorized about the philosophy of language or wrote mini-treatises on decision theory. They were fond of tidy taxonomies (“16 Types of Useful Predictions”) and folk ethnographies (“Intellectual Hipsters and Meta-Contrarianism”); epigrammatic laws (“Goodhart’s Law”) and twists on existing concepts (“Pascal’s Mugging”). They competed to write more grokkable explainers for mathematical theorems (e.g., cartoon guides to Löb’s Theorem), and unresolved issues were taken up again in LaTeX-enabled comment sections. They pored over arXiv preprints with a Talmudist’s devotion. (The same obsession would later pay off during the onset of COVID-19, as the community grasped the gravity of what was to come while the rest of the world downplayed its significance.)

Form-wise, it was a distinctly twenty-first-century hybrid. Long walls of text were punctuated by hand-drawn memes and screenshots of matplotlib charts from Jupyter notebooks. Featuring names that could double as post-rock bands or indietronica acts (e.g., “Melting Asphalt,” “Minding Our Way”), most such blogs had no institutional affiliation. Yet many authors published with relentless frequency, as if tenure depended on it. And when it worked — which it did surprisingly often — it made for a galvanizing read, even when you questioned individual points.

When discussing political issues, however, it was a mixed bag. Some rationalists applied a level of forensic detail that rivaled even the wonkiest of political blogs, as if Ezra Klein were also passionate about nootropics or cryonics. Others were prone to grand, dorm-room theorizing — the reimagining-society-from-scratch type — common among STEM folks making their first foray into anything having to do with politics.

Another subgenre was quantified self-help, such as a 30,000-plus-word “overview” on spaced repetition — a technique to enhance long-term retention of information. Also popular were personal development and productivity advice written as a service to the community — “I did it so you don’t have to” — but you could see a glimmer of pride shining through their professed frustration. Nothing out of the ordinary there, however. Blogging, no matter how technical the subject area, is never a purely expository form but an inherently performative one.

One might be tempted to dismiss them as a coalition of para-academics who were simply “columbusing” — rediscovering ideas that were old news in academia. Nevertheless, the best of them were highly serviceable retailers of knowledge. The book reviews collectively generated by the community could be more substantive, if verbose, than those in mainstream outlets. (And if more people debated Lakatos and Feyerabend, even if they missed some nuances, it still seemed preferable to the internet’s usual fare of hot takes on the latest trends.)

Then, as now, they seemed very, very concerned about AI.

To this day, for those of us who move within liberal or progressive circles, openly admitting to reading the rationalist genre can feel like a social misstep. For one, the rationalist community has the reputation of leaning libertarian. But according to the 2024 ACX survey — the closest thing to a census of the community — only about 20% identify as libertarian, while 35% identify as “liberal” and 30% as “social democratic.”

But what still makes it harder to openly endorse the community as a whole is likely that there are self-described members who seriously engage with insidious ideas, such as giving Nick Land, the so-called godfather of accelerationism, more serious treatment than he deserves. This is, of course, a risk inherent to open membership: the bottom tranche of the fan base can end up representing the entire community. This has led some critics to treat rationalist blogs as a trapdoor leading to odious movements like NRx and, more recently, e/acc. (If you understand what they are, I’m sorry. If not, just know that they both achieved the remarkable distinction of being not just ideologically tacky but orthographically so.)

From when I first discovered the rationalist blogosphere in 2015 until it began to grow tedious (more on that later), it was a proportionally small but stimulating portion of my media diet. I consumed it like a psychoactive pill, one that made me slightly insane but alert to other kinds of insanities. I found it to be frequently insightful, sometimes obvious, often annoying and rather pompous, regularly helpful, and — for a group lampooned for its emotional aridity — unexpectedly vulnerable.

How so? Rationalists seemed like introverts but not misanthropes. In fact, they were openly interested in studying fellow humans at a sociological remove. Like animatronic Erving Goffmans, they sought a mechanistic understanding of social interactions. Popular posts dissected the dynamics of social status (e.g., “The red paperclip theory of status”). Unlikely works like Impro, a book on improvisational theater, served as reference texts for cultivating winsome personas. 

Outsiders often lump all denizens of Silicon Valley together, but the taxonomy is more nuanced. What I appreciated about rationalists — the most discerning ones, at least — was that they seemed to avoid the pitfalls of other garden-variety tech bros: falling for the likes of the halfwit hucksters of Silicon Valley (the Winklevoss twins), the Pied Piper of misguided youths (Jordan Peterson), brokers of noxious doctrines (Mencius Moldbug), traffickers of pseudo-profundities (Naval Ravikant), and the dipshit scumbags of the All-In Podcast. (Though some seemed to be working through their complicated feelings about Peter Thiel.)

Discovering LessWrong felt like stepping into the back alley of the internet. I don’t recall how I stumbled upon it, but I remember thinking that the name sounded cheeky. Not long after my headlong entry into it, I was led down the rabbit hole of (what would come to be called) “The Sequences,” foundational texts of the community, written by a late-twenties Eliezer Yudkowsky. 2 Although The Sequences employed the language of science, there was something occultic about it.

Thematically, it was a chimeric collection: “The Metaethics Sequence,” “The Quantum Physics Sequence,” and “Highly Advanced Epistemology 101 for Beginners.” The later parts, such as “Yudkowsky’s Coming of Age,” drew an achingly personal, if self-mythologizing, portrait of the artist as a young rationalist. Reading what can be described as erudite juvenilia, I could see how his outré opinions about AI or his enthusiastic injection of phrases straight out of anime — one title read, “Tsuyoku Naritai! (I Want To Become Stronger)” — could make him liable to caricature. But Yudkowsky, a high school dropout, seemed to have achieved what few people can with an autodidactic education.

I started with the section titled “How to Actually Change Your Mind,” which had a strange Rilkean ring to it, perhaps drawing the polymathically curious members of the anglophone internet the way the call to change your life must have tugged at the heartstrings of a good many sad boys of fin-de-siècle Vienna.

What drew me into The Sequences? I’m that cliched archetype of a person for whom reading Douglas Hofstadter’s Gödel, Escher, Bach in high school was an indelible conversion experience. This is to say, anything merging humanities and science, even a higgledy-piggledy version of it, was pure catnip. While it would be wrong to call The Sequences “artful,” it was upheld by a GEB-esque aesthetic that I couldn’t get from reading, say, Granta or Bookforum.

I was never compelled to attend rationalist meetups in person, nor was I part of the community on LessWrong, but after reading The Sequences I started following a few rationalist and rationalist-adjacent blogs. Generally, there was an adherence to the positivist ideal and to utilitarianism, but even when I disagreed with individual views — to their credit, rationalists are masochists who love self-criticism — there was a shared understanding that ideology isn’t a linear spectrum but a vector space where one could carve out a subspace of beliefs instead of purchasing a wholesale package of views. In the 2015 American political landscape, this was not a widely available option.

Curiously, as the rationalist blogosphere seemed to expand over the years, it continued to obey a kind of twin Earth metaphysics, as if the internet contained a phantom dimension invisible to the mainstream media. My otherwise well-read and very online friends seemed unaware of it. (I assume the readership of Ribbonfarm has a near-perfect negative correlation with the viewership of HBO’s Girls.) Even as rationalists were becoming an influential force in Silicon Valley, they received scant media coverage. 3 But if you were paying attention, you could almost see, frame by frame, the developments that led to the current moment in AI (e.g., the group of people who coalesced into Anthropic). If you ever notice someone using rationalist watchwords like “Bayesian” and thanking their friends for “reading a draft of this essay” in a blog post, 4 there is a good chance the author was part of this world.

Over time, however, my interest in rationalist content waned. It was becoming a sclerotic genre that had coagulated into a clump of cliches. Surely there’s money to be made from selling a bingo card based on the rationalist glossary, much of which has leaked into Silicon Valley at large: “updating,” “inferential distance,” “nutpicking,” “ugh field,” “over-indexed,” “orthogonal,” “legibility,” and, of course, “Bayesian.” The problem wasn’t necessarily with the concepts themselves or their popularizers, but with the members who parroted them. Much like “late capitalism” or “lived experience,” these are once-serviceable terms now drained of meaning. Use it once, it’s a technique; twice, a pattern; thrice, a gimmick; and henceforth, a cliche. (Every time I come across a non-ACX blog post declaring its “epistemic status,” I reach for an EpiPen.)

While writing this piece, I revisited LessWrong for the first time in many years and found it now emblazoned with AI-generated images, all in a kitschy, techno-futuristic style. What on Moloch’s Earth happened here?

Reading The Sequences now hits differently. I felt the way a character from Hanif Kureishi’s novel feels about Jack Kerouac: “The cruelest thing you can do to Kerouac is reread him at thirty-eight.” Yudkowsky, it appeared, was still blogging on the forum. I clicked his latest post, crossposted to Twitter, titled “Universal Basic Income and Poverty.” It begins:

I’m skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity’s 100-fold productivity increase (since the days of agriculture) didn’t eliminate poverty. 

Some of my friends reply, “What do you mean, poverty is still around? ‘Poor’ people today, in Western countries, have a lot to legitimately be miserable about, don’t get me wrong; but…”

Apparently, Yudkowsky has friends who pose questions in a way that nicely serves his rhetorical needs, in an unbroken paragraph of 181 words, no less. To this misguided chorus, Yudkowsky gears up to unleash his insights: “And this is a sensible question, but let me try out a new answer to it.” 

Consider the imaginary society of Anoxistan, in which every citizen who can’t afford better lives in a government-provided 1,000 square-meter apartment; which the government can afford to provide as a fallback, because building skyscrapers is legal in Anoxistan. Anoxistan has free high-quality food…

What follows is less a serious answer engaging with the literature on UBI and more a didactic one-man show. Yudkowsky presents his ideas with scant empirical backing or practical insight while occasionally condescending to those he calls “my (quite sensible and reasonable) friends.” The second-latest post, “‘Empiricism!’ as Anti-Epistemology,” which runs approximately 7,700 words (twice the length of this piece), can be summed up as a contrived Socratic dialogue about the tension between empirical observation and theoretical reasoning. It was only after I went through the whole damn thing that I discovered the top comment, which pointed to the paragraph that “summarize[s] most of this post.” After quoting that paragraph, the commenter concludes, “I’m not sure the post says sufficiently many other things to justify its length.”

Yudkowsky likes to reason out loud — step by step and in a systematic way. At its best, this style of writing can be deeply satisfying: the ideas proceed neatly, one after the other, like dominoes falling into place. It’s a talent — much like a deft science journalist’s — and the fact that random users in the late aughts were drawn to read about such abstruse topics instead of the web’s usual trivialities is a testament to it. The final product can be unexpected, interesting, or at least interestingly wrong — or it can lead him to reinvent the wheel. His autodidacticism — and his rigid commitment to this style — is both a strength and a weakness: the same quality that makes his work distinctive also leads to a certain condescension and insularity.

The amateurism of The Sequences could be forgiven, considering it was written by a twentysomething Yudkowsky. But it is disappointing to see that he never grew out of this pontificating style. This might still work for those who find him at an impressionable age, but Yudkowsky's reliance on these tactics has been going on for nearly two decades.

If the maneuvers that he pulls seem oddly familiar to you, it’s because many have been adopted by the bulk of today’s rationalist bloggers and posters: they adopt a sober tone that seems to offer a clear-eyed, no-nonsense analysis; they are dutifully skeptical, laboriously balanced, and crowded with noisy numbers. There’s a barrage of new coinages — e.g., “Poverty Equilibrium” and “Bernie Bankman” from Yudkowsky’s recent pieces — hoping to catch on.

You might think that I’m quibbling about what’s merely a matter of style. But style is not about stitching together pretty sentences. Often the prime effect of style — rationalists must understand — is not aesthetic but epistemic.

I don’t need to tell rationalists that changing someone’s mind has little to do with the persuasiveness of arguments. To explain in rationalese, I suspect that our minds are in a state of epistemic inertia when core beliefs are challenged. In other words, no matter how much you bludgeon your brain with new evidence, beliefs cannot be updated without overcoming that inertia first. Some ways to achieve this are through narrative or through authority — consider how the advice you dismissed when it came from your roommate suddenly seems sensible when it comes from your favorite author.

Style is another way to break inertia. Consider the difference between “Fame can change you” and, to use a classic line, “Celebrity is a mask that eats into the face.” The first won’t even register on the ears, but the second may help you see such a banal — but true — statement anew. Truths, upon repetition in the same formulation, start sounding like truisms. Much like how neurons, after repeated stimuli, need a higher threshold to fire, a truism blends into the noise. Stylistic and formal experimentation is like introducing randomness and entropy to disrupt the epistemic logjam. 

Part of Slate Star Codex’s initial success, I think, lies in its style. Even the “epistemic status” signposting, until it was pillaged by fans, was a novel technique. The same goes for Scott Aaronson’s use of parables to explain theoretical computer science, and for the rhapsodic excursuses in Alexey Guzey’s bombshell exposé of Why We Sleep. Another successful formalistic gambit is the rationalistrix Aella’s Substack post “My Birthday Gang Bang,” an astounding piece of service-cum-stunt journalism, with operational logistics that would put the US Secret Service to shame.

Being vigilant about language, so that arguments don’t sound stale, matters because taking in new information is often less important than re-registering truths. Scott Alexander’s popular essay, “I Can Tolerate Anything Except the Outgroup,” is, in essence, a stylized exposition of recognizable political insights that we have become dull to. To this point, I’m reminded of the final page of Lucy Grealy’s Autobiography of a Face. “I used to think truth was eternal, that once I knew, once I saw, it would be with me forever, a constant by which everything else could be measured,” she writes. “I know now that this isn’t so, that most truths are inherently unretainable, that we have to work all our lives to remember the most basic things.”

If rationalists are ambitious enough to want to persuade those outside their community, there’s even more reason to vary their style and abstain from their usual phraseology because — I am just stating the obvious here — people are resistant to being convinced by those who don’t speak the same dialect. However, rationalist writings — like many works of postmodern scholarship — are often written in a register that has already internalized its audience.

Another issue is that today’s rationalist writings suffer from the monoculture of intellectual sources, drawn from what I personally call Silicon Valley’s shadow canon: Seeing Like a State; The Power Broker; The Revolt of the Public; Exit, Voice, and Loyalty; Reasons and Persons; Impro; and more. (If curious about what this canon contains, just triangulate the “What I’ve Read” lists by the likes of Venkatesh Rao, Patrick Collison, and Gwern Branwen.) 5

These are otherwise fine books and works of first-rate scholarship, but they have been subjected to second-rate readings. Rationalists have a tendency to lazily co-opt these ideas, with the prime offender being James C. Scott’s concept of “legibility,” which may as well be found in Notion docs detailing go-to-market strategies for AI hardware products. Encountering René Girard’s mimetic theory through Peter Thiel’s filter is another example. Aside from the shadow canon, most rationalists’ inputs are limited to a predictable constellation of blogs and podcasts. 6 This creates an ouroboric citation circle in which rationalist blogs cite one another, supplemented by a rather pious exegesis of The Sequences and the hermeneutics of SSC/ACX.

Rationalists also need to realize that a kind of community capture is underway: effort is being wasted on conceptually interesting — but often silly — causes that somehow became serious-coded. There’s a post titled “You have a set amount of ‘weirdness points.’ Spend them wisely.” But, to tell the truth, I don’t think rationalists were that weird to begin with; they have become weirder because the community rewards both true weirdness and gestural weirdness. Every community has its own set of conceits and reward mechanisms. As a casual bird-watcher, I see this in many birders who are, not unlike rationalists, consumed by performative monomania. In any subculture, showing off how much attention you can hold for pet topics, the ones vital to the community but trivial to others, becomes an in-group endurance sport. It’s a treacherous exercise that, over time, leaves you oblivious to how interested you truly are, deep, deep down.

The community is also facing an inevitable problem of decentralized knowledge production: redundancy, a complete lack of prioritization, and inconsistent quality control. There’s no need for the umpteenth article explaining Bayes’ theorem, and sophomoric pabulum extolling the virtues of “free thinking” needs to be shown the door. Rationalist writing is at its best when the community directs its firepower toward underexplored topics — such as an introduction to container logistics or a refutation of unfounded explanations for the obesity epidemic — that could benefit from its suite of methodological techniques and niche expertise. Even many of the top bloggers could use some editorial help and a measure of triage in choosing topics to write about. In that sense — I was not paid to say this — this magazine is a welcome antidote. 7

The group that once prided itself on avoiding a hive mind is now steeped in its own kind of piety. The rationalist community is often accused of being cultish, but what it may be becoming is something more anodyne and decidedly less edgy than a cult: a fandom. What plagues the community — paging Harold Bloom — isn’t so much the anxiety of influence as the ecstasy of influence, whereby Scott Alexander becomes a father figure whose approval everyone craves. But any movement calcifies when the healthy urge for patricide is stifled. 

Seeing the state of rationalist writings in 2024, I’m reminded of how philosopher Nikhil Krishnan described the late 18th-century German Romantics: “rebels find themselves thrown into the arms of another orthodoxy,” writes Krishnan. “How different a goth looks from everyone else, and yet how similar to every other goth.” Can the rationalist community get out of this rut? Perhaps they can change. If they are true Bayesians, as they claim to be, now is the time to prove it. 

  1. Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London: Verso, 2007.
  2. While “The Sequences” refers to Yudkowsky’s, a “sequence” is LessWrong’s formal name for a series of posts on one topic, and there are many across the site.
  3. When it finally happened, there was a high-profile snafu that led to Slate Star Codex becoming Astral Codex Ten; for further details, I’ll defer to Gideon Lewis-Kraus’s piece in The New Yorker.
  4. But this practice likely originated with Paul Graham.
  5. A notable omission from the shadow canon is works of fiction that aren’t science fiction, but why that is so may be a topic for another time.
  6. Not to say there aren't exceptions, such as the proto-rationalist Robin Hanson, who has a more eclectic reading habit. And his collaboration with philosopher Agnes Callard on the podcast "Minds Almost Meeting" is a welcome attempt to bridge the Two Cultures.
  7. Well, I have been paid, but not to say this.

Sheon Han is a writer and programmer based in Palo Alto, CA. His writing has appeared in The New Yorker, WIRED, The Atlantic, The Point, Quanta Magazine, and elsewhere. You can find his other work at sheonhan.net and on Twitter @sheonhan.

Published November 2024
