The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally. You would think, then, that they’d be paragons of critical thinking and skepticism — or at least that they wouldn’t wind up summoning demons.
And yet, the rationalist community has hosted perhaps half a dozen small groups with very strange beliefs (including two separate groups that wound up interacting with demons). Some — which I won't name in this article for privacy reasons — seem to have caused no harm but bad takes. But the most famous, a loose group of vegan anarchist transhumanists nicknamed the Zizians, have been linked to six violent deaths. Other groups, while less violent, have left a trail of trauma in their wake. One is Black Lotus, a Burning Man camp led by alleged rapist Brent Dill, which developed a metaphysical system based on the tabletop roleplaying game Mage: The Ascension. Another is Leverage Research, an independent research organization that got sucked into the occult and wound up as Workplace Harassment With New Age Characteristics.
For this article, I spoke to ten people who were associated with various rationalist-adjacent groups, including Black Lotus, Leverage Research, and the Zizians. I also spoke with people who were familiar with the early development of the rationalist community. I myself am a rationalist, and the rationalist community is closely knit; my interviewees included exes, former clients, and the dad of my kid’s best friend. I am close to my subject in a way most journalists aren’t. At the same time, I got an unprecedented level of access and honesty from members of a community that is often hostile to outsiders.
The problem of young rationalists
The rationalist community as a whole is remarkably functional. Like any subculture, it is rife with gossip, personality conflicts, and drama that is utterly incomprehensible to outsiders. But overall, the community’s activities are less drinking the Kool-Aid and more mutual support and vegan-inclusive summer barbeques.
Nevertheless, some groups within the community have wound up wildly dysfunctional, a term I'm using to sidestep definitional arguments about what is and isn't a cult. And some of the blame can be put on the rationalist community's marketing.
The Sequences make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
This is, not to put too fine a point on it, not true.
Multiple interviewees remarked that the Sequences create the raw material for a cult. To his credit, their author, Eliezer Yudkowsky, shows little interest in running one. He has consistently been distant from and uninvolved in rationalist community-building efforts, from Benton House (the first rationalist group house) to today’s Lightcone Infrastructure (which hosts LessWrong, an online forum, and Lighthaven, a conference center). He surrounds himself with people who disagree with him, discourages social isolation, and rarely directs his fans to do anything other than read his BDSM-themed fanfiction.
But people who are drawn to the rationalist community by the Sequences often want to be in a cult. To be sure, no one wants to be exploited or traumatized. But they want some trustworthy authority to change the way they think until they become perfect, and then to assign them to their role in the grand plan to save humanity. They’re disappointed to discover a community made of mere mortals, with no brain tricks you can’t get from Statistics 101 and a good CBT workbook, whose approach to world problems involves a lot fewer grand plans and a lot more muddling through.
Black Lotus is a case in point. The camp used a number of shared frameworks, including the roleplaying game Mage: The Ascension, that were supposed to let members cut through social norms and exercise true agency over their lives. Brent supposedly had the deepest insight into these frameworks, and so had a lot of control over the members of Black Lotus — control he proved unable to use wisely.
However, without Brent, Black Lotus might have been fine. One interviewee said that, when Brent wasn't around, Black Lotus led to beautiful peak experiences that he still cherishes: "Brent surrounded himself with people who built the thing he yearned for, missed, and couldn't have."
But in other cases — as in Leverage Research — the toxic dynamics emerged from the bottom up. Interviewees with experience at Leverage Research were clear that there was no single wrongdoer. Leverage was fractured into many smaller research groups, which did everything from writing articles about the grand scope of human history to incubating a cryptocurrency. Some research groups stayed basically normal to the end; others spiraled into self-perpetuating cycles of abuse. In those research groups, everyone was a victim and everyone was a perpetrator. The trainer who broke you down in a marathon six-hour debugging session was herself unable to sleep because of the panic attacks caused by the debugging sessions she had been put through.
Worse, the promise of the Sequences is more appealing to people who have very serious life problems they desperately need to solve. While some members of dysfunctional rationalist groups are rich, stable, and as neurotypical as rationalists ever get, most are in precarious life positions: mentally ill (sometimes severely), traumatized, survivors of abuse, unemployed, barely able to scrape together enough money to find a place to sleep at night in the notoriously high-rent Bay Area. Members of dysfunctional rationalist groups are particularly likely to be transgender: transgender people are often cut off by their families and may have a difficult time finding friends who accept them as they are. The dysfunctional group can feel like a safe haven from the transphobic world.
People in vulnerable positions are both more likely to wind up mistreated and less likely to be able to leave. Elizabeth Van Nostrand, who knows many members of dysfunctional groups both rationalist and non-rationalist, said, "I know people who've had very good experiences in organizations where other people had very bad ones. Sometimes different people come out of the same group with very different experiences, and one of the major differences is whether they feel secure enough to push back or leave if they need to. There isn't a substitute for a good BATNA." (A BATNA, in negotiation jargon, is your best alternative to a negotiated agreement: the option you fall back on if you walk away.)
Still, vulnerability alone can't explain why some members of the rationalist community end up in abusive groups. Mike Blume was a member of Benton House, which was intended to recruit talented young rationalists. He said, "I was totally coming out of a super depressive and dysfunctional phase in my life, and this was a big upswing in my mood and ability to do things. We were doing something really important. In retrospect, I feel like this is the sort of thing you can't do forever. You burn out on it eventually. But I would wake up in the morning and I'd be a little bit tired and trying to get out of bed and I'd be like, well, you know, the lightcone depends on me getting out of bed and going to sleep and learning how to program. So I'd better get on that."
Mike Blume was depressed, lonely, and unemployed when he entered the rationalist community, and he sincerely believed in both the art of rationality and the importance of the cause. The difference wasn’t his vulnerability. The difference was that his community helped him use his idealism and belief in the cause to learn real skills, become less depressed, and get to a more stable place.
One interviewee observed that the early rationalist community had been more supportive of less functional rationalists, perhaps because it was smaller. While it wasn't capable of transforming them into a superhumanly rational elite (no one can do that), it helped them learn useful skills and become independent. This interviewee said that, once the early rationalists became functional, they pulled the ladder up behind them. They (understandably) only wanted to hang out with people who already had their shit together. But without the support of more successful people, less functional new rationalists can be easy prey for anyone willing to offer help.
I’m not sure I agree. The early rationalist community had a number of success stories; it also had a guy that multiple people referred to, sighed, and said “that wasn’t a cult, he just did too many whippets.” The rationalist community I see provides a lot of support to many people who are neurodivergent, traumatized, or transgender; it also fails a lot of people.
Taking ideas seriously
When discussing dysfunctional or abusive groups, many academics treat their beliefs as secondary. The real coercive control comes from threats or sleep deprivation, social isolation or the manipulations of the leader; the beliefs are so many rationalizations. Most of my interviewees took great pains to say that this wasn’t true for them. While the problems were certainly exacerbated by social isolation and other interpersonal dynamics, the core reason for the dysfunctionality was the beliefs.
It’s difficult to understand the internal dynamics of the Zizians. They don’t have former members, and members tend to isolate themselves from their former friends. So anything I say about them is inherently speculative.
However, my interviewees and my own experiences are unequivocal: when the Zizians stabbed their landlord with a katana or got in a shootout with the cops, it really was because they believed in an obscure version of decision theory which held that you should always escalate when threatened. Social isolation likely made these beliefs seem more plausible, and their meditation techniques — based on the belief that half of the brain can fall asleep while the other half remains awake, known as "hemispheric sleep" — likely made them less able to reason clearly.
But, long before weapons and violence were brought into the mix, Zizians seriously and coherently argued for their unusual version of decision theory. Their actions derived directly from beliefs they had held for years. And several Zizians behaved violently without any opportunity for anyone to exercise coercive control over them. Most notably, Maximilian Snyder allegedly murdered Curtis Lind, who was the primary witness in the case against several Zizians. Before he was arrested, Snyder seems to have had little contact with Ziz other than reading her blog.
Leverage Research put a lot of work into developing a grand unified theory of psychology. What they called "Connection Theory" purported to explain how people's goals, beliefs, and other mental processes interrelated. "Charting" — diagramming how an individual's mental processes are connected — would allow people to understand themselves and resolve their mental problems. Charting was combined with "belief reporting": setting a firm intention to tell the truth and then saying what you 'really' believe.
People would try to act in accordance with the artificially simple model of the mind created by these techniques, which could in itself cause mental health problems. For example, someone who belief-reported that they didn’t love their partner could wind up breaking up with their partner, even if in reality it just reflected transient irritation. But Leverage soon noticed that Connection Theory wasn’t a complete explanation of the mind. They began to explore alternate models, such as bodywork.
Through their explorations of bodywork and other "woo" practices, Leverage researchers began to believe that mental processes can interact and pass from mind to mind through microexpressions and other subconscious signals. Leverage researchers initially used "demons" as a metaphor for these mental processes, but over time many came to believe that occult practices were a real and powerful way to manipulate the subconscious. Occult beliefs pervaded many research groups. Routine tasks, such as deciding whose turn it was to pick up the groceries, required working around other people's beliefs in demons, magic, and other paranormal phenomena. Eventually these beliefs collided with preexisting social conflict, and Leverage broke apart into factions that fought one another through occult rituals.
Similarly, Brent promoted a cynical vision of the world, in which humans were inherently bad and selfish, engaged in constant deception of themselves and others. He taught that people make all their choices for hidden reasons: men, mostly to get sex; women, mostly to get resources for their children. Whenever someone outside Black Lotus made a choice, Brent would dissect it to reveal the hidden selfish motivations. Whenever someone outside Black Lotus was well-respected or popular, Brent would point out (or make up) ways that they were exploiting those weaker than them for their own benefit. This became a self-fulfilling prophecy. One interviewee identified this ideology as the worst harm of Black Lotus, more than sexual boundary violations or being coerced into taking drugs: it took them a long time to rebuild their ability to trust and have faith in other people.
Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger. But another potential failure is that, knowing both that your society is wrong and that you can’t do better, you start looking for someone even more right. Paradoxically, the desire to ignore the experts can make rationalists more vulnerable to a charismatic leader.
Or, as Jessica Taylor said, “They do outsource their thinking to others, but not to the typical authorities.”
In and of itself, that dynamic is bad but not necessarily seriously so. Many effective altruists (members of a community closely linked to the rationalist community) similarly defer to more experienced effective altruists. While effective altruists have widely critiqued this habit, it results only in poorly thought out viewpoints about charity evaluation, not in violent crime. But bad actors can easily take advantage of the desire to find someone to think for you. If you're using neither the normal error-checking processes of society nor your own mind, how can you tell if someone is wrong?
A rationalist virtue is "taking ideas seriously": when you are convinced of a belief, you notice all its implications and act on them. Another rationalist virtue is "agency": taking actions to pursue your goals, even if the actions are unusual or break the rules. Together, these are dangerous. We know what happens if you do normal things; an enormous number of normal people have already found that out for us.
But if you're using your decision theory to make decisions and not just to get a PhD in mathematics, it really matters that you chose the right one. If you're using your all-encompassing theory of human psychology to decide how to treat people, it really matters that you actually understand it. And all of these are more dangerous if, instead of following reasoning you understand, you're deferring to the judgment of someone else who seems like they've thought about it a lot.
Agency and taking ideas seriously aren't bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying that masks didn't work and that only hypochondriacs worried about COVID; rationalists were some of the first people to warn about the threat of artificial intelligence. If you want better outcomes than normal people, then you also need to do something different from what normal people do. But diverging from the norm is risky, and those risks have too often not been taken seriously.
Beware psychology
The most interesting question I asked my interviewees was “what was your day-to-day life in the group like?”
Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.” “Like somebody's going through a real bad breakup and you need to, like, hold their hair out of the toilet over text, tell them that it's going to be okay and help them put their life back together, except for years.”
Conversely, when I asked the same question of interviewees from high-demand groups that wound up basically functional, like Benton House, they gave answers like: "I learned to drive, which was really important. I got my first programming job. We played calibration games and a rationalist version of Family Feud called Schelling Point." "I wrote LessWrong posts and started a rationalist fiction novel, helped someone immigrate, and once scrambled like thirty eggs in a wok."
One Black Lotus member wanted to emphasize to me that Black Lotus wasn’t all bad. The abuse was real. But he had been stuck in a dead-end job for years when Brent Dill looked at him and said “you’re smart, you can be in charge of build for my Burning Man camp.” Suddenly, he was putting in sixteen-hour days running a team of a dozen people, and he was good at it. He realized that he could manage people, troubleshoot problems, and build something he was proud of. He felt capable in a way he never had before.
Similarly, interviewees who knew about Leverage Research emphasized that some research groups were much more dysfunctional than others. Research groups that focused on real-world goals — like incubating the cryptocurrency Reserve — tended to be more functional. On the other hand, research groups that focused on ten-hour debugging sessions using Connection Theory were recognized even within the group as being cultier.
The rationalist organization the Center for Applied Rationality (CFAR) seems at first blush like an exception to this pattern. It develops curricula and teaches workshops about how to think more rationally, but — while in the past it occasionally blurred the line between rationality curriculum development, people management, and therapy — it doesn't seem to have approached the level of Leverage Research. CFAR mostly tests its newly developed rationality techniques on outside volunteers, for about an hour each; it doesn't do eight-hour debugging sessions. It also strives to be as empirical as it can: if a technique seems to make workshop participants happier and better off, it stays; otherwise, it is removed.
Rationalist groups tend to be functional to the extent that their activities involve learning to program, writing papers for general publication, building a giant dome on the playa, or otherwise interacting with the real world. Rationalist groups tend to be dysfunctional to the extent that their activities involve very long conversations about human psychology and social dynamics, especially dynamics within the group itself. Relatedly, the clearest-cut cases of rationalists being right have involved external events in the world and not the nature of human beings.
Diving deep into human psychology isn't an unusual pattern for the Bay Area: from Esalen to Circling, many groups outside the rationalist community become obsessed with the inner workings of the mind and how they manifest in the group dynamic. But I think this obsession tends to inflame a number of harmful dynamics. Such conversations, of course, offer a lot of opportunities for bad actors to manipulate group members, but even if everyone involved is well-intentioned they end badly. There is no conflict so small that five hours straight of emotional processing can't make it feel both profoundly important and profoundly unsolvable. This kind of conversation leads to guilt and shame when you don't measure up; it leads to paranoia and isolation when outsiders don't. And without the steadying influence of some kind of external goal you either achieve or don't achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you're going to act on them.
One interviewee said, “One kind of cult you can have is when you and ten of your closest friends all live in a house together and you have the blackout curtains drawn and a lot of MDMA, and you sit around and talk about the implications of the whatever.” The rationalist community keeps spawning groups like this. Most are nothing but a (possibly fun) waste of time. But when the conversations become fraught and obsessively inward-facing, it can spawn Leverage Research, or Black Lotus, or the Zizians.
Unlike with agency or taking ideas seriously, I feel no compunction about warning against very long, stressful conversations alternating between broader theories of human psychology and specific processing of the group members' emotions. That kind of conversation is neither fun nor productive. If you're having a tense conversation about feelings for ten hours, you might not be in a cult, but something has gone wrong.
Consequentialism
Several interviewees identified consequentialism as one of the riskier ideas to take seriously.
Brent Dill convinced some people that he was an extraordinary genius who would be capable of fantastic achievements, just as soon as he stopped being depressed. Therefore, from a consequentialist perspective, you should focus your effort on fixing his depression, no matter how much time or money or emotional energy it takes (and if you could throw your vagina into the bargain that would help too). The costs to you, no matter how large, are outweighed by the benefit a non-depressed Brent could bring to the world.
Several interviewees noted the particular risk of appeals to overwhelming stakes, which can justify overriding ordinary harms. "The issue is that 'something, something dead babies' justifies an awful lot," said one interviewee. The stakes that rationalists tend to be most worried about involve AI. Many rationalists believe that an artificial general intelligence (AGI) will be developed very soon: for example, a mostly-rationalist team wrote the forecast AI 2027. Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct? Unfortunately, many people simply don't have the skills to become AI policymakers or technical AI safety researchers. The vulnerable people particularly attracted by the Sequences are especially likely to lack the work ethic or mental health needed to contribute.
What do you do? You attach yourself to any project that seems like it might be able to help. The lucky ones accumulate a stack of rejections, or aimlessly drift from small grant to small grant and from badly-run nonprofit to badly-run nonprofit. The unlucky ones get sucked into a dysfunctional group.
Many people keep having the tense psychology conversations I described above because they think AI is important and feel that, right now, they can't do anything to help. If you're acutely aware that your ADHD or your trauma, your depression or your subclinical procrastination tendencies, are keeping you from preventing human extinction, then you'd do anything to fix it.
The overwhelming stakes of AGI can lead to a dangerous sense of grandiosity. “It’s a story in which they matter and in which it is justified for them to do weird stuff and stand up for themselves,” said an interviewee familiar with the Zizians. “Every action has great meaning, and that hooks into people in two ways. One of which is that it's empowering, and the other of which is that it's a great trigger for becoming obsessed with whether you're a bad person.”
He continued, “It makes it easy for small things to seem very big. And I think it also makes it easy for big things to seem sort of the same size as small things. When you get pulled over and then you get in a gunfight with the cops or whatever, this is the same level of treating the situation like it is anomalous or a big deal as having an argument about who washed the dishes.”
Early rationalist writing, such as the Sequences and the Harry Potter fanfiction Harry Potter and the Methods of Rationality, emphasized the lone hero, standing defiantly against an uncaring world. But the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts. Most rationalists — like Mike Blume — adjust well to the normalcy of world-saving. (Today he works as a programmer and donates to AI safety and global health nonprofits, a common rationalist career trajectory.) Others want acts of heroism as grand as the threat they face.
The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
The project itself in each of these dysfunctional groups was vague, free-floating, and almost magical. As soon as you have to accomplish specific goals in the real world, the mundanity of everyday life comes flooding in, with its endless slog of tasks variously boring, frustrating, and annoying. But as long as you’re sitting in a room taking LSD with the blackout curtains over the windows, you can be Superman.
Beware isolation
When asked how people could tell that their project wasn’t a cult, one interviewee said, “You leave the house regularly. Or if there's an office, you leave both the office and the house regularly.”
A recurring theme in my interviews was social isolation and groupthink: speaking primarily or only to people inside the group; rejecting dissidents or outsiders as unenlightened, as ignorant, as evil. Several people remarked that the main sign that someone had become a Zizian was that they disappeared from the Internet for a few months, and the next thing you heard of them was a news article reporting that they were wanted for murder.
Interviewees particularly emphasized the epistemic problems created by social isolation. People tend to check in with the people around them to see whether their actions or beliefs are reasonable. But if you want to trick someone into believing absolute nonsense, you can introduce them to four people who already believe it and keep them away from critics. Then it seems like “everyone” agrees.
Sometimes this dynamic was created deliberately. If one of his followers objected to his behavior, Brent Dill would ask a few of his other followers to talk to them about it. Even if three of them told Brent Dill he was in the wrong, the fourth might agree — and that was the one Brent sent to talk to the dissident.
Other times, the dynamic occurred more-or-less by accident. Researchers at Leverage all lived together, which seemed like a reasonable cost-saving measure. Researchers typically socialized and dated within the group. Leverage took care of many of its researchers’ needs: meals, transportation, even handling their mail. When members had mental health crises due to unpacking their trauma in debugging sessions, Leverage stepped in to make sure their needs were met. Although apparently compassionate, this pattern meant that many researchers got everything from Leverage: sense of purpose and meaning, work, therapy, friendship, romance, food, housing, and routine life maintenance. Connections to the outside world often faded. Some researchers, after they left, found themselves helpless in the wider world.
Sometimes, the broader rationalist community played into this isolation. Members of organizations that were on the outs among rationalists, including Leverage Research, had trouble talking about their work at parties. Instead of normal small talk, they found themselves having to justify or apologize for their group. Naturally, they wound up going to fewer parties — and losing contact with the broader rationalist community.
Leverage Research didn’t like firing people. When a researcher was a poor fit, or the program they’d originally joined closed, they were shuffled from group to group. That explains some of the wildly different experiences people had at Leverage: Leverage didn’t expel people who really shouldn’t have been there, either because Connection Theory was harming them or because they were hurting others.
The isolation and groupthink were reinforced by a culture of secrecy. The norm was that individual research groups kept their research secret, even from other research groups within Leverage. That meant that the more functional research groups couldn't provide a check on the less functional ones. Without much ability to learn what other research groups were actually doing, dysfunctional research groups became paranoid about (sometimes demon-related) sabotage — further isolating themselves from any kind of outside influence.
Conclusion
One of my interviewees speculated that rationalists aren't actually any more dysfunctional than any other community; we're just more interestingly dysfunctional. Dysfunctional workplaces, rape, abuse, and even murder aren't unusual. People are more interested in rape or murder when it has a complicated and unusual philosophical justification, but we shouldn't confuse being more fun to talk about with being more common.
Bearing that possibility in mind, what conclusions does my project suggest?
To some extent, dysfunctional rationalist groups are a product of features of rationalist culture that rationalists would rather not give up. For better or worse, rationalists want to act on their beliefs, and they want to do things differently from the society around them. But I believe it is possible to reduce the number of dysfunctional groups and how damaging they are.
For individuals concerned about joining a dysfunctional group, I suggest:
1. If your relationship with a person or a group consists mostly of discussions about human psychology and social dynamics, that is a yellow flag. It is a red flag if:
a. The conversations are many hours long and displace other activities in your life.
b. The conversations generally focus on group members’ feelings and interactions, rather than social science or philosophy.
c. The conversations are tense, stressful, and not fun.
2. Try to do things where there is some external gauge of whether you’re succeeding at them or not. Similarly, try to test your beliefs against reality.
3. Maintain multiple social groups. Ideally, have several friends who aren’t rationalists.
4. Keep your work, housing, and therapy separate.
5. Before assuming that something is true because everyone else seems to believe it, either check the argument yourself or talk to someone outside the friend group who wasn't introduced to you by anyone in the group.
6. The longer and more abstract a chain of reasoning is, the less you should believe it. This is particularly true if the chain of reasoning concludes:
a. That it’s okay to do something that hurts people.
b. That something is of such overwhelming importance that nothing else matters.
7. Be cautious about joining high-demand groups if you have nowhere else to go.
My recommendations for the general rationalist community are:
1. If someone is in a group that is heading towards dysfunctionality, try to maintain your relationship with them; don't attack them or make them defend the group. Let them have normal conversations with you.
2. Think carefully about how we can create more realistic expectations in new rationalists and whether it would be possible to offer more of them the support they need to become useful community members.
3. Give people realistic expectations about whether they will ever be able to get a job in AI safety. Don't act like people are uncool or their lives are pointless if they can't work in AI safety.
4. Consider talking about "ethical injunctions": things you shouldn't do even if you have a really good argument that you should do them. (Like murder.)