On April 16, 2024, the website of the Future of Humanity Institute was replaced by a simple landing page and a four-paragraph statement. The institute had closed down, the statement explained, after 19 years. It briefly sketched the institute’s history, appraised its record, and referred to “increasing administrative headwinds” blowing from the University of Oxford’s Faculty of Philosophy, in which it had been housed.
Thus died one of the quirkiest and most ambitious academic institutes in the world. FHI’s mission had been to study humanity’s big-picture questions: our direst perils, our range of potential destinies, our unknown unknowns. Its researchers were among the first to usher concepts such as superintelligent AI into academic journals and onto bestseller lists alike, and to speak about them before such bodies as the United Nations.
To its many fans, the closure of FHI was startling. This group of polymaths and eccentrics, led by the visionary philosopher Nick Bostrom, had seeded entire fields of study, alerted the world to grave dangers, and made academia’s boldest attempts to see into the far future. But not everyone agreed with its prognostications. And, among insiders, the institute was regarded as needlessly difficult to deal with — perhaps to its own ruin. In fact, the one thing FHI had not foreseen, its detractors quipped, was its own demise.
Why would the university shutter such an influential institute? And, to invert the organization’s forward-looking mission: how should we — and our descendants — look back on FHI? For an institute that set out to find answers, FHI left the curious with a lot of questions.
***
In 1989, a seventeen-year-old Swede named Niklas Boström (he would later anglicize it to Nick Bostrom) borrowed a library book of 19th-century German philosophy, took it to a favorite forest clearing, and experienced what The New Yorker would later describe as “a euphoric insight into the possibilities of learning and achievement.” Damascene moments aren’t generally how people decide to become academics, but from that day forward, Bostrom dedicated his life to intensive study. He withdrew from school in order to take his exams at home, and he read widely and manically. At the University of Gothenburg, he received a BA in philosophy, mathematics, mathematical logic, and artificial intelligence. After that, he pursued postgraduate degrees in philosophy, physics, and computational neuroscience. In what little spare time he had, Bostrom emailed and met up with fellow transhumanists: people enthusiastic about radically improving human biology and lifespans.
As early as 2001, he was studying little-known phenomena called “existential risks,” writing that nanotechnology and machine intelligence could one day interfere with our species’ ascent to a transhuman future. Around the same time, he formulated his “simulation argument,” first circulated in 2001 and published in Philosophical Quarterly in 2003, which contends that we may well be living in a computer simulation run by humanity’s hyper-intelligent descendants.
By this point, Bostrom had arrived at Oxford as a postdoctoral fellow at the Faculty of Philosophy.
Some years later, the faculty would become his bête noire. But it was Bostrom’s membership in it that enabled a stroke of luck that would change his life. At some point in the early aughts, Bostrom had met James Martin, an IT entrepreneur who had become a prescient futurist (in 2006, Martin would produce a documentary featuring Bostrom). Martin was also becoming a deep-pocketed philanthropist. Through Julian Savulescu, another young philosopher interested in human enhancement, Bostrom learnt that Martin was planning to fund future-minded research at Oxford. Hoping that this could encompass work on his interests, Bostrom made his case to the university’s development office.
Twenty years later, the details are hazy. FHI lore has it that, at one of the dinners hosted by Oxford for its biggest donors, Bostrom was seated next to Martin, creating the perfect conditions for what we now call a nerd snipe. Some time later, in 2005, Martin made what was then the biggest benefaction to the University of Oxford in its nine-century history, totaling over £70 million. A small portion of it funded what Bostrom decided to call the Future of Humanity Institute. “It was a little space,” Bostrom told me, “where one could focus full-time on these big-picture questions.”
That seed grant was enough to fund a few people for three years. Because his team would be small, and because it had such an unconventional brief, Bostrom needed to find multidisciplinarians. He was looking, he told me, for “brainpower especially, and then also a willingness and ability to work in areas where there is not yet a very clear methodology or a clear paradigm.” And it would help to be a polymath.
One of his earliest hires was Anders Sandberg. As well as being a fellow Swede, Sandberg was a fellow member of the Extropians, an online transhumanist community that Bostrom had joined in the Nineties. Where Bostrom is generally ultra-serious, Sandberg is ebullient and whimsical. (He once authored a paper outlining what would happen if the Earth turned into a giant pile of blueberries.) But the two men’s differences in personality belied their similarity in outlook. Sandberg, too, was an unorthodox thinker interested in transhumanism and artificial intelligence, and was particularly drawn to the theoretical practice of whole-brain emulation, i.e. the uploading of a human mind to a digital substrate.
Sandberg was interviewed in the Faculty of Philosophy’s Ryle Room, named for the philosopher Gilbert Ryle. He explained some neuroscience to the faculty staff who were assessing him and demonstrated his aptitude in another little-known area of human endeavor: web design. He was hired, and he returned in January 2006 to take up a desk at FHI and a “silly little room in Derek’s house.”
Sandberg was lodging, with Bostrom, in the home of Derek Parfit, a wild-haired recluse who was also one of the most influential moral philosophers of the modern era. Bostrom had the master bedroom and collected rent from the rotating cast of lodgers.
Parfit, Sandberg recalled, slept in “a little cubby hole” of a bedroom, and would scuttle at odd hours between it and his office at All Souls, the highly selective graduate college seen as elite even relative to the rest of Oxford.
Including Sandberg, Bostrom hired three researchers, and began to sculpt a research agenda that, in these early years, was primarily concerned with the ethics of human enhancement. An EU-funded project on cognitive enhancement was one of FHI’s main focuses in this period. The institute also organized a workshop that resulted in Sandberg and Bostrom’s influential roadmap for making whole-brain emulation feasible.
At the same time, FHI staff were beginning to publish work on the gravest perils facing humanity, a topic that was not yet an established academic discipline. An FHI workshop brought together hitherto disparate thinkers such as Eliezer Yudkowsky, who went on to become one of the most prominent theorists concerned by superintelligent AI. Bostrom co-edited the 2008 book Global Catastrophic Risks, a collection of essays on threats such as asteroid impacts, nuclear war, and advanced nanotechnology.
But it wasn’t exactly a plush gig. Those who worked for FHI had to accept temporary contracts and an ugly office building, Littlegate House, that bore scant resemblance to the beautiful quadrangles that were its near-neighbors. (Jaan Tallinn, an occasional visitor, is said to have joked that the office’s windowlessness was designed to reduce the processing power needed for humanity’s descendants to simulate FHI.) In return for his staff’s forbearance, Bostrom tried to lay over them a carapace that would shield them from many of the more draining quotidian demands of academic life. There were no requirements to teach, and, as time went by, decreasing pressure to publish via traditional academic modes. Explaining this ethos to me, Bostrom compared his staff to gems so brilliant that a jeweler would want to create a bespoke setting for them. He wanted “to pick these jewels, and then create a kind of organizational fixture around them that would let them scintillate and do their thing with the smallest possible number of distractions.”
Outside the carapace, the world was changing. It was the early 2010s, and the then-current “AI winter,” as the field’s periods of moribundity are known, was easing. FHI staff, Bostrom in particular, began to spend more time considering the risks and opportunities that might result from the era’s advances in technology. Bostrom began writing a book of his own on catastrophic risks, and the work on human enhancement was largely wound down. “This was a fairly typical approach for FHI,” Sandberg wrote in his retrospective on the institute. Its modus operandi was to find a neglected topic deserving of research before “germinating it in the sheltered FHI greenhouse, showing that progress could be made; coalescing a field and setting research directions; attracting bright minds to it; and once it’s established enough, setting it free, and moving onto the next seedlings.”
AI risk and governance in particular were the topics that most quickly branched out — and with them FHI itself. Its senior staff now included not only Sandberg but also Toby Ord, a computer scientist turned philosopher. Ord was one of the key figures in the founding of Effective Altruism, the movement that attempts to quantify the impact of altruistic efforts and to maximize it. He had pledged to give away everything he earned above a modest allowance, and co-founded a charity, Giving What We Can, whose members commit to giving 10% of their earnings to charities they deem to be effective. Another senior researcher was Eric Drexler, an engineer and futurist who has been called in Wired “the undisputed godfather of nanotechnology.”
Together with Sandberg, they helped set the institute’s tone: eclectic, curious, and unabashedly vigorous in pursuing questions other academics would not touch. Yet even in this talented company, Bostrom was deeply respected. “From the outside,” said a former colleague, “I wouldn’t have been able to see the difference between Nick and the other researchers. It’s only when you watch them in discussion that you see it. Oh my God … the long tail of intelligence really is long.”
Early in the 2010s, FHI had been moved downstairs to a bigger set of rooms within Littlegate House. The original premises had whiteboards, but Sandberg had insisted on more, and so the new office had a central room encircled by them. It was nicknamed the “whiteboard panopticon.” Here, FHI staff would sketch out ideas, from potential solutions to AI safety (one researcher was briefly overjoyed when he mistakenly believed he had solved the problem); to a prediction of what interstellar war would involve; to a consideration of what music would be like in a world with more than one time dimension. On another whiteboard, Bostrom maintained a scorecard for FHI. Since Bostrom often worked at night, the number would be updated when nobody else was around. Over time, the number crept upward, though it would fall when there were “setbacks.” Bostrom, who can call upon a dry wit when he chooses to, told me that the precise workings of the metric were “shrouded in mystery.”
One imagines that the number moved upward frequently from 2014 onward. A crucial factor in FHI’s ascent to prominence was its earlier decision to focus more intellectual energy on AI risk. One chapter in particular from Bostrom’s book on catastrophic risks had taken on a mind of its own, so to speak, becoming the sole focus of the project. The resulting book, the bestselling Superintelligence, warned humanity that the creation of superintelligent AI is likely to be our final act — for good or ill. Thanks in part to Bostrom’s knack for storytelling — this is also the man who gave us, in 2003, the fable of the paperclip maximizer — the book was a wild success. Elon Musk, who would soon become a donor to FHI, publicly praised it, as did Bill Gates. (Musk’s donation was among the first major sources of funding for work on AI safety, though his enthusiasm would later become a reputational hazard for FHI.)
Parfit was said to have regarded the book as a “work of importance,” and Sam Altman, then a 29-year-old who had just been put in charge of Y Combinator, wrote that Superintelligence was “the best thing I’ve seen” on AI risk. In October 2015, Bostrom briefed a United Nations committee on the dangers posed by future technologies.
Littlegate House now felt like one of the brightest intellectual scenes in the world. The work on AI was more than talk: FHI researchers were among the first people anywhere to do empirical, rather than just theoretical, work on the problem of AI alignment. After a period at FHI, Jan Leike helped create the method we now know as reinforcement learning from human feedback, which today undergirds virtually every major large language model. (Formerly head of alignment at OpenAI, Leike is now at Anthropic.) With two research scholars, the alignment specialist Owain Evans helped create an influential benchmark of AI truthfulness that is still used by major developers. And Katja Grace, with Evans and others, began the project that became AI Impacts, which gathers and synthesizes experts’ views on what we can expect from AI development, providing useful data for decision-makers.
By the mid-2010s, FHI was attracting visits not only from technology heavyweights such as Demis Hassabis (co-founder of DeepMind) and Vitalik Buterin (creator of Ethereum), but also from the mainstream media. Bostrom was garlanded with a New Yorker profile by Raffi Khatchadourian, who found the FHI office to be “part physics lab, part college dorm room,” noting posters of the film Brave New World and of HAL 9000, the computer that goes rogue in 2001: A Space Odyssey. There were split keyboards, homemade keyboards, Dvorak keyboards. Embedded in furniture were loose Nerf gun pellets, the remnants of a day on which the young daughter of Stuart Armstrong, an AI safety specialist, had gone hunting for her father’s fellow researchers.
At FHI, Bostrom’s ludic side was less visible. Arriving at FHI in the afternoon, Bostrom would work into the small hours. But first he could be spotted in the kitchen, where he would put together the vegetable-based smoothie that he wryly called his “elixir.”
This was often the only time that staff would bother him before he disappeared into his office. Knowing that Bostrom liked to descend deep into the halls of concentration, staff would seldom disturb him. Tanya Singh, whose five-year stint at FHI encompassed several senior operations roles, as well as periods of being Bostrom’s executive assistant, said she knocked on his door only seven or eight times.
On the rare occasions that she entered his brightly lit room, she would find Bostrom sitting and thinking in near-perfect stillness. “There was a palpable intensity in that stillness,” she said. “I have never seen anything like it. You could drop something next to him — a bomb could go off — and he wouldn’t move, he wouldn’t register it at all.” Bostrom was as protective of his own freedom to sit and think undisturbed as he was of his staff’s. It was a carapace within a carapace. By all accounts, he spent little time maintaining relations with the Faculty of Philosophy.
For a while, this didn’t seem to matter. FHI’s work was becoming ever more relevant to the outside world, even if it wasn’t much appreciated within Oxford. In March 2020, Ord published The Precipice, a book that examined for a popular audience the existential threats facing humanity: climate change, AI, and another area of increasing interest to FHI, man-made pandemics. Soon after FHI’s biological threats team had begun drawing attention to an epidemic in Wuhan, much of the world was plunged into the first COVID lockdowns. It was a vindication of a sort, if a grim one.
Philanthropic funders admired this and other lines of work on catastrophic risks. Thanks to these funders’ munificence, the institute was expanding. Its administrative duties were taken on by bright young minds who could otherwise have been earning vast corporate salaries or taking on high-status research work at other institutes.
Its Research Scholars Programme, led by the mathematician Owen Cotton-Barratt, was a revolving door of young talent. Other entry points were the DPhil Scholarship and the summer program for undergraduates. “Desks were crammed into every conceivable space with increasing ingenuity,” wrote Sandberg. Everywhere one turned there was a danger of getting nerd sniped — and that held true for the general public too. Multiple staff were invited to present their work to the British parliament; Toby Ord would soon be quoted in the address that Boris Johnson, then the British prime minister, made in 2021 to the UN General Assembly.
But the writing, appropriately enough, was already on the wall. In late 2020, and to the shock of FHI, the university froze its ability to hire. No new staff, no new research scholars. This was one of several measures, including a fundraising freeze, that every former FHI staff member to whom I spoke believed were designed to throttle their work.
If this was bureaucratic animus, as FHI believed it to be, where did it come from? FHI staff, for their part, often expressed exasperation at the delays and paperwork that university membership entailed. In their frustration they developed a unit, “the Oxford,” as a shorthand for the amount of work it takes to read and write 308 emails — the actual administrative effort it took for FHI to have a small grant disbursed into its account.
Bostrom wanted to hire people quickly, work with industry and non-profits, and host conferences without having to route any of this through the university’s bureaucratic machinery. But the Faculty of Philosophy, by Bostrom’s description, had “a very different cultural mindset.” Its attitude to hiring, as Bostrom saw it, was rooted in a culture of teaching the same sort of philosophy — Aristotle, Plato, et al. — for centuries. “‘We have this person who should teach ancient philosophy,’” he said, approximating the faculty view, “‘and then when they retire, 40 years from now, we’ll hire another person to teach ancient philosophy’... whereas our research agenda was very much designed to be flexible.”
To those who take Bostrom’s “astronomical waste” argument seriously, inefficiency could be conceived of as a profound wrong. (In the paper of the same name, Bostrom argues that the future could contain such profound amounts of happiness that any delay constitutes a loss of value — wastage on an astronomical scale, in other words — that “boggles the mind.”) The university, however, did not share that sense of urgency. To them, FHI was less bureaucratically sinned against than it was a bureaucratic sinner. FHI seems to have had a reputation for being difficult to deal with, and — depending on whom you ask — for having management that thought itself above the petty demands of university bureaucracy. “During my time,” a former FHI-er, Seán Ó hÉigeartaigh, wrote on the EA Forum, “FHI constantly incurred heavy costs for being uncooperative.” Its misdeeds, though minor, irritated the university. Staff were reproved for using Gmail instead of Outlook, for traveling without risk assessments, and so on. I was told of a representative incident in which FHI bought a SIM card for a guest, hoping to make the guest’s stay in England more straightforward, but failed to prevent it from being used to make additional purchases. The faculty found itself picking up the bill, and took a dim view of FHI’s carelessness. The pairing was an increasingly unwieldy one: FHI had become larger than the body that housed it, attracted more attention and funding, and employed many more non-philosophers than philosophers.
In all, the faculty did not seem particularly enamored of the institute. “The impression I got,” reported a don from a different department, “was that the philosophers” — i.e. those within the faculty — “didn’t have much regard for it.”
Fundamentally, it seems, there was a mismatch between the way the organizations assessed the value of their work. “The philosophy faculty’s currency is peer-reviewed papers in prestigious journals that get cited a lot,” said Niel Bowerman, assistant director of FHI from 2015 to 2017. “That wasn’t the currency of FHI. The currency was cool ideas that could improve the world.” The mismatch only increased as FHI became more high-profile and started to attract funding with fewer obligations.
Multiple FHI staff told me that relations worsened when a new faculty chair, Chris Timpson, arrived in 2018. (I asked Timpson for an interview, but he chose not to comment.) When I put a detailed set of questions to the university, I was issued with the same vague statement it had released on FHI’s closure.
Bostrom did not want to comment on individuals but told me that he wishes he had “pulled the plug” on FHI in 2019 or 2020, when there were “more rules, dictated limitations, new procedures, everything instituted to throw sand into the gears.”
The eternal bureaucratic logjam was, in the view of FHI staff, having real consequences — and not just in the faraway, astronomical-waste sense. Jan Kulveit, who worked at FHI between 2018 and 2023, had led a COVID-19 forecasting project during the virus’s first wave. He wanted to expand the project to provide medium-range forecasts for the whole world, warning of a potential second wave. The project was offered philanthropic funding, but it turned out that accepting the grant would be too bureaucratically difficult for the university, not least because the money would have been used to hire external software engineers. The expansion didn’t happen.
The situation continued to worsen. FHI asked its biggest donor, Open Philanthropy, to put it to the university’s vice-chancellor that FHI be given more autonomy. This, FHI staff told me, did not go down well with Oxford. Within FHI, there was plenty of frustration with the university, but there was frustration with Bostrom, too. In August 2021, Owen Cotton-Barratt, the architect of the Research Scholars Programme, quit FHI. In his resignation letter, addressed to Bostrom and shared with FHI staff and some allies of the institute, Cotton-Barratt praised Bostrom’s intellectual leadership but criticized his management.
At my request, Cotton-Barratt showed me the letter. Its tone was gentle and conciliatory, but its substance was serious. Bostrom was a bad delegator, Cotton-Barratt wrote, and disinclined to invest in communication with staff. By Cotton-Barratt’s account, Bostrom prioritized his own research over FHI’s relationship with the university, and the institute had suffered as a result. To the dismay of FHI staff, the letter eventually reached the Faculty of Philosophy. (This was not Cotton-Barratt’s doing.) Reflecting on the letter, Cotton-Barratt told me in 2024: “I think Nick did a great job building FHI, and the world has lost somewhere special. I wrote that letter in the hope that it might help FHI to iterate towards the best version of itself.” Bostrom viewed the letter as well-meaning, but “a bit of a facepalm.” It arose, he said, from an effective altruist culture of extreme openness, but it “set us back administratively by about a year.”
After the worst of the pandemic, FHI staff reassembled at new premises. The institute’s home was now Trajan House, an office on the outskirts of Oxford, which FHI was to share with the Centre for Effective Altruism and similarly minded non-profits. There were Huel shakes in the fridge and a nap room that university employees, FHI staff among them, were banned from using. A former CEA employee told me that the university and the Faculty of Philosophy had rarely shown much interest, or pride, in Effective Altruism, despite it being “one of the biggest success stories in applied philosophy in maybe 100 years.” In the view of this employee, the university had “a weird beef that seemed to be motivated by personal grudges.”
FHI’s relocation to EA HQ might not have helped its relations with the university, but it made intellectual sense. When Ord had helped found what we now know as EA, its concern had been poverty in the developing world. As technology such as AI advanced, EA had, like Ord, embraced the idea that the key moral priority of our time is the protection and improvement of the long-run future. This school of thought, longtermism, is indebted to the work of Parfit, the philosopher and economist John Broome, and Bostrom. Close to its heart is the endeavor to reduce existential risk.
There was no more whiteboard panopticon, alas, and the new office rooms, stacked for months with unopened cardboard boxes, weren’t much more spacious than the last ones. Senior staff were seen on-site less often than they used to be, and the hiring freeze, thawed every now and then on a negotiated basis, still constrained FHI to a painful degree. Operations staff worked Herculean shifts; Singh sometimes worked 22-hour days. She left in June 2022, the institute’s headcount having fallen back to where it had been in 2017, when she arrived. But the ascent of Sam Bankman-Fried, an EA crypto mogul who had suddenly become one of the world’s youngest billionaires, seemed an exceptionally promising development.
For a single, delirious summer, it seemed that every other person at Trajan House, including FHI staff, was to-ing and fro-ing between Oxford and the Bahamas, where Bankman-Fried was planning a barrage of longtermist philanthropy. “The mood was pretty buoyant,” recalled Lewis Hammond, an AI researcher who had joined FHI in 2019. Ord, apparently, was one of the few who evinced reluctance to get involved. In October 2022, however, Bankman-Fried’s empire disintegrated. Millions of people had been swindled, and Bankman-Fried’s deceit had tarred his allies: EA, FHI, and the many individuals associated with them. By now, FHI’s seminars and salons were tailing off. When the Stakhanovite Singh departed, the faculty seconded a part-time administrator who covered a fraction of her hours. “It felt like FHI was dying a slow death,” said Hammond. Or, as Singh had it: “death by a thousand paper cuts.”
And the death became a painful one. On January 9, 2023, Sandberg posted to Twitter a document written by Bostrom: “Apology for an Old Email.” A longtermist-turned-critic, Émile P. Torres, had found the old email. Bostrom, informed that the email’s publication was imminent, published it himself, with an apology, and disseminated the document via Sandberg (Bostrom does not use X/Twitter).
The email was sent in 1996, to the Extropians listserv. In a conversation about offensive communication styles, a 23-year-old Bostrom wrote that he had always liked “the uncompromisingly objective way of speaking.” The more counterintuitive and repugnant a formulation, he wrote, the more it appealed to him, assuming it was correct. As an example, he wrote that “Blacks are more stupid than whites,” commenting: “I like that sentence and think it is true.” This did not mean, Bostrom remarked, that he disliked black people and thought it was right that they be treated badly. He nevertheless went on to explain why he believed the statement to be true. Yet the sentence he’d posited, he wrote, would be taken by most people as “I hate those bloody [n-word redacted]!!!!” By Bostrom’s account, he had apologized for the email within 24 hours of sending it.
In his apology, Bostrom wrote: “I completely repudiate this disgusting email.” Explaining his views, he said that “it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity.” Whether there are genetic or epigenetic contributors to differences between groups in cognitive abilities, he wrote, was not his area of expertise. Torres, and other critics, took the email and apology as evidence for their case that longtermism was motivated by eugenics.
The university suspended Bostrom while it and the faculty investigated him. Some FHI staff, along with outsiders, disapproved of the apology. It was “insensitive,” an FHI alumnus told me, and the controversy as a whole — both the original email and the debate over the apology — had further damaged FHI’s reputation. Bostrom, meanwhile, was forbidden to contact his colleagues, or do anything FHI-related, for the duration of his suspension. By now, the existence of FHI was plainly fragile. Bostrom’s staff wanted clarity from the university, but the university, I was told, refused to speak to them: the matter was one for the director — whom the university had exiled. “And contrary to popular conception,” said a former member of FHI staff, “Nick is an extreme stickler for rules.”
The outcome of the investigation, though, was in Bostrom’s favor. “We do not consider you to be a racist or that you hold racist views,” a representative of the university told Bostrom in August 2023, “and we consider that the apology you posted in January 2023 was sincere.” He returned to official duties.
Throughout this era, FHI’s senior staff tried to find a way of rescuing the institute. They discussed the idea of Bostrom retaining directorship of the institute’s research while handing over management to a CEO. They mooted becoming part of one of the university’s constituent colleges, or spinning out of the university altogether. They were even in discussions with the Department of Physics over an interdepartmental transfer. Yet none of these plans became reality. Staff valued their association with Oxford, which made them reluctant to leave the university. As for the interdepartmental transfer, FHI staff suspected that the plan was sabotaged by the Faculty of Philosophy.
“We were in this weird limbo for ages,” Hammond said. But one day in March 2024, while at Trajan House, Hammond was interrupted at his desk by a university IT worker. “FHI is closing down,” he was told. “You’re going to need to give that computer back.” The university was finally swinging the ax. On April 16, 2024, FHI was shuttered.
The stated reason, Bostrom reported, was that the university did not have the operational bandwidth to manage FHI. He told me that he asked whether the email incident had anything to do with the decision, and that the university said it had not. Oxford has not contradicted this account.
In the main, said Hammond, he looks back on his time at FHI with gratitude. Its closure, though, is a memory he recalls less fondly. “I wasn’t even surprised that I was hearing it from the IT guy,” said Hammond. “It felt very emblematic of how things were being communicated and organized at that point.” He and his colleagues left the building, and the university staff forbade the other residents of Trajan House from using the empty room. Bostrom, meanwhile, was abroad, working on a new book in an Alpine chalet. FHI’s final demise, he later told me, came as a relief. “I’d imagined the faculty would, as it were, let us bleed out,” he said. After digesting the news, Bostrom returned to work.
Back in Oxford, Toby Ord organized a pub trip that staff now refer to as FHI’s “wake.” The remaining staff, and some friends of the institute, met at the Holly Bush to mark the end of FHI’s nearly 19-year life. Ord delivered the eulogy. FHI’s death had been “overdetermined,” he said, but the institute had far outlasted the three years for which it had been initially funded. It had achieved a lot, and its offspring were alive and well. Maybe nothing would replace the particular kind of organization that FHI had been, Ord concluded, but maybe that was okay. After the wake, he updated FHI’s website with the statement announcing its closure.
***
FHI had died, but it left many children. The Centre for the Governance of AI, for instance, was spun directly out of FHI. There are now several organizations exploring existential risk, most of which were inspired, to some degree, by Bostrom and his work. FHI helped shape government biosecurity policies; it did foundational work on AI safety; it produced a generation of researchers and leaders who now work at the leading commercial labs, in think-tanks, and in related government agencies. Towards the end, FHI seeded work on the nature and rights of digital minds.
And the institute was wildly philosophically fecund. In his own retrospective, Fin Moorhouse, who worked as a Research Scholar at FHI, enumerates some of the insights and concepts for which the institute’s staff were at least partially responsible: information hazards, existential hope, the unilateralist’s curse, an audacious attempt to dissolve the Fermi paradox, and many others. Appropriately for an institute founded by a transhumanist, FHI is living many afterlives.
What of its staff? Oxford’s “administrative headwinds,” having blown down FHI, have also dispersed the band of researchers that made it famous. Stuart Armstrong is the co-founder and chief mathematician of a start-up that is trying to develop fundamentally safe artificial intelligence. Eric Drexler continues to work on AI governance and strategy. Ord is working on AI governance at the Oxford Martin School — another of James Martin’s philanthropic children. Anders Sandberg is soon to publish Grand Futures, a reputedly gigantic tome in which he will map the physical limits of what advanced civilizations can achieve, and has joined the newly founded Mimir Center for Long Term Futures Research.
When I spoke to Bostrom in 2024, he was midway through the publicity campaign for his own new book, Deep Utopia. In the book, Bostrom considers a world in which the development of superintelligent AI has gone well. Some observers, he told me, have assumed that this means he feels a greater bullishness about humanity’s prospects of surviving and thriving. Alas. “We can see the thing with more clarity now,” said Bostrom, “but there has been no fundamental shift in my thinking.” When he wrote Superintelligence, he said, there seemed an urgent need to explore the risks of advanced AI and to catalyze work that might address those risks. “There seemed less urgency to develop a very granular picture of what the upside could be. And now it seems like time to maybe fill in that other part of the map a bit more.”
Don’t expect to see Bostrom on the golf course. There is no prospect that he will leave his field of study. But Deep Utopia might well be the last of his books; they take years to write, and Bostrom views shorter projects as more appropriate for our current point in history. AGI is coming, and time is short.