The Curious Sci-Fi Beliefs of the AI Tech Elite
Big Tech’s dystopian ideas about the future highlight the fundamental pessimism behind their billion-dollar businesses and peculiar lifestyles. Tom Midlane spoke with Émile P. Torres about the shortsightedness of their increasingly influential outlook.

Elon Musk speaks to delegates at the UK AI Summit at Bletchley Park, November 2023. (Credit: Marcel Grabowski via Flickr.)
Interview by Tom Midlane
It’s a basic shibboleth of the modern Left that Big Tech is deeply dystopian. In their desperation to hoover up every last drop of our data, a handful of firms have constructed a full-blown surveillance apparatus that they continue to spend billions finessing. They have built casinoified apps that have obliterated our attention spans and incentivised the spreading of misinformation at the speed of light — even allegedly helping fuel atrocities like the genocide against the Rohingya in Myanmar.
But what kind of ideas are driving these companies? While their ceaseless thirst to accumulate capital clearly plays a central role, there’s a growing sense that the wackier beliefs of our tech elite are slowly percolating into the mainstream. Succession creator Jesse Armstrong’s recent tech-billionaire satire Mountainhead featured lingo like p(doom) (the likelihood of AI-induced societal collapse) and decels (short for decelerationists, i.e. people who think we should slow the pace of AI development because of the overwhelming risks it presents). So what do the real-life titans of tech truly believe? And what kind of future are they trying to create for all of us?
My guide to this question is the philosopher Émile P. Torres, a postdoctoral researcher at the Inamori International Center for Ethics and Excellence at Case Western Reserve University and author of the book Human Extinction: A History of the Science and Ethics of Annihilation. Together with the computer scientist Dr Timnit Gebru, Torres coined the term ‘TESCREAL’ (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) to describe the ‘bundle’ of ‘overlapping ideologies’ that have won the hearts and minds of many of the most influential figures in the tech world by reframing them as heroic saviours of humanity.
Earlier this year, Torres launched the podcast Dystopia Now with comedian Kate Willett to delve into the figures behind these belief systems. For Tribune, Tom Midlane spoke with Torres about the ironic short-sightedness of this obsession with the far future, the individualism inherent in this grand philosophy, and why Torres stopped believing in it.
What was the impetus behind starting your podcast? And why do you think it’s so important to talk about this set of ideas now?
To destroy the TESCREAL movement! Okay, so that’s a bit too grandiose. But the world is full of bizarre, dangerous ideas, right? It’s full of ideologies, but we don’t waste our time talking about most of them because they’re just not that influential. They don’t have that much power. But that’s not the case with the TESCREAL bundle. Look at the US election: you had Elon Musk running DOGE, a man whom some described as basically the president, and then JD Vance as well, who has extensive connections with Peter Thiel, whom he has described as his mentor.
Okay, let’s start with the T. What is transhumanism?
Transhumanism goes back to the early twentieth century. It was basically eugenics on steroids: the idea that, rather than just perfecting the human species or preventing it from degenerating, we should transcend the human species altogether. Then modern transhumanism emerged in the late 1980s and early 1990s.
One of the main technologies that transhumanists pointed to as potentially enabling us to become posthuman, or to create posthumanity, was AI. If we create an AI that is superintelligent, then we could delegate to it the task of re-engineering humanity, because for transhumanists everything is an engineering problem. In fact, they even refer to the creation of utopia as, quote unquote, ‘paradise engineering’.
The idea is that the creation of AI leading to superintelligence could be a really significant event in human history.
And that’s the singularity? The S of the TESCREAL acronym?
Right. Either this superintelligence enables us to become posthuman and then we enter into utopia as immortal beings who never experience suffering and have access to everything we want because of radical abundance, or maybe the superintelligence becomes so smart that we can’t control it. And if we can’t control it, then maybe it decides to destroy us.
Singularitarianism emerged as a cohesive doctrine in the 1990s and early 2000s. And if you fast-forward to the present, the ideas birthed around that time are still very formative. Within the community, people like OpenAI CEO Sam Altman still talk about the singularity.
The first four letters of the TESCREAL acronym seem to have a lot in common. Tell us a bit about Extropianism and Cosmism; those are two that I suspect most of our readers won’t know much about. I certainly didn’t.
Extropianism was the first organized modern transhumanist movement. It was very libertarian: Ayn Rand’s Atlas Shrugged was on its official reading list. It was also stubbornly pro-technology and very accelerationist; they wanted to build advanced technologies as quickly as possible. Although the movement itself has faded in prominence, its legacy, having established transhumanism within California tech culture, remains with us today.
Cosmism comes in an original form and a modern form. The early version started in the late nineteenth century; it was all about colonising space, and bodily resurrection was a key aspect of it. The modern form was introduced and championed by Ben Goertzel [the CEO and founder of SingularityNET], who popularized the term ‘AGI’, for artificial general intelligence. Cosmism in this modern sense is just transhumanism plus some other ideas. Goertzel said: why stop at re-engineering humanity? Why not re-engineer the universe as a whole? Why not develop what he referred to as ‘scientific future magic’, technologies that would enable us to intervene upon the fundamental fabric of space and time and re-engineer the universe in ways we find desirable? A key aspect of this, of course, is spreading beyond Earth, beyond our solar system, beyond our galaxy and so on. Key aspects of that were then taken up by longtermism.
It sounds like the philosophy of the settling of the West — Manifest Destiny.
Trump himself used that term when describing space colonization.
Can you tell us a bit about the more far-flung ideas of these tech leaders, especially the ones working on AI? It seems like a lot of them have far grander and stranger ambitions than simply making money.
I think that capitalism is a big part of the equation. For people like Mark Zuckerberg and the individuals who run Google and Amazon, a large part of the drive to do what they’re doing comes from their commitment to capitalist ideology. That being said, I think the tech space is really unique, for exactly the reason you were gesturing at: there are these utopian ideologies that provide the other half of the explanatory picture. For example, if you look at why the AI companies came into existence, the reason they were founded in the first place was almost entirely these utopian ideologies, which have a number of really bizarre aspects.
Transhumanists’ ultimate goal is to explore the posthuman realm. They believe there is a vast space of possible modes of being, and that Homo sapiens occupies just a tiny region of that much broader territory. So by developing what they call ‘radical enhancement technologies’, we might be able to modify the human organism in very profound ways and instantiate these posthuman modes of being.
Would that include something like Elon Musk’s brain-computer interface company Neuralink?
Exactly. So that we can think thoughts that we currently cannot think, in the same way that a dog, no matter how clever or well trained, is never going to grasp the concept of an electron. There are also transhumanists out there who are excited by the possibility of ‘expanding our sensorium’, the collection of different senses that we have. They literally say: ‘How cool would it be if we could navigate the world with echolocation like bats, or magnetoreception like homing pigeons?’ Another key aspect is life extension, which individuals like [Palantir co-founder and chairman] Peter Thiel have expressed interest in. Then there’s someone like Bryan Johnson…
He’s the founder of Kernel, the company that makes devices to monitor and record brain activity.
And also a biohacker who is trying to live forever, who was experimenting on himself and getting blood plasma transfusions from his 17-year-old son. He also very carefully measures the number of night-time erections he gets. There are also many people in the community who have signed up with cryonics companies to have their bodies, or just their heads, cryogenically frozen after they die. I mean, a huge number.
Wasn’t OpenAI’s CEO Sam Altman one of the people who signed up for that too?
I don’t think that Nectome, which is the company that Altman signed up for, is cryonics. I think they’re trying to develop some kind of alternative brain preservation technique so that the brain can be digitized. But it’s a similar idea.
So he thinks his brain will be uploaded to the Cloud?
Yeah, he thinks that’ll happen within his lifetime.
So many of these Big Tech leaders seem to envisage the future as entirely digital.
I think there are really significant parallels between transhumanism, which is the backbone of the entire TESCREAL bundle, and Christianity. One of them is a general repugnance toward the body. It’s like: ‘If we’re going to create utopia, we need to transcend our biology.’ And the ultimate way to transcend biology is not merely to merge technology with biology, organism with artefact, but to upload our minds entirely to computers.
Tell us a bit about your links to the TESCREAL world. I don’t want to call you a TESCREAList but…
Oh, I would definitely say I was a TESCREAList. I was a longtermist before the word longtermism was coined. I would definitely consider myself to have been a true believer in a lot of the bizarre things we were just talking about. Although I never signed up with a cryonics company… mainly because it was too expensive! But just about everybody I knew in the community wore little dog tags [to indicate that their body needed to be frozen as quickly as possible after death in the hope they could one day be reanimated].
Didn’t you work as a research assistant for Ray Kurzweil, too?
I did, for his book The Singularity Is Nearer. The title always reminded me of Dumb and Dumberer, the sequel to Dumb and Dumber.
How did you first start to become disillusioned with these beliefs? What was your exit route?
It was the end of 2019 when I first publicly criticized longtermism. I had spent three or four months at the Centre for the Study of Existential Risk at the University of Cambridge. One might think that experience would have reinforced my faith in the TESCREAL worldview, but it was basically the opposite. I left thinking, ‘Something’s not right here.’
Could you explain to us what longtermism actually is? I know Elon Musk described Scottish philosopher Will MacAskill’s book What We Owe the Future as ‘a close match for my philosophy.’
Longtermism comes in two varieties. The moderate version says that ensuring the very long-term future of humanity goes very well, that it’s optimised, is a key priority of our time. And the radical version says it’s the key priority, right? The vision is that humanity, including our posthuman descendants, could survive in the universe for an extremely long period of time, and that if we colonise space there could be a huge number of future people. If your goal is to positively influence the greatest number of people possible, and if most people who could ever exist will exist in the far future, then you should focus on them rather than on present-day people.
Can you tell me a bit about some of the more malign ways this philosophy has been instrumentalised?
This ideology has a natural appeal to billionaires. Because it tells them exactly what they want to hear: ‘You don’t need to worry about poor people around the world because there’s something that matters way, way more.’ Which is getting us into space, developing technologies that can enable us to re-engineer humanity and so on. These are things that could bring so much more value into the world than saving the 700 million people who are in extreme poverty right now. It’s just a numbers game.
It provides a superficially plausible moral excuse for what they want to do anyway: not caring about poor people, and focusing instead on pet projects like colonizing space and merging our brains with AI, as Elon Musk is trying to do with Neuralink.
There’s a line in one of the founding documents of longtermism, written by Nick Beckstead, in which he says that, given finite resources, and assuming a longtermist perspective, saving the lives of people in rich countries should be prioritized over saving the lives of people in poor countries, because people in rich countries are better positioned to influence the far future, and so it’s simply a better use of money.
As you said, it’s just so self-reinforcing isn’t it?
It’s so convenient. I mean, white people in the Global North have always had some excuse for not helping others, right? And I think longtermism is just the latest entry on the list of excuses for why we are justified in not caring about the plight of people in poor regions of the world. Also, what longtermists mean by ‘future people’ is very different from what the average person and, frankly, most philosophers would understand by that term.
For longtermists, it’s not just about ensuring that future people have better, less miserable lives; it’s about ensuring that they exist at all. If you accept the utilitarian view that the universe becomes better the more value exists across space and time, then you have a moral obligation to create as many new people as possible. There could one day exist vast numbers of people living in computer simulations spread throughout the entire accessible universe, but right now those people don’t exist. Therefore, we have this moral imperative to colonize space as quickly as possible and build these vast computer simulations. That is what they mean by caring about future people.
A lot of these ideas also bake in a very specific, individualistic view of politics — the only way to alleviate global poverty is to become a hedge fund manager and donate half your salary.
A lot of the people in the community are neoliberals, and they have developed a philosophy that provides a kind of moral excuse for perpetuating neoliberal policies. Many of the effective altruists and longtermists are capitalists themselves. They praise billionaires who donate a fraction of their wealth as some of the greatest philanthropists in all of history. There’s a 2021 article by Hilary Greaves and MacAskill about AI safety in which they say something like: given how important the development of AI could be, and how vast our future could be, every $1,000 spent on AI safety is the equivalent of saving one trillion actual human lives. That’s part of the problem with their whole quantitative approach to value: you can get the numbers to support basically any conclusion.
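To see how this kind of expected-value arithmetic can be made to support almost any conclusion, consider a toy calculation with invented numbers (the figures are purely illustrative, not taken from Greaves and MacAskill’s paper). Suppose the far future could contain N = 10^18 people, and that a $1,000 donation to AI safety reduces the probability of existential catastrophe by a minuscule Δp = 10^-9. The expected number of lives saved is then

\[
\mathbb{E}[\text{lives saved}] = \Delta p \times N = 10^{-9} \times 10^{18} = 10^{9}.
\]

Because N is astronomical and Δp is far too small to ever measure, both can be chosen almost freely: set Δp to 10^-6 instead and the same $1,000 ‘saves’ a trillion lives in expectation.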
Do you still think there’s anything valuable in any of these ideologies?
I think one good thing that came out of longtermism gaining a bit of visibility back in 2022, after MacAskill published his book What We Owe the Future, is that it may have shifted the Overton window a little, towards the idea that it’s okay, and even good, to be concerned with the longer-term future of humanity. That said, what I think they get wrong is much more profound than what they get right.
In the original paper you and Dr Timnit Gebru wrote, you talk a lot about how these ideas are grounded in eugenics.
I think the role of eugenics within the TESCREAL movement is really significant. Transhumanism is classified as a type of so-called ‘liberal eugenics,’ and all of the other ideologies and corresponding movements of the TESCREAL bundle emerged out of the transhumanist movement.
Pretty much everybody in this community is an ‘IQ realist’. There are leading figures within the movement who approvingly cite Charles Murray and Richard Herrnstein, the authors of The Bell Curve. Then there’s someone like Nick Bostrom, who introduced the idea of existential risk: any event that prevents us from creating this spacefaring utopian civilization. In one paper he lists a bunch of different existential-risk scenarios, and ‘dysgenic pressures’ is one of them. He literally says that if, quote unquote, ‘less intellectually capable’ people outbreed their more intellectually capable peers, then there’s going to be this dysgenic scenario where the average IQ of humanity (a term that I absolutely hate) will drop.
He thinks that could be an existential risk because we need high intelligence to create the very technologies that are going to take us to utopia. He even says, if you look around the world you’ll find that the regions of our planet with the highest fertility rates are also marked by the lowest levels of intellectual achievement. It doesn’t take too much squinting to see what he’s talking about there.
How widespread do you think TESCREAL ideas really are within the tech world?
One of the goals of the article that I co-authored with [former Google researcher] Dr Timnit Gebru was to convince readers that this isn’t just a fringe group. Maybe relative to society as a whole it’s pretty fringe, but within Silicon Valley it’s hugely influential. It is embraced by Sam Altman, Peter Thiel, Jaan Tallinn, and Elon Musk. What matters, as far as I’m concerned, is not the absolute size of the community, but the political, economic, and social power it wields.
The TESCREAL ideologies aren’t just a worldview that will influence the world in the future; they constitute a worldview that has already profoundly impacted our lives. With the possible exception of Meta (and Mark Zuckerberg is also a transhumanist), all of the major AI companies, from OpenAI and DeepMind to Anthropic and xAI, emerged directly out of the TESCREAL movement. Their original impetus for forming was to build artificial general intelligence, which is a key step in fulfilling the TESCREAL project. The systems we have now, large language models like ChatGPT and Gemini, are seen as stepping stones along the path to AGI.
You were once a passionate TESCREAList and then lost your faith in these ideas. Is there any part of you that misses these beliefs?
When I was leaving the community, I thought that my future answer to that question would be yes, but it’s not. I thought I would find it a tragedy that I no longer have the same sense of purpose in life, the sense of meaning, the hope for the longer-term future of humanity, or the promise of immortality, the possibility that I could live forever. But I really don’t.
I feel like, having left the transhumanist and longtermist communities, I have been able to develop a new appreciation of, and connectedness with, life on this planet. I mean, Mars is a toxic wasteland. It is a horrific place. There is no planet B. If you go to Mars, step outside, and open your mouth, your saliva will boil off your tongue. Earth is incredible.
What transhumanists are really all about is how shitty being human is. You know, ‘It sucks to have this meat sack. We’re so dumb. We can only move so fast. We can only process information so quickly. We only live so long. It’s just an awful situation. We need to transcend that through technology and then we’ll get to something that’s really worthwhile.’ I just reject that. I like the way things are. I want to preserve this. I care about this.