Highway to Hell: The Dystopian Fantasies of Tech Billionaires

In this episode, we chat with philosopher and historian Émile P. Torres about the dystopian fantasies of ecologically blind tech billionaires – transhumanists, longtermists, and effective altruists – of defying nature, transcending humanity, and colonizing the universe. Highlights of our conversation include:

  • how transhumanism is built on the idea of creating God-like AI to reengineer humanity to achieve immortality, sustain capitalist growth, and colonize space;

  • how effective altruism’s utilitarian approach to philanthropy is not only blind to systemic change, social inequalities, and moral integrity, but also perpetuates neoliberal ideology that further contributes to inequality and exploitation, e.g. Sam Bankman-Fried’s ‘earn to give’ fraud;

  • the intersection of capitalism, longtermism, and the proliferation of global catastrophic risks in the age of AI, and the ethical implications of unchecked technological progress and environmental destruction;

  • the stark differences between the indigenous long-term-thinking approach and the impoverished, hubristic longtermist philosophy put forward by William MacAskill and Nick Bostrom, which prioritizes the existence of trillions of future disembodied people living in computer simulations over the suffering of present-day people and nonhumans;

  • their dangerous ‘depopulation panic’ rhetoric, rooted in an underestimation of environmental destruction and in anthropocentrism, and how some people, like Malcolm and Simone Collins, are trying to repopulate humanity with their own supposedly superior genetic material.

EPISODE TRANSCRIPT:

  • Émile P. Torres 0:00

    Transhumanism was an ideology developed in the early 20th century explicitly as a replacement for traditional religion. Christianity started to decline. It was right around that time that you get a bunch of ideologies that offered basically the same promises that Christianity did, but are secular in nature. Transhumanism basically offers the possibility of heaven on Earth by using science and technology to reengineer humanity. One way of understanding transhumanism is not just as a utopian possibility, not just as a solution to the problems that we now confront as a result of technology, but also as a means for capitalism to continue its growthist march. We're pushing up against all these limits of resources. Well, there's another resource that hasn't really been tapped, and that's the human organism. So if you want to keep the engines of economic growth and productivity roaring, one way is to reengineer humanity.

    Alan Ware 0:58

    In this episode of the Overpopulation Podcast, we'll be talking with philosopher and historian Dr. Émile P. Torres about some of the more twisted visions of ecologically blind tech billionaires and their dreams of defying nature, transcending humanity, and colonizing the universe.

    Nandita Bajaj 1:24

    Welcome to the Overpopulation Podcast where we tirelessly make ecological overshoot and overpopulation common knowledge. That's the first step in right-sizing the scale of our human footprint so that it is in balance with life on Earth, enabling all species to thrive. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.

    Alan Ware 1:47

    I'm Alan Ware, co-host of the podcast and researcher with Population Balance, the first and only nonprofit organization globally that draws the connections between pronatalism, human supremacy, and ecological overshoot, and offers solutions to address their combined impacts on people, planet, and animals. And now on to today's guest. Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity. They have published on a wide range of topics, including machine superintelligence, emerging technologies, and religious eschatology, as well as the history and ethics of human extinction. Émile's work has appeared in academic journals such as Futures and Bioethics, and in popular media outlets such as the Washington Post, Big Think, Current Affairs, and many others. Their most recent book is Human Extinction: A History of the Science and Ethics of Annihilation.

    Nandita Bajaj 2:48

    Well, Émile, we have been following your work with great interest for a while and are so impressed with the breadth and depth of both your knowledge and your lived experience in the fields of human existential risk, longtermism, effective altruism, and so many other important philosophical issues of our time. We have listened to hours of your interviews and lectures and are excited that we get to chat with you in real time today. Thank you so much for taking the time to join us.

    Émile P. Torres 3:19

    Thanks so much for having me. It's wonderful to be here.

    Nandita Bajaj 3:22

    Amazing. And though your philosophical study is so wide-ranging, given the limited time that we have with you, we'd like to focus today on your analysis of the effective altruism and longtermism movements. Effective altruism, as you've noted, comes with a large degree of utilitarianism, an ethical theory that posits that the best action is the one that maximizes utility, often simplified as the action that produces the greatest wellbeing for the greatest number of people. And that sounds reasonable in theory. But as we'll discuss today, over the past several years, you've expressed deep concerns about effective altruists' approach to doing good in the world. So let's start with a brief description of effective altruism and look at some of its history. What would you describe as the main beliefs of the effective altruism, or EA, community, and what are the origins of this approach?

    Émile P. Torres 4:25

    Yeah, great question. So effective altruism - the key idea is that we should use the tools of science and reason to figure out the best ways to do the most good in the world. That sounds quite nice and compelling at first, until you sort of look at the details. And you know, some of the metrics that they use to figure out the most effective interventions to increase the good, increase value in the world, turn out to be somewhat ableist. Some of those have been abandoned over time. But another example of an effective altruism idea that looks appealing to at least some people at first glance, but in practice has had some rather negative consequences, is their idea of earning to give. This ties into a critique of EA: that they are insufficiently attentive to the possibility of systemic change. The idea is very individualistic. You as an individual are embedded in this system, our system being capitalism, neoliberalism, and so on. Then the question is, within this system, how can you personally do the most good?

    And earning to give is an answer to that question. It says: imagine a scenario where you could go and work for an environmental nonprofit. Maybe you get paid $50,000 a year or something, and you have some kind of modest impact on the ability of that nonprofit to push through climate mitigation legislation, and so on. Alternatively, you could go get a job at a petrochemical company, or some company on Wall Street, and maybe make a million dollars a year. And then if you take that money and donate it to that environmental nonprofit, let's just say they can hire 10 people. So rather than you going to work for them to add one extra employee, you give them all this extra money that you're making on Wall Street, and they hire 10 extra people. And so ultimately, you've done more good. One of the criticisms of this concerns working for petrochemical companies or companies on Wall Street - which the co-founder of EA, William MacAskill, himself describes as, quote unquote, immoral organizations.

    One of the problems with working for immoral organizations is that you compromise your moral integrity. But this is where the utilitarian element of EA comes into play: within utilitarianism, moral integrity only matters as a means. It only matters instrumentally, because for utilitarians the only things that are important, ethically speaking, are the consequences. So there's nothing intrinsically bad about working for an immoral organization. There's nothing intrinsically bad about murder or lying or fraud, right - it all depends on the consequences. One of the great success stories of the earn to give idea was Sam Bankman-Fried. He sat down in 2012 with Will MacAskill, and MacAskill told him about EA and about this idea of earning to give. Bankman-Fried had either just graduated or was about to graduate from MIT, and he was, I believe, thinking of working for an animal rights organization. And MacAskill convinced him to go work on Wall Street. So he went and did that with a bunch of other EAs like Matthew Wage - and in fact Bankman-Fried's brother, Gabriel, also worked on Wall Street for the same company, Jane Street Capital. And after several years at Jane Street Capital, Bankman-Fried decided to, as one journalist put it, get filthy rich for charity's sake by going into crypto. And obviously, he was very successful, at least for a little while, on paper.

    So yeah, it's sort of funny to think about Bankman-Fried's biography, because if he hadn't met Will MacAskill, hadn't been introduced to the ideas of EA, he probably never would have gone into crypto. And he would have been just an unusual, kind of nerdy, maybe interesting guy working at some nonprofit, rather than an individual in federal prison who's responsible for maybe the biggest case of fraud in US history. So the earn to give idea is, I think, an example of how trying to maximize the good in this kind of utilitarian manner can lead to some really bad consequences, ironically enough. So that's sort of EA. The EA movement really goes back to around 2009. That's when the first EA organization, Giving What We Can, was founded, co-founded by Toby Ord and Will MacAskill, both at the University of Oxford.

    Nandita Bajaj 8:55

    Well, thank you. I became aware of the effective altruism movement after reading a book by Peter Singer about 10 years ago, The Most Good You Can Do. And, as an animal rights advocate, I was trying to figure out how to be the most effective person I can be for the animals. And as I was reading the book, there were a lot of things that just did not sit right with me. At the time, I didn't know the difference between consequentialist and deontological philosophical worldviews. I just thought, how could someone be asking you to compromise your own moral integrity and work in a field that we clearly have evidence is creating all sorts of exploitation of people and nonhuman beings, to make a lot of money so that you can then give that money to whatever charity? And some of the things that you brought up in terms of the blind spots of the movement are that they get to define what good means and they get to define what effective means, and they've taken it upon themselves to decide how to combine those two to have the most positive impact. And to your point, it's so individualistic. It kind of buys into that same neoliberal model - that you are the captain of your own life. Not only that, but you can single-handedly have this incredible impact by giving through philanthropy without ever stopping to challenge any of the power hierarchies, any of the systems currently in place that have allowed them to become that rich, right?

    Émile P. Torres 10:37

    Yeah, absolutely. So, consistent with that, some EAs have described people like Bill Gates as the greatest philanthropists in human history. And they seem to be completely unaware that the empires of a lot of these billionaires, tech billionaires, are built on exploitation. And I think in general, a lot of EAs - maybe also dovetailing with their consequentialist, utilitarian tendencies - tend to focus entirely on the outcomes of certain interventions without asking questions about the causes of the situation in the first place. Maybe it's precisely the system that enabled Bill Gates to make billions and billions of dollars that is responsible, maybe in large part, for the plight of people around the world who are struggling. Toby Ord published a book in 2020 discussing a central idea within EA and especially longtermism, namely existential risk. These are threats that could, if they were to be actualized, erase what Toby Ord himself refers to as our vast and glorious future in the universe as we spread into space, become digital, and ultimately maximize value. And the reason I mention this is that throughout the something like 300 pages of the book, as I recall, there isn't a single mention of capitalism. I mean, some would argue, quite compellingly, that capitalism is an underlying, sort of fundamental driver of the climate crisis, which according to a recent study threatens to kill a billion people this century.

    A billion people will die, according to this study, from direct effects of climate change. And capitalism is also very much at the heart of the current race to build AGI, artificial general intelligence. That's the explicit goal of companies like OpenAI and DeepMind, Anthropic, and xAI, recently founded by Elon Musk. AGI is supposed to be a system that is at least at the same level of intellectual capability as human beings. So I think there are two main phenomena fueling this race to build AGI. One is the sort of utopian ideologies - longtermism would be a part of that group - and the other is capitalism. Microsoft, Google, Amazon, and so on are investing billions of dollars in these companies in hopes of making a massive profit. And Sam Altman, the CEO of OpenAI, himself said during an interview with, I believe, the CEO of Airbnb, that the likely outcome of AGI will be human annihilation. But, he adds, at least there will be some really cool AI companies in the meantime. And so all of this is to say that capitalism is also driving this race to build what the CEOs of these AI companies themselves describe as an extremely dangerous technology. Capitalism is behind climate change, and so on. Capitalism has a big part to play in global poverty. And yet EAs have said virtually nothing about capitalism and its relation to this proliferation of global catastrophic risks that are completely unprecedented this century.

    Nandita Bajaj 13:57

    And we've been touching on longtermism, you know, a little bit here and there, but it'd be great to go deeper into it. When you first hear the word longtermism, it kind of brings to mind a certain ethical value that we should be placing on the lives of future generations - kind of like the seventh generation principle based on the ancient Haudenosaunee philosophy that the decisions we make today should result in a sustainable world for seven generations into the future, right? And I would say that, sadly, we don't even seem to be doing that at all within our current paradigm. But this longtermist view that's kind of emerged out of the transhumanist view is quite different and, as you've alluded to, quite perverse. You've described some of the basic beliefs of longtermists who are perpetuating this kind of ideology, and you've named some of the prominent thinkers, MacAskill being one of them. But let's start with the basics of longtermism a little bit: what are the tenets of longtermism? And who are, other than MacAskill, some of the thinkers in the movement?

    Émile P. Torres 15:12

    Yeah, so I think it might be useful to start off by distinguishing between long term thinking and longtermism. I am a huge fan of long term thinking, and believe that we need a whole lot more of it in the world, especially given the fact that, you know, climate change will affect Earth for another roughly 10,000 years. That's a longer period of time than civilization has existed so far. Longtermism goes way beyond long term thinking. Part of it is a claim not just about how big the future could be, but about how big the future should be. And I think this is where longtermism diverges from a lot of people's intuitions about what it means to care about the long term future, and also diverges from this sort of seventh generation idea. So here's the claim. Take a possible future life that would be better than miserable - so it would contain, like, a net positive amount of value, to simplify just a tiny little bit, but not much. If it would contain a net positive amount of value, then if it could exist, it should exist, right? So 'could exist' implies 'should exist' on the condition that the life will have a net positive amount of value. And I think this idea is very counterintuitive.

    And there are two reasons that longtermists would give for this. One is that there's this very strange and highly controversial idea within philosophy, which is that there are kind of two basic ways to benefit someone. There are ordinary benefits, which is just what would come to mind - like holding the door for somebody, or giving somebody who is unhoused, you know, $100, or something like that. That's a benefit in an ordinary sense. But then there are these existential benefits. So another way to benefit someone is that, if they will have a, you know, half decent life, bringing them into existence in the first place benefits them. So then you could imagine yourself in a situation where you have two choices. You could benefit someone by giving them - this person who already exists - $100. Or you could benefit somebody by not giving them $100, but instead having a child, or doing something that would bring someone into the world. And maybe the second option might be better in some circumstance. That's pretty counterintuitive, I think, to most people. But that's part of the claim. And the other claim really has to do with utilitarianism, because our sole moral obligation, according to utilitarianism, is to maximize the total amount of value that exists in the universe as a whole. There are two ways to maximize the total amount of value. One is to say: okay, within the population of individuals who currently exist, I'm going to increase their wellbeing, make them happier, whatever that requires - giving them money, or better living circumstances, and so on. Another option is just to create new people - insofar as those people are going to be, quote unquote, happy, meaning they don't have miserable lives, that's a second way. And utilitarianism says you should do both.

    And consequently, this is why longtermists are so obsessed with calculating how many future people there could be. Because if there could be an enormous number of future people compared to present-day people, then maybe the best way to maximize the total amount of good is to kind of not worry about present-day people, and focus on how your actions today are going to ensure the realization of these future people. And these ideas that I'm referencing were ideas that a philosopher named Nick Bostrom had articulated in the early 2000s, drawing from modern cosmology - basically, that Earth will remain habitable for a really long period of time. We have about another billion years. Life has been around for 3.8 billion years, we have another billion years to go, and our species has been around for 300,000 years. So that's a huge future. And the first person to calculate how many people could exist in the future, I think, was Carl Sagan, back in 1983. He said that if we survive for another 10 million years, there could be 500 trillion people. That's just an enormous number of future people. By comparison, there have been about 117 billion humans who have existed so far. So a much, much larger number of future people than past or present people.

    But then Bostrom asked: what if we spread beyond Earth? Then there could be a much, much greater number than 500 trillion. And so he calculated that if we spread into space and become digital beings living in computer simulations - computer simulations could house a much larger population than terraformed exoplanets, other planets out there that we make like Earth - then within the universe as a whole, there could be 10 to the 58th people. That's a one followed by 58 zeros. Again, much, much larger than 117 billion, much larger than 500 trillion. So all of that is to say: if you combine those claims about how big the future could be with the effective altruists' imperative to do the most good, you get the following line of reasoning. If your goal is to positively influence the greatest number of people, and if by far most people who could exist will exist in the far future, then maybe what you should be doing as a good rational altruist is focusing on the very far future, not on the present. And it's by virtue of how huge the future could be, if we colonize space, become digital beings, and so on, that all contemporary problems that do not threaten the realization of this huge future just fade into basically nothingness. So this is the idea of longtermism, which emerged out of the EA movement. And there's a great tension between this cause area within EA of longtermism and the initial cause area of alleviating global poverty.
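    To make the scale of the numbers driving this reasoning concrete, here is a minimal sketch of the orders of magnitude involved, using only the figures cited in the conversation (roughly 117 billion past humans, Sagan's 500 trillion, Bostrom's 10^58, and the 1.3 billion people in multidimensional poverty mentioned just below); the variable names are illustrative, not anything from the episode:

        # Orders of magnitude behind the longtermist "numbers game",
        # using only the figures cited in this conversation.
        past_humans    = 117e9   # ~117 billion humans who have ever lived
        sagan_future   = 500e12  # Carl Sagan (1983): 500 trillion over 10 million years
        bostrom_future = 1e58    # Nick Bostrom: digital people across the universe
        in_poverty     = 1.3e9   # ~1.3 billion people in multidimensional poverty today

        print(f"Sagan's future vs. all past humans: {sagan_future / past_humans:,.0f}x")
        print(f"Bostrom's future vs. Sagan's:       {bostrom_future / sagan_future:.0e}x")
        # Under this arithmetic, everyone in poverty today is a vanishing
        # fraction of the hypothetical future population:
        print(f"Today's poor as a share of Bostrom's future: {in_poverty / bostrom_future:.0e}")

    On this arithmetic, present-day suffering is outweighed by sheer head-count, which is exactly the move the seventh-generation framing discussed next refuses to make.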

    Because alleviating global poverty - like, yes, that's going to help; I think there are an estimated 1.3 billion people in multidimensional poverty. So yes, that is a huge number in absolute terms. But in relative terms, that is just a tiny fraction of the total number of people who could exist if you take this grand cosmic view, across not just space but time, into the future. And that is where I think it's just very counterintuitive. And like I mentioned before, it diverges from these other notions, like the seventh generation, which generally sort of presupposes the existence of future people while also acknowledging that we can't really anticipate what the far future is going to look like, or what people in the far future are going to want. One thing that's really nice about the seventh generation idea is that it renews every generation, right? Every generation is thinking about seven generations ahead, and you have this sort of chain, this link that extends into the future - whereas the longtermists have this vision about what ought to be millions or billions or trillions of years from now. So there's a big tension between those two views. And that's the history of EA and how longtermism emerged out of it.

    Alan Ware 21:59

    Yeah, I think that techno-optimist view assumes that history and the future are fairly linear and will play out in a fairly orderly, rational process. Meanwhile, in their ignorance of the social inequalities, the potential revolutions brewing below them, and the ecological damage potentially leading to collapse, they become blind to the possibility of non-linearity - of the collapse of the whole system that is feeding them through earning to give, where they go to Wall Street or Silicon Valley and learn a lot about mergers and acquisitions and algorithms, and think they know more than the NGOs themselves. But they really just have the power. They have the money power. And that gives them the right to set the table, and to be blind to these kinds of churning collapses underneath their linear view - collapses that very much threaten a return to a more cyclical view of human history, one that isn't onward and upward. You know, artificial intelligence itself could be hit by, say, a Great Depression; a stock market collapse could really suck a lot of the capital out of tech. The Magnificent Seven on the stock market right now could evaporate overnight. So there is an assumption that money will just keep flowing, cultural power will just keep flowing. And no doubt AI is doing a lot; it is quite powerful. But underneath them is a biophysical substrate of energy and materials. Reality is not all information, as a lot of them seem to think - as if we could live entirely in the world of ideas and concepts - and they don't appreciate that, meanwhile, the biophysical substrate that supports all of that is eroding and in danger of collapse. It just feels like a real form of blindness - ecological, material, energy blindness - and hubris. An arrogance that you can maintain for quite a while, because you are the money power, you are the cultural power. But at some point, a lot of that can be pulled from under you - though I suppose any power is often blind to its own weaknesses.

    Émile P. Torres 24:08

    I think that's right. The linear view of history is very prominent. I would maybe describe it as a component of a kind of colonialist mindset, which I think is really influential within the general EA community, although not normally recognized explicitly as such - but definitely this linear view of history. You know, we started in this, quote unquote, primitive state of hunting and gathering, and then we advanced to an agricultural state, and then we advanced further through industrialization, and so on. And one of the valuable contributions that the co-authored book The Dawn of Everything makes is calling this linear view into question - saying, actually, there were some peoples that experimented with agriculture and then decided agriculture is actually much worse, a much worse life. What the EAs and longtermists do is take this very one-dimensional - I would describe it as impoverished - way of thinking about the history of human development and extend it into the future. And so the next obvious step, again consistent with the colonial mindset, is spreading beyond Earth and colonizing space. And there are all these vast resources out there that longtermists like Nick Bostrom complain are being wasted every day - you know, stars out there burning up their reserves of hydrogen. All that energy is being, quote unquote, wasted. It's not going to fuel valuable enterprises like creating or sustaining huge, literally planet-sized computers running virtual reality worlds full of trillions and trillions of people.

    Alan Ware 25:42

    That's interesting, because that fits one of the theorists of ecological overshoot, William Catton, a sociologist who talks about takeover - the colonial process of mainly Europeans taking over land, taking over people, taking over materials - and then drawdown: drawing down the minerals, the stuff under the crust, in our case fossil fuels. But on Mars, it would be the colonial takeover of planets and then drawing them down, sucking the energy and materials out of them. That's an interesting analogy.

    Émile P. Torres 26:12

    So I would say that EAs and longtermists in general have pretty habitually underestimated the significance of environmental destruction. You know, they don't really see it as an existential risk, right? Like, maybe it's going to contribute to existential risk, and that's why we should be concerned about it. And I think part of it is that their view, in practice, is pretty anthropocentric. They would say their ethics - because, you know, utilitarianism - is not anthropocentric; it's what they would call sentiocentric. So it centers any kind of sentient entity, right? The thing is that humans are, as it were, more sentient than other beings, and so we end up mattering more than other creatures. And fitting together with this in a certain way, you find Will MacAskill writing in his 2022 book, What We Owe the Future, that our destruction of the environment might actually be net positive. The reason is that because we care about sentience, we care about the experience of suffering, not just in humans but in other creatures - and there's a lot of suffering among wild animals. This is debatable, but it's a premise of the claim. The fewer wild animals there are, the less wild animal suffering. So consequently, by going out and obliterating ecosystems, razing forests, polluting the oceans, and so on - you know, all that sounds bad, but it just means there are fewer wild animals to suffer. And obviously, the limit of that is: well, in the future, maybe the best thing to do is just get rid of the biosphere altogether. Maybe we all become digital beings. Maybe there are no animals. Maybe we simulate animals in the virtual reality that we all live in. But that's one issue that's kind of shocking and relevant to what you were saying, I think.

    Alan Ware 27:59

    Well, you had a great example in one of your essays, where you quote MacAskill, from What We Owe the Future, mentioning that even with 15 degrees of warming, the heat would not pass lethal limits for crops in most regions. And then you consulted several agricultural and climate experts, who said that's just pure nonsense. 15 degrees Celsius - is that really Celsius?

    Émile P. Torres 28:21

    Celsius. Just to be clear, Celsius. So that leapt off the page and smacked me in the face when I read it, because that is absolutely wild. I mean, again, the most recent studies say that with just the warming that's expected this century - two degrees, maybe 3 or 4 - a billion people will die. Two billion will be displaced. Maybe those two numbers will overlap, but a lot of the people displaced will die. I mean, that is just unfathomable. And I spoke to a climate expert yesterday who's a friend of mine, and he was pointing out that even if there were an event - maybe a climate-related event, or maybe an event resulting from some of these AI systems - that killed 2 billion people in a relatively short time, would civilization survive? Just really reflect on the mayhem, the shock, the psycho-cultural trauma of something like that. It would just be extraordinary. And again, that's a small number compared to the 1 billion who are expected to perish by the end of the century.

    Nandita Bajaj 28:21

    You've talked about how there is a kind of religious strength to the belief systems of both effective altruism and longtermism - that they emerged basically out of the hole that religion left, and that they are trying to fill that same hole with a godlike, messianic, techno-utopian, grandiose vision of the world, in which they are the gods that will bring us all there. Well, not us, but the future trillions of disembodied people living on servers, as you said, where there is no life, no suffering - you know, a very anti-life, antinatalist view of the universe. I just wanted to make that comment about how many parallels one can draw between this kind of completely ecologically blind, human supremacist ideology and many of the religions that also seem to share that ecological blindness and human supremacist worldview.

    Émile P. Torres 30:36

    So yeah, you're exactly right. I think that one way of understanding the development of EA and longtermism is with respect to the transhumanist movement, which predates both. EA's roots, in a sense, go back to two different phenomena. One is transhumanism. Toby Ord, for example, was co-authoring papers with Bostrom back in 2006, several years before EA was created, that were defending the transhumanist worldview. The other strand of EA goes back to Peter Singer's work. And I think a lot of Singer's work is really quite problematic. He's a utilitarian who takes his utilitarian ethics very seriously and follows the premises of utilitarianism to their logical ends, which leads him to say things like: if there are babies, infants, with certain kinds of disabilities, then the morally right thing to do might be to kill them. So anyway, those are the two strands. And with respect to transhumanism, the reason I mention it is that this was an ideology developed in the early 20th century explicitly as a replacement for traditional religion.

    You know, Christianity was absolutely dominant within the West from roughly the fourth or fifth century of the Common Era - a few centuries after Jesus lived and died, when it became really widespread in the Roman Empire - and it dominated all the way up until the 19th century. That's when it started to decline. It was that century that Karl Marx denigrated religion as the opium of the masses, and Friedrich Nietzsche famously said God is dead. Why is he dead? Because we have killed him - you know, through science. And branches of theology started looking into the historical reliability of the Bible, and so on, and the results weren't very good. So anyway, Christianity declined. And it is extraordinary if you read the literature at the time - this is around the time that the term agnostic was coined, as well. There were a lot of agnostics and atheists who were just reeling from the loss of religion. All of the meaning and purpose and the eschatological hope - hope for the future of humanity - that the traditional religion of Christianity provided, all that was gone. So the atheists were wondering: what is the point of any of this? How do we live our lives? What is the purpose? Darwinism says that we emerged just by happenstance, through contingent evolution. And then physics tells us that the universe will become uninhabitable in the future. So why? Why are we here?

    And anyway, I mention that because it was right around that time that you get a bunch of ideologies that offered basically the same promises that Christianity did, but are secular in nature. This is when Marxism emerged, with its promise of a kind of utopian future once we get this worldwide communist state. And, yeah, the parallels between Marxism and Christian eschatology - their narratives of the end of the world - are very striking. Transhumanism also emerged then. And the first book that really developed the transhumanist idea, although it didn't use that term - I believe the term was evolutionary humanism - was by Julian Huxley. And it was revealingly titled Religion Without Revelation. So it was like: this transhumanism, here's a new religion; you don't need faith - actually, what we need is just to rely on science and technology. And so, you know, transhumanism basically offers the possibility of heaven on Earth by using science and technology to reengineer humanity. So there's the promise of immortality, the abolition of, or at least the significant reduction of, suffering in the world. And if you fast forward to the rise of modern transhumanism, as opposed to this early transhumanism from the early 20th century - modern transhumanism really emerged in the 1980s and 1990s.

    Right around that time, you also get the promise of resurrection. The first people to articulate the modern transhumanist ideology were involved in cryonics. So if you don't 'live long enough to live forever,' as Ray Kurzweil puts it - if you don't live long enough to get access to these technologies - then you can always just have your body cryogenically frozen, so it can be resurrected at some point, you know, maybe 2150, or whenever we have the technologies available. Now, I think it's a very facile way of knocking an ideology to describe it as a religion, right? Lots of people do that: wokeism is a religion, conservatism is a religion, and so on and so on. But in this case, transhumanism really is very much a religion. And that really is the foundation, I think, of EA and definitely of longtermism, which basically just subsumes transhumanism within it - which is another way to say that longtermism builds on transhumanism.

    Nandita Bajaj 35:22

    Right. Well, the one thought I keep having is that it doesn't take very much to understand how perverse and just how self-aggrandizing the movement is. And yet I'm part of the animal rights movement - you know, really sophisticated-minded people - and many have really caught on to the EA philosophy. In fact, a lot of animal charities are based on EA principles, and MacAskill has become a real hero in a lot of these movements. You might even know that the data analysis site Our World in Data, which gets upwards of 80 million website visits each month, is very much influenced by the effective altruism movement. They get a ton of funding from EA, from the Gates Foundation, from the Musk Foundation. And yet they are seen as the go-to data analysis and interpretation site. And of course, they're getting their data from reliable sources, but it's the interpretation of the data that really reveals their biases. They have the same kind of techno-fundamentalist worldview - that things are getting better and better and better, and we just need more technology to help us get out of these catastrophic predictions. How is it that so many people have fallen into this trap? What is so attractive about it? Is it the godlike qualities of the promise of a utopian future that make so many young, intelligent, sophisticated, educated people buy into this?

    Émile P. Torres 37:06

    That may be part of it. I mean, on the one hand, this kind of techno-solutionist approach has an obvious appeal to people in the tech world. You know, it tells them that they're the answer to all the world's problems. But also, tying this back to the idea of transhumanism and longtermism as a sort of religious worldview: another parallel that I didn't mention is that many people in the community see AGI, or superintelligence - a version of AGI that is not just human-equivalent in its capabilities, but far superior to us - as, to borrow their term, God-like AI. So one way of reconstructing this is: if God doesn't exist, why not just create him? Alternatively, why not become him? And the reason I mention that is that once you have God - and if this God loves not his children but his parents, us - again, using 'he' because most of the people in this community are men, overwhelmingly white and overwhelmingly male, which definitely ties into one of my other critiques of this community - but you know, if we create this God that loves us, then it will do whatever we tell it to do.

    And also a crucial idea here is that on this longtermist, EA, transhumanist view, pretty much every problem in the world is understood as an engineering problem. And so, you know, what makes a good engineer? Well, they would say intelligence: a more intelligent person is going to be better at engineering than a less intelligent person. I think this notion of intelligence is deeply problematic, but I'm just going to go with their premises here. So consequently, if you have a superintelligence, then it will be capable of super engineering feats. And since everything is an engineering problem - including climate change, wars, social upheaval, religious conflicts around the world, and so on - once you have superintelligence, you can just task it with solving all these problems. And it will do that. Maybe it'll take five or 10 seconds to, like, think and go, okay, I have a solution to climate change - as if we don't know how to solve climate change now. It's really just a political will problem and a coordination problem. So yeah, I think this techno-solutionism is appealing to tech billionaires for this reason. I think there's also this widespread notion - which ties into the linear view of human development, and is sort of techno-deterministic, to use a technical term - that there are no brakes on the technological enterprise. There's only one way forward. So if you believe that - okay, technology got us into this mess, or at least enabled us to get into this mess, but more technology is going to help us get out of it - that fits very nicely with the view that we can't stop technology anyway. As opposed to the opposite view, my view, which is that building more and more technology is probably just going to make things worse: it complicates our threat environment even more, making it even more unmanageable and intractable, and so on, and this is basically a dead-end road.

    Alan Ware 40:09

    Yeah, we've had many guests on talking about deeper ecological knowledge, whether indigenous or western science, and about having more of an ignorance-based worldview as we learn the complexity of how plants talk to each other through fungi, of animal behavior - the things we're still learning, because we've really only had western ecological science for maybe 100 years. Meanwhile, this technological, rationalistic, technocratic engineering mind just blunderbusses forward, creating a trail of problems in its path. And they still have this enormous arrogance and hubris that isn't grounded in any ecological humility - there's very little humility. And it reminded me of Marc Andreessen's Techno-Optimist Manifesto, where he's saying things like: techno-optimists believe societies, like sharks, grow or die; we believe everything good is downstream of growth. It's all about growth and technology and moving forward and not dying - anything else is stagnation. And so it's very linear in that way: progress, right, onward and upward. There's no life cycle.

    Émile P. Torres 41:20

    Yeah, absolutely. There is sort of a great irony in the longtermist literature - something I was very aware of when I was part of the community and contributing to this literature - which is that there is a widespread acknowledgement that technology is overwhelmingly responsible for our unprecedented predicament these days with respect to the risks we're facing, global-scale risks. But the idea is that more technology - so, this much technology is bad, it gets us into all sorts of problems, but a bit more technology - is going to save us. In fact, this also ties into the idea of everything being an engineering problem. One way that a lot of longtermists couch or frame our current situation is that we have all of this technological power without sufficient technological wisdom. You know, we're just not wise enough as a species to wield this technology in a safe way - in a way that would realize all the great, wonderful utopian benefits while neutralizing all the risks. Okay, so if that's a problem, and if all problems are engineering problems, then that's an engineering problem too. So how do we solve this mismatch between our wisdom and our technology? Well, we just reengineer humanity.

    So you know, one of the main solutions put forward to this problem is that we should try to develop human enhancement technologies, particularly cognitive enhancement technologies, so that we can radically enhance our capacity for wisdom, thereby putting us in a position to use these technologies responsibly. Eliezer Yudkowsky would be one of many, many examples: he's very worried about AGI being developed in the near future and causing our extinction. He's literally said in podcast interviews that if we were a sane society, as he puts it, we would ban all work on AGI right now, take a lot of those resources, and reallocate them towards developing these technologies to create a new, cognitively and intellectually superior post-human species. This also ties into capitalism. Again, you have these utopian ideologies, and then this capitalist ideology. And one way of understanding transhumanism is not just as a utopian possibility, not just as a solution to the problems that we now confront as a result of technology, but also as a means for capitalism to continue its growthist march. Because, right, we're pushing up against all these limits of resources. Well, there's another resource that hasn't really been tapped, and that's the human organism. So if you want to keep the engines of economic growth and productivity roaring, one way is to ensure that the individuals who are part of that engine, part of that system, are more productive. And so by reengineering humanity, maybe you could create organisms that are even better little capitalist agents. They're even more productive. They're even more efficient. They're better at optimizing tasks, and so on.

    And Will MacAskill references this in his book, What We Owe the Future. Global depopulation - something he's very worried about, same with Elon Musk and the others - so, global population decline, is a big concern for him because it could result in economic stagnation. Well, what can we do then? We could just reengineer humanity: we create designer babies and ensure that they are all, quote unquote, as intelligent as Einstein. Or, he says, if that doesn't work, then we just create new AGIs, artificial general intelligences at the human level, to replace workers in the economic system. So I hope all of this ties in. Transhumanism - reengineering humanity - is one way to, let's say, solve the problem of global catastrophic risk, which technology itself is overwhelmingly responsible for. But transhumanism can also be understood as just an extension of techno-capitalism. This is an argument that a friend of mine, Alexander Thomas, makes in a forthcoming book, and it's really compelling. So you know, we're just another - Heidegger would say - standing reserve: reserves to be exploited in order to keep this juggernaut of capitalism moving forward.

    Nandita Bajaj 45:24

    So a couple of things have, you know, emerged for me. People are now really starting to buy into this depopulation panic as a result of fertility rates declining because of greater gender equality - you know, women finally having the autonomy to decide for themselves whether or not they want children, and if so, how many. And we see reproductive rights and environmental rights as completely intertwined, right? When reproductive rights are under attack through patriarchal oppression, it's the same patriarchal oppression that is extended toward the Earth, in the form of, as you've said, neocolonialism and capitalism and extractivism of the planet. For the longest time, we were just concerned about the patriarchal, conservative control of reproduction in order to create bigger empires, bigger states, bigger capital, more conservative tribes. And we thought that with feminism and with liberal values we could push against these, and that's what we needed to do. And now we've got this new branch of people emerging, who apparently call themselves secularists or liberals, right, who are now also feeding into the same depopulation panic that a lot of nationalists and conservatives are feeding into. So in both cases, whether it's the far right or the longtermists, they're both looking at women as reproductive vessels through which these ghastly futures will be realized. And that's really scary. You know, they're both extremely pronatalist groups.

    Émile P. Torres 47:10

    It is really interesting to see this sort of alignment of these different groups. It's fascinating, and also a bit alarming. With respect to the more politically right-wing groups that are anxious about depopulation, a lot of the worry is about the great replacement. And, you know, I think a lot of longtermists would say that part of the reason population decline is so unfortunate is that there is a positive correlation between population size and innovation. And since climate change, environmental destruction, is an engineering problem, ultimately, if you have less innovation, then you're less likely to stumble upon, or to create, a solution to these problems of environmental degradation. So consequently, the claim is that if you want to solve the climate crisis, et cetera, you should actually want there to be a larger population, because that means more innovation, and more innovation means a greater chance of actually solving it.

    Nandita Bajaj 48:07

    The tenets of free market fundamentalism.

    Émile P. Torres 48:11

    Absolutely. There's a lot of that sort of fundamentalism, I think, in this crowd. Libertarianism is very influential, and has been since the origins of modern transhumanism in the 1980s and 1990s - the first organized transhumanist movement being the extropian movement, and they were explicitly libertarian. Ayn Rand's Atlas Shrugged was on their official reading list, and so on. And I think that libertarian tradition extends through EA to longtermism today - which is not to say that every longtermist is a libertarian, but many are. So ultimately, even if there are different reasons for being concerned about population decline, there is this kind of fascinating alignment of different groups, all ultimately ending up at the same general conclusion: that we should be really worried about population decline. And at exactly the moment when you have climate scientists, many of whom are starting to scream that we have too many people, and that we need to rein in these growthist tendencies that are at the root of our socio-economic system.

    Alan Ware 49:13

    Yeah, they're concerned with technological innovation. But just in terms of being an effective altruist, you'd think they would care enough about the education of the billions who could be educated so much better in this world, to further innovation. There are so many children getting virtually no education that if you just poured your effective altruism into those children - truly poured it into them in a significant way - you wouldn't need to be playing a numbers game with technical innovation. So it's interesting, the disconnect they have there, just counting up humans. As EAs or longtermists they presumably care about the utility of all the humans on the planet - and yet here are these humans existing now who could be so much improved through education, and who could further the technological progress they're so worried about.

    Émile P. Torres 50:06

    On the EA account - and I think this largely goes for transhumanism and longtermism as well - you could really see ethics as a branch of economics, right? It really is just about crunching the numbers, and so on. And the fact that there's such a focus on number crunching means there's a very strong quantitative bias. So when you consider interventions like improving education in certain impoverished parts of the world, it becomes really difficult to put numbers on the outcomes of those interventions. Consequently, those interventions get de-prioritized, or they just don't fit into the quantitative metric framework that EA embraces. And so ultimately, you might, as an EA, conclude that maybe focusing on education is not the best way to go, because there's a lot of uncertainty.

    Alan Ware 50:56

    Well, it's interesting with the Gates malaria bed nets example, right, where the unintended consequence was a lot of those people using the nets to overfish - was it Lake Chad? They were using the nets for other things, right? Because they thought, wow, this is a great net; now I don't have to sew a net together to catch fish. You maximize these certain metrics, and then you're blind to all these unintended consequences.

    Émile P. Torres 51:20

    Yeah, I think it's a problem with a simplistic, kind of one-size-fits-all approach. And this has definitely been a criticism of western, global-north-based philanthropy: that people come in with this notion that, oh, if this program worked in region A of some part of the world, then it's going to work in regions B through Z. So this is one argument for why this whole top-down approach to philanthropy is not good, and why maybe the best thing you could do is fund grassroots organizations that have ground-level understanding of the particulars of their predicament, and of why individuals are trapped in a cycle of poverty. And just another thing to add, because it's shocking but relevant here: this numbers game is precisely what leads one of the founders of longtermism, Nick Beckstead, to argue in his PhD dissertation - which is widely regarded as one of the founding documents of the longtermist ideology - that if you are in a situation where you have to choose between saving the life of somebody in a rich country and saving the life of somebody in a poor country, you should definitely save the life of the person in the rich country. Because from a longtermist perspective, lives lived in rich countries are much more valuable: rich countries are more economically productive, so ultimately they're just better positioned to influence the very far future than lives in a poor country. So to tie this into what you were saying: if you're an EA longtermist, and consequently what you care about most is that things go well in the very long run - which means not just ensuring that people in the future have a decent life, but that they exist in the first place, because again, 'could exist' implies 'should exist,' assuming they have a half decent life - then taking your finite resources and spending them on programs that would improve the education of people in impoverished regions is maybe just not the best way to go about things. Again, a life saved in a rich country should be prioritized over a life saved in a poor country. I hope that makes sense.

    Nandita Bajaj 53:29

    Yes, and of course, a reflection of an extreme version of this is the Collinses, who have taken it into their own hands to repopulate the world with their own genetic material, given that it is the most superior and rich and intelligent and all that, you know. They've talked about how, as long as each of their descendants can commit to having at least eight or 10 children for just 11 generations, their bloodline will eventually outnumber the current human population. So, again, I'm so shocked that so many news outlets have given them a platform to share their ideas, and Elon Musk has retweeted what a great thing they are doing in terms of service to humanity. It feeds very much into what we were talking about earlier - the conservative depopulation panic, the great replacement fear of being overtaken by the wrong kind, the wrong color, the wrong religion of people. And basically, taking matters into their own hands - instead of educating people, raising people out of poverty, and really just proliferating more rights-based, justice-based values - they're saying, well, no, we know how to create the right kind of people, and we're going to do it.
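    As a quick sanity check on the arithmetic of that claim - a minimal sketch, taking 'eight children per descendant for 11 generations' at face value from the description above:

        # Compounding check: 8 children per descendant, for 11 generations.
        descendants = 8 ** 11
        print(f"{descendants:,}")  # 8,589,934,592 - roughly today's world population of ~8 billion

    (With 10 children per descendant, it would be 10**11, or 100 billion.)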

    Émile P. Torres 54:57

    Talk about hubris - more of me is what's going to save the world. So hubristic. You mentioned Elon Musk. He, as I understand it, has supported the Collinses. And of course, he has a bunch of children himself, and he sees himself as playing a role in this. But the Collinses also - you know, their organization, pronatalist.org, has received, if I remember correctly, hundreds of thousands of dollars from leading figures within the longtermist, EA, transhumanist community, like Jaan Tallinn. Jaan Tallinn is a co-founder of Skype, a multi-millionaire - I believe he has just under a billion dollars, so almost a billionaire - and he's been a major funder in the AGI space. So it's a very bizarre moment: we need more people, we need more of a certain type of people. And you know, the Collinses have this very strange, extreme - what scientists would call hereditarian - view that a lot of our traits as individuals are based in our genes. I mean, there's a trivial sense in which that's true: a single gene might have all sorts of consequences. But the claim is stronger than that. It's that particular traits are determined by our genes. And the Collinses - I remember reading an interview with them where they were saying that they believe that even ideology is genetically based, at least to a significant degree. And so, you know, if you see that there are a bunch of Nazis in the world who are reproducing, well, that's bad, because their children have a good chance of being Nazis for genetic reasons - not just cultural reasons, but genetic reasons. That is just a really extreme view. I lived in Germany for three years recently. There are lots of people my age whose grandparents were Nazis. They are not Nazis. This is not genetically determined. But even traits like intelligence - is that genetically determined? I don't know. Intelligence is such a complex phenomenon. There are so many different types of intelligence, and so many genes interacting in complex ways, that it's just deeply problematic to say: oh, well, I scored high on this very impoverished, narrow, one-dimensional test called the IQ test - which supposedly measures intelligence in some meaningful way, which I strongly disagree with - so I should have more children, because they're going to be high-IQ, and the higher the average IQ of our society, the better society's going to become. It's all just deeply problematic, from the scientific perspective up to this higher-level, sociological point of view. All of it's just really bizarre. We're in such a bizarre moment.

    Alan Ware 57:33

    After Sam Bankman-Fried, the humiliation of that whole episode, and the exit of him and his billions, where do you see the EA movement and longtermism now?

    Émile P. Torres 57:44

    Great question. Beginning in the summer of 2022, when MacAskill published his book on longtermism, What We Owe the Future, there was a big push to evangelize for this ideology among the public. And for several months it was, in general, very successful. MacAskill, the effective altruist movement, and this spinoff ideology of longtermism were getting coverage on the front page of Time magazine and in the New York Times; The Guardian was writing articles on it. MacAskill himself made an appearance on The Daily Show, and Trevor Noah seemed to be very enamored with MacAskill's longtermist views. So I think this outreach effort was very successful. And then, at exactly the worst moment for this whole project of trying to convert members of the general public to the longtermist ideology, FTX collapsed. And I think that undid all of the progress that MacAskill and other EA longtermists had made, perhaps irreversibly; it tarnished the image of EA and longtermism. All of that is to say that I think EA is now tightly linked with Bankman-Fried and arguably the worst case of fraud in US history. The same goes for longtermism. Consequently, I think the general public has lost interest, for the most part, in EA. And I know there were a lot of people who initially found EA and longtermism very compelling and appealing who now want nothing to do with it. That being said, EA still remains a really powerful, influential force within the political arena and especially, I think, within the tech world. There are a lot of powerful individuals, like AI researchers at companies such as Anthropic, which is mostly just an EA organization, who still very much subscribe to the EA worldview. And before FTX collapsed, the EA community had $46.1 billion in committed funding, just for its own projects. When FTX collapsed, a good chunk of that money was lost. But there still remained billions and billions of dollars. This was made explicit by leaders of the EA community to community members: there is still just lots and lots of money for research projects and so on.

    Alan Ware 1:00:21

    So, with our little bit of time left, we wanted to ask you about the essay that you wrote on Medium last year, titled 'Why I Won't Have Children.' Could you share with us some of the reasoning behind that decision?

    Émile P. Torres 1:00:33

    The way that I couched that article is: when I was young, I had this assumption that the world was, in general, a good place. And then somebody told me about starvation around the world, and about certain diseases, like brain tumors, that young children, basically my age at the time, would get, and that was an occasion for me to reflect on the possibility that maybe the world is a bit more menacing than I had thought. So the article can basically be seen as a progress report on my thinking about this issue. For decades, I maintained this belief that, in general, the world is a good place. But by the time I got to my early 40s (I'm 41 now), I had seen a bunch of close friends of mine die young, some as a result of suicide, and had taken a broader view of the situation of humanity in the 21st century, recognizing the extraordinary, unprecedented perils of climate change, the sixth mass extinction event, and the risks associated with the development of emerging technologies. All of this is to say that when I pivot from thinking about experiences I've had, myself or vicariously through friends, to looking at the broader situation of humanity this century, it all just looks really bleak. And I don't want that to be the case. But it's a conclusion that I just can't resist. And so that is why I have decided not to have children. But one thing I emphasize in the article is that I am not at all, in any way, judgy about people having kids, and I would be mortified if anyone construed the claim otherwise. People have all sorts of different reasons, and one thing I mention in that article is that there really can be reasons for particular people that aren't general, universal reasons for everyone. So basically the article was just presenting the reasons why I've decided not to have children: because the world is just an obstacle course of hazards and tragedies and booby traps, as it were. But certainly I recognize that there is a diversity of opinions and feelings on this matter, and that questions about having children get at some of the most intimate personal tendencies and beliefs that people might have. So it's not a general prescription for everyone: don't have children. It's more just an explanation: here's why I'm not going to. I've spent the last 10-plus years studying the future of humanity, and it all looks pretty bad.

    Alan Ware 1:03:18

    And yeah, you do mention that, despite the sadness and unfixable brokenness of the world, you still remain pretty involved and optimistic. You say, 'I don't believe in hope, but I do believe in duty.' How do you interpret that?

    Émile P. Torres 1:03:33

    So I became acquainted, to my own amazement, with this fascinating and extraordinary author named Amital Bosch. That is his line, which is, like: okay, maybe I don't have much hope, but I do have a sense of duty. And that articulated my view in a really wonderful, succinct, and powerful way, which is that there's a sense in which the bleaker one's assessment of the world, the greater one's motivation to be a decent, kind, compassionate person. All the more reason that you should pursue those ends, to be good. One response to a bleak assessment of the world is nihilism and defeatism, just sort of slumping into a corner and saying, well, everything's effed, so what's the point of trying? For me, it's the exact opposite. The worse things are, the more one should be out in the streets protesting, being an activist for good climate mitigation policies, and also, in the sphere of one's personal life, just being a decent human being and really reflecting on how one's actions affect those around them, for good or bad. So that's an important part of the article, in addition to me just saying, oh yeah, everything's actually pretty awful.

    Nandita Bajaj 1:04:52

    And we obviously picked that up because it is probably a close reflection of how we see our role as an organization: both telling the truth about our ecological predicament and the social and ecological catastrophes that are descending upon us, and at the same time maintaining a certain sense of duty, or moral obligation, to still do our very best to stay active in the movement. And to give people, like you say, not hope so much as a sense of purpose, to minimize the suffering that is all around us and that will increasingly occur with these catastrophes. So we were happy to see that, and we were just delighted to have had the chance to talk to you today. It was a really fascinating conversation. I could feel my mind expanding, trying to keep up with both the enormity of the situation and the depth of knowledge that you bring to the movement and to the critique of these movements. Thank you so much for taking the time to join us today. It was really nice to speak with you.

    Alan Ware 1:06:01

    Yes, thank you.

    Émile P. Torres 1:06:03

    Thanks so much for having me. It was wonderful talking to you.

    Alan Ware 1:06:07

    That's all for this edition of the Overpopulation Podcast. Visit populationbalance.org to learn more. To share feedback or guest recommendations, write to us using the contact form on our site or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you, and we hope that you will consider a one-time or recurring donation.

    Nandita Bajaj 1:06:36

    Until next time, I'm Nandita Bajaj thanking you for your interest in our work and for your efforts in helping us all shrink toward abundance.
