Catastrophe Ethics: How to Choose Well in a World of Tough Choices
In this episode with bioethicist and moral philosopher Dr. Travis N. Rieder, we discuss his latest book Catastrophe Ethics, in which he explores how individuals can make morally decent choices in a world of confusing and often terrifying problems. We explore the morally exhausting and puzzling nature of modern life in which individual actions can often seem insignificant in the face of massive and complex systems. Rieder offers suggestions on how to overcome this sense of ‘moral dumbfounding’ so that we can better align our actions with our values towards ethical living. Among the small and large individual actions that we discuss, Rieder places a special focus on the ethics of procreation — what he calls monumental ethics — and the degree of moral deliberation that is needed to arrive at the decision to have a biological child. We also discuss the dangers of utilitarian ethics, with a specific focus on Effective Altruism.
Travis Rieder 0:00
By and large, if you make a new person who can then go on and make new people, but who's also gonna have a lifetime with their own actions and their own choices, that is the most impactful thing you can do along a whole bunch of dimensions. And part of being a parent is sort of placing your kid above everybody else. There's a great benefit to that, because that's a special part of the parenting relationship. But it's not cost free, right? By virtue of becoming a parent, I'm spending more resources on this one human that I could be spreading out to others, right? So investing a huge amount of financial resources, and then, not to mention, the time and all of your energy, etcetera. So all of that stuff together is why I take this to be sort of monumental ethics. It's not affecting tiny little things that happen all the time. Instead, you're not going to make this decision very often - maybe only once in your life, maybe only a few times in your life - and it's going to change everything. And it's not just everything kind of pragmatically; it's going to change everything morally too, about what you value, how you act, and how you think you should act.
Alan Ware 0:57
That was bioethicist and moral philosopher, Dr. Travis Rieder. In this episode of the Overpopulation Podcast we'll be talking about his latest book, Catastrophe Ethics: How to Choose Well in a World of Tough Choices, in which he offers us tools on how to make more ethical choices in a world where the size and complexity of the challenges we face can make our moral efforts feel meaningless.
Nandita Bajaj 1:30
Welcome to the Overpopulation Podcast, where we tirelessly make ecological overshoot and overpopulation common knowledge. That's the first step in right-sizing the scale of our human footprint so that it is in balance with life on Earth, enabling all species to thrive. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.
Alan Ware 1:53
I'm Alan Ware, co-host of the podcast and researcher with Population Balance, the first and only nonprofit organization globally that draws the connections between pronatalism, human supremacy, and ecological overshoot and offers solutions to address their combined impacts on the planet, people, and animals. And now on to today's guest. Travis N. Rieder, PhD, is an associate research professor at the Johns Hopkins Berman Institute of Bioethics, where he directs the Master of Bioethics degree program. He also has secondary appointments in the departments of philosophy and health policy and management, as well as in the Center for Public Health Advocacy. The majority of his scholarship, speaking, and writing for the public falls into one of two research programs. The first involves the ethical and policy issues raised by pain, pain medicine, drugs, addiction, and North America's drug overdose crisis. On this subject he has published in the bioethics, medical, and public health literature, as well as for the popular press, including a TED talk and his first book, In Pain. The second research program involves the overarching theme of catastrophe. In particular, Travis is concerned with how to engage in ethical reasoning about our own individual lives in a time dominated by massive structural threats that are too big and too complex for any one of us to meaningfully address on our own. This has led to publications on climate change, pandemics, food ethics, and overpopulation, and is the subject of his just-released second book, Catastrophe Ethics.
Nandita Bajaj 3:30
Well, welcome to our podcast, Travis. It's great to have you here.
Travis Rieder 3:34
Thanks for having me. I really appreciate it.
Nandita Bajaj 3:37
And this is not your first time on the show. The last time you were on the podcast was with then-host Dave Gardner. And you spoke about your book Toward a Small Family Ethic: How Overpopulation and Climate Change Are Affecting the Morality of Procreation. Your thinking in that book has certainly shaped how we've thought about the moral reasoning behind the decision to have fewer children or none at all. And we're excited to have you back to discuss your latest book, Catastrophe Ethics: How to Choose Well in a World of Tough Choices - a great name for a book. In it, you talk about how we can live morally decent lives in a world of confusing and often terrifying global problems. We're excited to dig into that with you today. Welcome.
Travis Rieder 4:24
Well, thank you again. I should say that because you all actually read that book, however many years ago that was, you might be some of the only people on the planet who can actually see that the seeds of this book were planted, you know, almost 10 years ago, when I was working on this procreative ethics piece. And I was trying to figure out: hey, overpopulation is a problem, but what does that mean for me as an individual, given that it's a massive structural problem? I was struggling with it in that book, and it turns out I'm still struggling with it, but trying to make progress.
Nandita Bajaj 4:56
Yeah, something as consequential as the decision to have a child requires a degree of scrutiny. And though we can never have absolute decision-making autonomy in this regard, it's nice to have the kind of justification and reasoning that you offer. So we're really happy that you, with the stature that you have in academia, are having these really important conversations. And we certainly have a whole section of our episode dedicated to that question, which is something that you end the book with. But we'll start at the beginning of the book. You note at the outset that modern life is both morally exhausting and morally puzzling. Can you elaborate on that? We totally agree with you, but we'd love to hear your reasons.
Travis Rieder 5:46
Yeah, absolutely. I mean, it's morally exhausting because it seems like, in our current moment, everything we do seems to matter - all of these tiny actions that I take every day. You know, I talk in the book about, like, choosing what milk to put on my cereal. And on the one hand, if you take that out of context, it sounds like, dude, just get over it and pour some milk on your cereal. But I'm like, I want to know what the right thing to do is. And you know, I grew up drinking cow's milk. And it turns out cow's milk is very bad for the environment, as are all animal-source foods. And so then I switched and used almond milk, because I liked it better than soy milk. And it turns out almonds take an enormous amount of water to grow, and they tend to be grown in California, which is drought-prone. And so now there's good reason not to drink almond milk, right? So on the one hand, I want to say there's a right answer here, surely, right - which milk is the best milk to use? On the other hand, people who hear that might say, get over yourself, this is a little bit precious, right, worrying about such a tiny, tiny thing. But that sort of question replicates all over the place, because we're consumers and we're emitters, and we are participating in all of these massive structures in all different ways through the individual choices that we make. So yes, exhausting, because it's happening every day, all day. And confusing, because it's incredibly difficult to sort of identify all of the relevant moral considerations and think about how much weight you should give them.
Nandita Bajaj 7:15
Yeah, throughout the entire book you talk about how a lot of these challenges seem insurmountable. I think, like you, a lot of us who are driven or motivated by an ethic want to find some kind of reasoning to get as close as possible to doing the right thing, while understanding that in such intricate, interconnected, massive structural systems, it can feel extremely overwhelming. And that's what we've appreciated the most - you talk about grappling. Grappling is a very good thing in these times, because it's very easy to simply give up and not even try.
Travis Rieder 8:01
Yeah, I mean, you'll notice grappling is the right way to put it, because the book doesn't end with a sort of clear pronouncement of how to live your life. I mean, spoiler alert, right. But if you're thinking of reading this book because it'll just solve all your problems with a single clear direction, you will be disappointed, because that's just not the world we live in. And so a lot of the value here does seem to be in the grappling, and the language I tend to use throughout the book is trying to figure out a justifiable life, rather than the right way to live - figure out something that we can defend, that we can justify to one another. And that process of deliberating and justifying is valuable. It's valuable in its own right.
Alan Ware 8:41
And as you make clear with that almond milk example, the scale and complexity of the decisions we have to make, and of the culture we live in, is something neither our ancestors nor our culture really prepares us for. So a lot of us will just accept the conclusions, the moral reasoning, that we've been given. And you make a strong case that we all need to become better at moral reasoning despite this complexity, and not succumb to what's called moral dumbfounding, where we just stick to our moral conclusions without reasons or evidence. What do you think is creating this moral dumbfounding? And what are the effects on society as a whole?
Travis Rieder 9:24
Yeah, I mean, this is such an interesting question. This comes up early in the book, and it's sort of just stage setting, but you could probably tell how much I thought about this issue. So I talk with my students in almost all of my university classes about moral dumbfounding. And part of the reason that I do is because I think that if we think very carefully about the world in which we live and the moral situation in which we find ourselves, we realize fairly quickly that the moral tools we humans have evolved just aren't very well equipped to deal with our moral reality, right. And so humans evolved in very small numbers. You know, over the course of hundreds of thousands of years, we went from groups of tens or dozens to small hundreds. And so the moral tools that we have make a lot of sense. And by moral tools here, I just mean the sorts of common-sense concepts that a lot of us carry around. We talk about things like harm and rights and autonomy and dignity. And this language shows up in different languages in slightly different ways, but it's around a lot. And it makes the most sense if you think that morality really just applies to the number of people who are directly affected by your actions in your immediate vicinity. And what that means is that human morality didn't develop for a world of 8 billion people spread all over the planet but instantaneously connected, you know, in a hundred different ways. The world is genuinely novel. So what I did in the book is I joined a couple of sort of empirical hypotheses. And one is that general point - that the mechanisms that produce moral judgments are not immune from evolutionary pressures, right? They evolved in our brains, and so they were subject to natural processes. And so they were probably constructing moral norms and rules and that sort of thing for a world that is not the one that we live in. So that's sort of empirical hypothesis number one.
And then the language of moral dumbfounding comes from another empirical literature, which is this very fascinating world of psychology called empirical moral psychology. So some philosophers do moral psychology sort of from the armchair. But empirical moral psychologists are very often doing work in psych labs, in neuroscience labs. They're putting people into fMRI machines and asking them moral questions and seeing how their brains light up, and that sort of thing. So these empirical moral psychologists have come up with this term, moral dumbfounding, to identify a pretty common moral failure - I think that's the only way to put it - which is that we humans believe, very often, that we are reasoning to a moral judgment. And when challenged on that moral judgment, we kind of make up - we confabulate - the reasons for which, you know, we believe the thing that we have claimed, and stick to our guns even after that argument has been dismantled. Right. So this has been shown to be a pretty robust feature of human moral psychology. And it's profoundly depressing. On the one hand, it means that, by and large, people don't reason very well. We think we're reasoning to a conclusion, and what we're doing is what the moral psychologist Jonathan Haidt calls post-hoc rationalizing. We're jumping to a moral conclusion and then backfilling our answer, right. And that's really depressing. But there's one bit of silver lining in his study, which is that he says - kind of almost as an aside in the paper where he's really dissecting this data - there's one group of people who are immune, to a degree, to this moral dumbfounding, and that is trained philosophers. And he almost uses this as, like, a jab at philosophers. He's like, you know, trained philosophers like Peter Singer and Derek Parfit and Socrates - those weirdos who, like, follow reason even to the really disturbing conclusions, right.
But I thought that was a very funny thing to put as an aside, because it sounds to me like a really strong case for training in proper moral deliberation, right? So - boy, what a long answer - but if it turns out that humans are, like, reliably broken in this way, that we claim to be morally reasoning and instead we're just jumping to a conclusion and then post-hoc rationalizing, and yet there's a kind of reasoning training that we can do to get better at that particular thing - boy, that sounds like a good argument for doing the moral reasoning training, right? So that's sort of the argument for doing the moral philosophy that I do in the book: let's step back and go on this journey with me, so that we can combat moral dumbfounding.
Alan Ware 14:06
Especially in this time of greater polarization in a lot of societies, the need to belong and the social pressures that I think you talk about in the book lead people to just say, well, I go with that group and I sign on to their arguments, I sign on to their reasoning. They kind of absolve themselves of any individual moral reasoning responsibility. And that's a very dangerous place for society as a whole to get into, right, when we just offload our reasoning onto thought-stopping slogans and groupthink of all kinds.
Travis Rieder 14:43
Yeah, the state of the world around us is what you should expect if people by and large are bad at moral reasoning.
Nandita Bajaj 14:51
Yeah, I mean, you also talk about different climate trajectories that we could end up with - the green trajectory I think you said, the middle of the road, or the one where authoritarianism has won out. And in this kind of world where grappling seems really difficult to do, as Alan just pointed out, it seems a lot easier to just turn to an authoritarian figure and be told what to do instead of actually exercising that reasoning, which, yeah, like you say, can feel very exhausting and confusing. But in that world where authoritarianism wins out, we end up in a really catastrophic situation where people become more and more tribal and other-regarding acts become more and more sidelined. It becomes much more narcissistic, or nationalistic, as we certainly are seeing.
Travis Rieder 15:48
Yeah, you pulled out exactly, like, my sneaky motivation for the book, right? Which is, on the one hand, I want people to be better at moral reasoning - like, I'm a philosopher. In this one way, I'm very idealistic, right. Like, clarity in reasoning is one of the great goods that we should promote for everyone. But also, it's this very concrete method for combating one of the catastrophes that I'm talking about, which is the ravages of environmental destruction. And so, you know, to be clear, there are lots of catastrophes that raise the puzzle I'm investigating in the book, including infectious disease outbreaks and whatever. But, like, climate change is the paradigm case. And yeah, so what you're talking about are these climate models that show that just how much trouble we're in depends a lot on what humans do in the very near future - like, now, and in the next 5, 10, 20 years. And on the one hand, you hear climate policy folks who want to sell optimistic messages saying we still have a chance, right, you know, we have a chance to avoid dangerous warming. And that's getting real close to being a lie. Like, it's not strictly speaking a lie - I think the latest emissions gap report said there's a 0.1% chance that we, you know, avoid warming past 1.5 degrees Celsius. So, like, you're telling me there's a chance, right? But by and large, we're just not going to avoid dangerous warming. So these scenarios that climate modelers use help us to see just how bad things are gonna get. And yeah, the regional rivalry case, I think, is pretty terrifying because - I don't have the numbers right in front of me, but I think it's under the regional rivalry scenario that we expect this sort of move towards authoritarianism: sort of strongman leaders, retrenchment into nationalism and regionalism. What this does is it means that we fail to solve collective action problems. And climate change and environmental degradation is like the paradigm collective action problem.
And so on that scenario, I think the projection was, we should expect warming of 3.6 degrees Celsius by the year 2100. And I mean, it's unbelievably catastrophic. Like 3.6 degrees is well over double the sort of line in the sand that we should have drawn. And the kinds of harms that are happening at that point are hundreds and hundreds of millions dead, probably billions dead, massive migration and displacement from all around the world, big swaths of the planet largely uninhabitable for months of the year, like just the stuff of nightmares. And all it takes for that to become a reality is the sort of breakdown in collective action that would happen with regional and nationalist entrenchment. So yeah, you are exactly right, like part of what I want here is better dialogue, right, people being able to reason together more clearly.
Nandita Bajaj 18:37
Yeah, and you also talk about different kinds of moral reasoning traps that people can fall into - one type that attempts to tell everyone what to do, and another that refuses to tell anyone what to do. And they're both dangerous for some of the reasons you've alluded to, but what are the specific dangers of each of these moral reasoning traps?
Travis Rieder 19:02
Yeah, so part of the goal here is to clear away seductive, easy answers that don't actually solve the problem. And so if you want to do careful ethical reasoning with everyone, you need them to see the need for careful ethical reasoning. And there are two, I'd say pretty identifiable, groups of the public who don't think they need careful ethical reasoning - not because they don't care about being good, but because of these two traps. And so the first one is a sort of religious view of ethics. It's not to say that religion itself is a problem, but the tie between a religious view and ethical constraints - on that view, religion tells us what to do, right? And so trap number one is the authority of God or gods, basically. And that's a real problem, because if you think that moral authority comes from God telling you what to do, we can't actually have a productive conversation together, because you're going to claim exclusive access to the relevant information, which comes, you know, maybe through great books or something like that - but it's through this very particular source. And the other trap is sort of the opposite inclination. So divine authority, or divine command, wants to say very clearly to everyone what it is they ought to do. And the other inclination is relativism, which says there is no one thing that everyone ought to do, because what's right is just, like, your opinion, man. The cultural version of relativism is, colloquially, what's right is what we do around here. And so according to some culture there are norms or rules, and the right answer is the thing that you all do. So yeah, I argue in the book that you have to reject both of those views in order to see the need for careful reasoning together. And fortunately, I think they're both pretty easy to reject. And so I sort of dispatch them as quickly as I think is responsible.
Nandita Bajaj 21:07
Yeah, talking about the moral relativism argument - which I think is much more common within our circles, among the type of listener who would be attracted to a podcast like ours - you know, pluralism and post-modernity are kind of rampant even within population discussions or procreative ethics discussions. What it also does is create a heightened tolerance for things we should, collectively, be horrified by. And you know, as you speak about, rights are a human construct. But we have collectively arrived at a set of rights for humanity, even though they may be anthropocentric - rights that maximize human thriving and wellbeing, looking at, you know, the Universal Declaration of Human Rights. So the problem with the relativism argument is also that we sometimes turn a blind eye to practices of patriarchy, of coerced reproduction or marital rape, religious barriers to abortion and contraception, because we say, well, you know, that's part of a certain culture or community. Even within our own countries, and within, you know, our own kind of cultures, hedonistic, excessive consumption has been totally normalized - been kind of structurally allowed to be exported globally as an ideal to, you know, aspire to. So yeah, I found the reasoning behind rejecting those views to be really helpful.
Travis Rieder 22:44
Yeah, I mean, I have to do this with my students quite a lot. I teach now at Johns Hopkins; before that I taught at Georgetown. These are disproportionately liberal, cosmopolitan, very well-educated students at elite institutions. And the trajectory for that group is to become more and more relativist. And what I often tell them is: you were taught cosmopolitanism, you were taught tolerance, but you overshot those values and landed on relativism. And to see that there's a distinction between these concepts is really easy - you just have to think about it for a second. Because if relativism is true, tolerance is not a universal value. It's a value only if your culture says it's a value, right. And so if you really embrace tolerance, you actually reject relativism. If you think tolerance is a universal value, then you are not a relativist. Now, what I do think is interesting, in a lot of these hard conversations that you brought up about the practices that we tolerate, or the respect or space that we give to practices that we personally condemn, is that it's perfectly consistent to hold that what some population does is morally wrong, and that there would be something deeply problematic about my interfering with it in some way, right? Because I don't have the right to interfere with everything that is wrong in the world. And so people have a hard time sort of holding all of these thoughts in their head at the same time. And I should be clear that I do think there are some consistent relativists. But I also think that's not most people, if they're honest. And those people are a little bit disconcerting, because here's what the relativist actually believes: for a slaveholding population that endorses slavery, it is morally permissible to own other human beings. And it's not just us saying it's morally permissible for them - in the context of a slaveholding society, it is permissible to have slaves.
So here's my claim - at the bottom, at the very foundations of our view of what we're trying to do in building a moral philosophy - they're just wrong about that. It cannot be true that ethics is a thing and that slavery is permissible. If ethics is a thing that matters and is worth talking about, slavery is unethical. And as long as we can agree on that, then we've sort of let the camel's nose under the tent, right? Like, there is at least one universal moral truth, which means relativism is false. And now we just have a conversation about, sort of, how big is the space of those universal moral truths? Is it just the really radical stuff like slavery? Why would that be the case? Like, as long as there are moral facts, there are probably lots of really nuanced, detailed moral facts.
Alan Ware 25:37
And yeah, as you've talked about with individualism versus the group-based view, there's definitely been a change in culture, especially over the past 200 years, from more traditional and religious to more individualistic and secular, which allows more ground for moral reasoning, right? Like, Socrates was not really popular among the power elite of his time. So often philosophers, with their questioning, probably need to be more on that relativistic side, as opposed to a purely dogmatic, traditional religious side, to have more open ground, right, available to them.
Travis Rieder 26:15
They need to at least be open to more difficult conversations than others likely are, right? So yeah, Socrates was just a deeply, deeply weird man. Like, if what has survived about Socrates is even kind of true, then he was a big ol' weirdo. And that's part of what allowed him to do a bunch of the work that he did, right? Because he was willing to antagonize really powerful people, and have conversations that nobody else wanted to have, and think really critically, and really deeply, about stuff that's important and intimate and hard to talk about. So philosophers very often have much more patience for the sort of, like, hard conversations. To be clear, I think this can be taken too far, right? So the 'well, actually' bro is annoying for good reason, right? Like, when we say slavery is wrong, saying 'well, actually...' is just not helpful, right? That's not philosophical. That's just being obnoxious and offensive. So yeah, you can certainly take it too far. But there does need to be space for uncomfortable conversations. So in this context, with the stuff that you all do, procreative ethics is as uncomfortable as it gets - it's deeply personal, uncomfortable. Nobody sort of antecedently wants to ask whether moral restrictions are appropriate in this space. And so the willingness to have that conversation is sort of deeply philosophical: the right way to ask a hard question, and then to try to do it in a way that's fair and respectful and thoughtful and moves the conversation forward.
Alan Ware 27:57
And related to history - throughout the history of philosophy, there have been two major schools with long histories that you discuss: the consequentialists and the deontologists. Can you give us an overview of those schools of thought and some examples of the moral reasoning process?
Travis Rieder 28:15
Absolutely. I mean, in a way that is so simplistic it would make my colleagues just absolutely furious. But honestly, I think this pretty much tracks. So I said divine authority and relativism are sort of two traps of the populace. They're very seductive and easy in ways that are attractive to, like, millions of people. So this other moral philosophy piece that we're talking about now, I think, is actually still yet another trap, in sort of coming up with a seductive and easy answer to difficult moral problems. But it's a trap that's much more restricted - it's a trap for people who are trained in moral philosophy. And so some people go into ethics and study moral philosophy because they sort of want to know the answer. Like, can I figure out which moral philosophy is right, so then I know the answer to everything? And so this idea that you can just find out the one true theory, you know, the capital-T truth, so that we can then go around and just tell everyone what's right and wrong, I think is deeply seductive as well. Okay, so deontologists and consequentialists - or utilitarians, which are a version of consequentialists. These are two big camps of philosophers who, for hundreds of years now, have sort of been at each other's throats conceptually. And the best way to understand them is that consequentialism is the higher, more abstract branch, and the consequentialist just believes that what makes an action right or wrong is entirely the consequences of that act. So that's all consequentialism is. We tend to talk about utilitarianism as a form of consequentialism, because consequentialism at that level of abstraction isn't helpful - it's like, well, which consequences are the right ones? What should we do to promote them, etcetera. So utilitarianism gives us a specific version that says the right action is the one that maximizes happiness and minimizes unhappiness.
So all you do, for all of your moral life, is for any single action figure out: how do you maximize the ratio of happiness to unhappiness? And that's the thing that you do. Okay, so deontologists are the umbrella of philosophers who basically say that's not true - there is something other than the consequences of an act that matters for the rightness and wrongness of that act. And then what they think that other thing is determines sort of what flavor of deontologist they are. And so deontologists tend to talk about things like rights and dignity and autonomy. And the way that you can think about these considerations is that they block the sort of mathematical move of the consequentialist or utilitarian. So the utilitarian says, look, suppose a surgeon finds herself with a patient who happens to be an organ donor for a bunch of really sick people in the hospital. And so if this patient on her table happens to flatline, she could actually disperse his organs and save five other lives. The utilitarian should actually find this a little uncomfortable, because given all the right information - like, set up the case in the correct way, no one ever finds out, etcetera, etcetera - the utilitarian thing to do is for the surgeon to 'accidentally' have a slip of the knife, cut an artery, let the patient bleed out, and then distribute the organs. And the deontologist says the reason that doesn't work is because patients, individuals, humans, persons have rights. I have a right not to have my artery slit, and not to be treated as a means to the end of saving other people with my organs, right? So basically, consequentialists or utilitarians versus deontologists is the question of: does something other than the consequences of my action matter? And if so, what is it?
Nandita Bajaj 32:04
Right, and you talk about how with the consequentialists, because the focus is so entirely on the consequences and not the action, often there can be an allowable degree of dissonance between personal values, personal integrity, and how you act in the world versus the thing that you're doing that would seemingly maximize happiness. And I mean, the question about how you even define happiness, in terms of units, is a much larger, broader question. Because you also talk about in the book, it's not just about units of feeling good versus not feeling good. There are a lot of other moral values to consider within these consequentialist decisions. So that example is very helpful. And of course, there's been an entire offshoot of consequentialism that's really taken off over the last decade or two - the effective altruism movement - and it's something that we find terrifying. Because it's become quite popular, to the degree that it can allow you to measure certain actions in terms of how to give - you know, when it comes to donating money, how to evaluate certain charities so that you know what you're giving is maximizing the benefit to the charity. In those cases, it makes sense to me to have some kind of mathematical moral reasoning. But when it starts getting applied to human lives and animal lives and current and future beings, it starts getting really psychologically bankrupt. And so what did you find some of the main claims of effective altruists to be, and what about them do you find helpful or problematic?
Travis Rieder 33:54
Yeah, I mean, I've thought a lot about effective altruism. And you may or may not know that their newest sort of offshoot is called longtermism. And so it gets, in some ways, even weirder, because the idea is that if you really care about maximizing welfare, then what you should think about is the fact that, like, there are only 8 billion people alive right now, and in the whole history of humankind there have only been a handful of billions of people. But if we can make it through this, like, nascent human phase, there could be quadrillions or quintillions of people, you know, spread across the universe, in servers, digital people and all this. And so, like, the happiness of those people massively swamps the happiness of us right now, because we're this piddly 8 billion people now. So there's a lot of distressing math that can be done if you take this sort of longtermist idea seriously. Now, one response to your question: I'm not even as kind to effective altruism as you are. You're like, oh, when evaluating charities, maybe the math makes sense. I actually don't even think it makes sense there. So part of the reason is, I have this view - sort of, you know, like I mentioned earlier, bedrock moral commitments. You know, slavery is wrong - bedrock moral commitment. You build your theory off of that. So here's a fairly bedrock moral commitment of mine in this weird space of trying to figure out how to navigate our complicated world. If it's necessary, morally, for somebody to do something, then it has to be permissible for some of us to do that thing. And so that sounds really abstract. So let me say it more concretely. I think a world without the arts would be a devastating loss. But the arts require support. And so in our current world, right, people have to give money to support the arts. People have to invest in the arts, etcetera.
Now, the effective altruists are likely to say, I think if they're consistent at all, that the arts are one of the worst places to put your money, right? If you're running the math, your money should go to almost any other thing way before the arts. And so that seems to me deeply wrong, because it has to be the case that somebody can permissibly support the arts, if we want a world with arts. And so basically, a big part of my sort of moral philosophy sympathies can be summed up in: boy, we've got to stop pretending this is so easy, right? Like, you can just figure out one thing, maximize happiness, and then go around and apply it to everything else. Because it turns out happiness really matters. So here's where the effective altruists got it right. It's really crucial that more of us start thinking about people who are starving halfway around the world, if we could help them not starve, because starving is really, really bad. And a bunch of us who are incredibly privileged don't think about starvation, because we don't see it. So here's a very serious benefit conveyed to us by the effective altruists. Most of us can do a lot better with our money by not giving it to our very rich alma maters and the things that happen to pop up in front of our faces, and instead doing some more research and trying to promote more good. So with my own charitable giving, I'd say at least 50%, maybe more like 70%, goes to what the effective altruists would call very high impact charities. And it's not because I'm an effective altruist. It's because I think happiness, or how well someone's life goes, is a hugely important moral value. But I also do give to my alma mater. And it is not, you know, Princeton, or something with a billion-dollar endowment - I don't know how much Princeton's endowment is - but the little liberal arts college I went to. And my wife and I met there, and our careers were launched there. And we met the most important mentors of our lives there.
And we think it would be a great loss if that tiny little liberal arts college ceased to exist, which means some people need to keep supporting it, which means we can be some of those people who keep supporting it, right. Not the best use of our money, in the sense that our money could do more good elsewhere. But it also feels totally justifiable for us to sort of pay it back and pay it forward, because both of those things are worth doing, right. So I don't have a ton of patience for effective altruists. But I also don't skimp on crediting them. Right? It turns out promoting good outcomes is really important. Thanks for helping us think about that. Stop acting like it's the only thing that's important.
Alan Ware 38:18
Yeah, you make a good case in the book that taking that to the maximum, you would give away everything you have until your suffering is as bad as or worse than that of the person who's suffering most on the planet, right? That's where the logic of utilitarianism kind of goes. And you make the point that people have the right to enjoy their life - I think how you say it is that we're permitted to enjoy our own lives and to have our own life projects, and to have some sense of self and make sense of what our lives are about. And that's a good counterpoint to them.
Travis Rieder 38:53
I should say that I steal that, like, straight up from Bernard Williams. So yeah, the 20th century British philosopher Bernard Williams. He, I think, made incredibly mainstream this sense that many of us have - that one of the things that is most deeply wrong with utilitarianism is that it doesn't leave you room to be an agent. And so the language of Bernard Williams is that if you just turn into a cog in the machine of the universe, where your only job is to promote happiness, then you don't actually have any integrity. Because what it means to have integrity is to have projects you follow through on. And if you are willing to drop any project or commitment as soon as it becomes suboptimal in terms of happiness production, you don't have that commitment. And so the way I think about it is with my daughter: part of being a parent is disproportionately favoring her over other people, right? And so if the second that became suboptimal, I was willing to stop doing it, then I'm just not a parent. Like, I'm a biological parent - I contributed genetic material - but I'm not actually playing the role of dad if I don't treat her disproportionately. So Bernard Williams is sort of, like, deep in my philosophical heart; a lot of what I think comes from that.
Alan Ware 40:08
And if you take a lot of the authoritarian utopians to the maximum, where they're trying to justify their utopia by maximum happiness, it often involves a lot of individuals' projects being snuffed out, including a lot of communist authoritarians getting rid of the family unit. You will not give your children preferential treatment; it's to the state, and to our vision of maximum happiness - you just have to trust us that we're doing what we can to create maximum happiness for everybody. But meanwhile, there's a corruption that happens with that, of course.
Nandita Bajaj 40:45
And then there's also, you know, not just the not following through on projects, but also the kinds of projects people take on. Because there is that lack of congruence between personal integrity and the consequences of certain actions, you've got all of these tech bros who can justify working in really destructive industries - whether it's on the basis of human, animal, or ecological rights violations - in order to make a ton of money so that they can give all that money to charity. I mean, mind you, it's not totally clear what percentage of the money they actually end up donating. But it certainly gives them a psychological trick with which they can relieve themselves of tackling the genuine problems of today. And, in terms of longtermism, even believing that the stuff that they're doing is in good faith in order to protect the trillions and trillions of humans that will exist tomorrow, versus the billions of humans and nonhumans that are suffering today and need attention. So we're really glad that you take that view, because it very much lines up with our own work here. Even at Population Balance it's very much about, you know, maximizing well-being and happiness, but also about helping people reimagine what happiness means and what autonomy means, in the absence of all of these really destructive structures that are manufacturing a lot of those definitions for us.
Travis Rieder 42:14
Yeah, absolutely. I mean, it's really hard to have this conversation without thinking of Sam Bankman-Fried, right? Like, it's unclear to what extent we should attribute his rise and fall to effective altruism. But he was a student of effective altruists. And here's what a bunch of people like me - who say it's dangerous and bad to think only about happiness promotion - have warned forever: that exactly what happened with him is the sort of thing that might happen if you think that happiness promotion is the only thing that matters, right? You amass a huge amount of resources and wealth, and you're willing to break rules and cut corners if you think you can get away with it, because the ends justify the means, quite literally. That's the philosophy, right? So Sam Bankman-Fried went down in flames and destroyed an empire, but a whole bunch of people get away with it. And that's not really so different. He just happened to be, like, a visible example of the failure of the view.
Nandita Bajaj 43:10
Right. So also in the chapter of your book titled "Dissolving the Puzzle With New Tools" - and the way you have it is dis-solving the puzzle, dissolving - you tell the story of visiting your friends, Liz and Nate, on their Indiana farm. And you mention that some of Liz and Nate's reasons for becoming farmers and living a materially simple life are not based on a feeling of duty or obligation, but are based on what you call the softer moral concepts of integrity, solidarity, participation, character - something you were just saying, you know, deontological thinkers would value. How do their beliefs and actions exemplify some of those softer elements of moral reasoning? And how do those elements help us in dissolving the moral puzzle?
Travis Rieder 44:02
Yeah, oh man, I love getting to tell their story so much. You know, I was already well into research for the book when I decided it would be helpful to go and visit them - they're dear, dear friends of mine. And so I called them up and said, hey, can I come spend a couple days with you on the farm, follow you around, see what life for you is like, and then ask you a bunch of questions like, you know, an annoying philosopher? And, of course, they say, of course, please come stay with us. And so I go stay at Nightfall Farm, which is their little family farm that they took over from Liz's mother. And it was this incredible experience because, one, I got to see, like, viscerally - I got to feel how hard their lifestyle is. So, you know, they have this tiny regenerative farm. And I think the way I put it in the book is, you know, they work sunup to sundown. They've given their bodies and their youth to the land, and they're never going to be rich. They're never going to have time and never going to have resources to do much beyond this. But for them, it's a moral calling. Nate even at one point said, shit, is this a calling? But, you know, the idea is that they both felt pulled by the goodness of the relevant values. And so they use the language of the land a lot. And I think that's because some of their favorite authors use this language. But they want to heal the land. They want to feed people good food. They want to be part of their community in a way that's sustainable. And what was really fascinating about it - so, like I said, I had already done a lot of the work for the book when I went up there. But then once I spent a couple days with them, I was like, oh, this book is kind of actually about them, right? Like, they became real central characters in how I framed my thinking on this, because they were sort of living proof that you didn't need to have a theory, a grand theory, like effective altruism.
You didn't need to have a sort of grand theory, like, I'm a Kantian about rights and so this is what I think everyone has to do, because it's universalizable. Instead, they saw a problem. And the way they put it is, Nate is the sort of small-scale picture person, and Liz is the sort of big-scale picture person, and both sides pulled them to do this. But for Nate, it was: I want to live with integrity. And that means living my values. And my values are, like, it's a horrible thing that humans live in this extractive way - that we pull more out of the planet than we ever put in, and we're leaving for future generations something that is worse than what we had, and we're showing no clear sign that we're going to stop doing that until everything's gone. And so for him, it was very much: I want to live in a way that's not that. And so that's very private, in a way. You know, it's not about a statement. It's not about policy - although he was down in DC last year; he came and visited, but he was down there for policy, right, to talk about a farm bill. So that's not to say he doesn't get pulled in that direction too, but for him the pull is very private. And so it's towards what I ended up calling a sort of purity consideration when we think about what we ought to do. One of the things that pulls us in a direction is to think, like, I should do it because it's living out my values, right? It's about not doing harm myself, not contributing to bad myself. And then Liz, I think, feels plenty of that. But she also sort of wants their farm to be an example. And so she does a lot of off-farm volunteering, works for not-for-profit organizations, does environmental education. She has a master's degree in this sort of space. And so you can tell she's an educator, because she, like, walks around the farm and gives me mini-lectures, which is really fun.
But she has a really great line, and I'm gonna try to get it right - she says living like we do is an act of resistance. And so it's sort of like an example of how the world can be different. And it's really intended for broader consumption. And so I thought those were just two really beautiful ways of seeing the different ideas at play. And then when we sat down the first full night that we were there, I just peppered them with questions. I had my recorder on. And I said, so do you guys feel like you are required to do this? Like, must you do this? Are all of us who are not doing this - like the schlub like me living in the suburban house - like, are we wrong? And they just laughed. They're like, dude, if you can do anything but farm, you should do it. Right? Farming is hard. A lot of times, it's awful. But there's this morally good way of life that they want to experience. And so I found that really fascinating - that it was clear to them that it wasn't a universalizable rule that everybody should live in this way. And, as a matter of fact, they will tell you, that ship has sailed. Eight billion people can't live like them. Small regenerative farms will not feed 8 billion people, which is part of the problem. But they still feel the pull of a goodness, even though it's not a requirement. And that's the space I find really fascinating.
Nandita Bajaj 49:03
That's beautiful. Because it also draws on this division that people feel when they're trying to make a decision about: is it my calling versus is it actually a good thing for the world? Is it actually helping someone else? And for a lot of people, we find that it kind of works its way around in a full circle. Because people care so much about something bigger than themselves, they begin to work in the space and find it to become their calling. It doesn't need to be something that you know from a young age. And those two things can be, you know, married together. They don't need to be mutually exclusive. So I find that to be also a beautiful example of both, as you said, something that's very private, but something that's also making a political statement. And, you know, their example also helps reinforce the argument that you make throughout the book, that you're trying to defend individual action. You're trying to say, you know, yes, individual action can sometimes feel infinitesimally small, but there's still moral defensibility for it to be there. And they are a great example of it. One reason I find, also, to expand that reasoning, is the phenomenon of social contagion. We don't make individual decisions in an autonomous, absolutely rational kind of a way. Most of our individual decisions are highly mediated, highly socialized, highly influenced by our peer group, or what we see in the media, or what we're being sold, you know, through advertising. So, to me, that's really another good reason to say individual action matters, especially in cases like this. Because it's not a coincidence that when one family starts to renovate their house on the street, suddenly, you know, a year down the road, all the houses have been turned into mansions. Or families in India - India is only 2% Catholic, but Christmas and Black Friday shopping have become really popular there. Well, those aren't individual rational decisions that people are making.
That's something we've exported - the value that people should care about materialism. So I really do find that examples like this have a big impact. Instead of spreading hedonistic consumption, we should be spreading altruism of this type.
Travis Rieder 51:36
I thought about this the whole time I was writing the book. There's almost a sort of contradiction in the act of writing about these private decisions. Because Nate and Liz never intended to be the subject of a book, right? And so Nightfall Farm was never supposed to be an example for sort of wide consumption. Now, I'm in moral philosophy, and I wrote a book that's largely moral philosophy. And so chances are it will not be read by that many people - but it could, right? And so there's a possible future in which this book is read by very, very many people in lots of different countries. And now Nate and Liz's private little action, which they did not intend for broader public consumption, actually is part of an example of a story that has sort of broader public impact as people read it and think about their own decisions. That is a sort of cool example of the way in which all of the reasons on which we base our actions get tangled up together. And so, yes, Nate largely feels the pull of a sort of integrity - like, he should live according to his values, and to do otherwise would be sort of indefensible bullshit, right. But by virtue of doing that, if he builds a life with Liz that is so attractive that it then gets consumed by thousands of other people, and they change their lives as a result, this tiny little commitment to integrity ends up being a sort of small-p policy lever, right - like, an ability to influence lots of people at the same time, which I think is very cool.
Alan Ware 53:16
Yeah. And you were on this podcast in 2016, with then-host Dave Gardner, where you discussed with Dave your book Toward a Small Family Ethic: How Overpopulation and Climate Change are Affecting the Morality of Procreation. And that is sort of one of these, well, small decisions that can be socially contagious if people ethically consider their decisions, and the reasons for those decisions are then disseminated, in one way or another, through family networks or more formally. So in your new book, you talk about everyday mundane ethics - such as choosing what to eat or what transportation to use, all the things we've been talking about, and how difficult those are - versus the decision to have a child, which you categorize in the book as an example of monumental ethics. So why do you put that decision in that special category of monumental ethics?
Travis Rieder 54:09
Yeah, so I was trying to capture this distinction between the sort of everyday decisions that it would be really helpful to just have a rule for, because a big part of the value of a rule is set it and forget it, right? So there are lots of reasons to eat in various ways. I already mentioned at the beginning the sort of environmental impact of food, right? And so there are lots of sort of private, purity-focused reasons not to be part of environmental degradation - to eat low on the food chain, to be vegan, to be vegetarian, to eat more local so that food doesn't have to ship as far, right. So there are lots of reasons to have heuristics related to your diet. Set it, forget it. You're going to eat several times a day. Just follow your rule. But then there are these decisions that you make once or a few times in your life, and they affect the very basic structure of your life. They change everything about it. And they have all sorts of tendrils that you can't even track out, right? So - my daughter's 10. So 11-plus years ago, with my partner, we decided to make a new human. And that's about as big as you can get, right? So, on the one hand, life would never be the same again because we'd be parents. More specifically, not only would it change the way we see things, but it changed the way we value things. Like, you know, I already mentioned, part of being a parent is sort of placing your kid above everybody else. There's a great benefit to that, because that's a special part of the parenting relationship. But it's not cost-free, right? By virtue of becoming a parent, I'm spending more resources on this one human than I could be spreading out to others, right. So that one decision to create a child runs through my whole life.
So I quote in the book - I forget the most recent statistics, but I think for the average American, raising a child, not including college, so raising a child up to the age of 18, is like $200,000, American, and then if you're in the higher income bracket it's going to be more like $300,000, right. So investing a huge amount of financial resources, and then not to mention, like, the time and all of your energy, etcetera. So it changes everything about your life. But it's also the most impactful thing that you can do along a whole bunch of dimensions. It's so relevant to the stuff that we've been talking about, and relevant to the book. Environmentally, it's the most impactful thing that you can do - for most of us, anyway. You know, politicians and billionaires and people who can make decisions that change stuff, they're different; their actions are more like policies. But us normal folks: by and large, if you make a new person who can then go on to make new people, but also is going to have a lifetime with their own actions and their own choices, that is the most impactful thing you can do. Because not only am I making choices that are more environmentally unfriendly now - because every time I fly home overseas, I have to buy another ticket for her, right? So, like, there are these sort of one-off things. But she's also, like, going to be her own agent. And she's going to live an entire lifetime where she has an impact on the earth, right. So all of that stuff together is why I take this to be sort of monumental ethics. It's not affecting tiny little things that happen all the time, where you just want to have a rule to set it and forget it. Instead, you're not going to make this decision very often - maybe only once in your life, maybe only a few times in your life. And it's going to change everything. It's not just everything kind of pragmatically. It's gonna change everything morally too - about what you value, how you act, and how you think you should act.
Nandita Bajaj 57:32
Right. And to your point, you know, I think using your example is quite brilliant, because you, with your partner, put a lot of thought into the decision. And we've appreciated that there's someone like you within academia who is able to offer the moral scrutiny that's needed - especially for people within our societies who live with a relative degree of privilege and reproductive autonomy, something that's not available to so many millions, you know, even billions of people in the world - to really take that decision very seriously, to really consider the ethics of it.
Travis Rieder 58:13
I'll just say one thing, sort of in summary, about the chapter on procreative ethics and how this work has evolved from the 2016 book through my scholarship since. You mentioned, very rightly, that the fact that it was a choice for us is sort of an important precursor, right? We got to deliberate about whether to have a child. And that's a really crucial step not to miss. So sort of moral requirement number one for people and for societies is that if you want to have good, careful moral deliberation about some of the most important stuff in the world, like whether to have kids, everyone has to have the right to have that conversation. So there has to be genuine control over reproduction, right? So then, if everybody gets to do the deliberation, the point is that the deliberation isn't defensive. It isn't sort of, who are you to tell me what to do, you know, with my family - because I'm nobody, and I'm not telling you at all. The goal instead is for each of us to have the tools to deliberate for ourselves. I use the language in the book of looking for an ethic of conscientiousness, right? I'm not aiming at purity, and I'm trying to avoid nihilism. But the goal is an ethic of conscientiousness. And that requires the time and the space and the cultural openness to actually have this conversation with one another and for ourselves, and the willingness and ability to deliberate carefully. Then, yeah, hopefully, that would go a long way towards having a much more sustainable population where we treat everybody well. What a dream that would be, right? So that is, very, very nobly, the goal.
Nandita Bajaj 59:55
Yeah. And we're grateful to you for providing the tools for people who are able to engage in this kind of moral deliberation from an ethics and rights-based perspective - especially around, as you said, procreative decision making, talking about overpopulation, talking about the different inequitable ways in which we are impacting one another and the planet. So we're really very happy that you could talk with us. All the best with your book. And thanks again for joining us.
Alan Ware 1:00:26
Yeah, thanks for being with us all these years on this journey.
Travis Rieder 1:00:30
Thanks for having me. It's been an absolute pleasure. Appreciate it, both of you.
Alan Ware 1:00:34
That's all for this edition of the Overpopulation Podcast. Visit populationbalance.org to learn more. To share feedback or guest recommendations, write to us using the contact form on our site, or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you, and we hope that you'll consider a one-time or recurring donation.
Nandita Bajaj 1:01:03
Until next time, I'm Nandita Bajaj thanking you for your interest in our work and for your efforts in helping us all shrink toward abundance.