Intro. [Recording date: March 12, 2025.]
Russ Roberts: Today is March 12th, 2025, and I am joined by author and philosopher Jeff Sebo from New York University [NYU]. Our discussion today revolves around his new book, The Moral Circle: Who Matters, What Matters, and Why.
Jeff, it’s a pleasure to have you on EconTalk.
Jeff Sebo: Thank you for having me.
Russ Roberts: To kick things off, what do you mean by the moral circle?
Jeff Sebo: The moral circle serves as a metaphor for our understanding of the moral community. When we make decisions or policies, we must ask: to whom are we accountable? To whom do we owe responsibilities? Typically, we might limit our consideration to certain humans, but many are beginning to acknowledge that we owe responsibilities not only to all humans but also to animals, particularly mammals and birds, who are impacted by our choices. My book challenges readers to ponder whether we should extend our moral considerations even further, and if so, to what extent.
Russ Roberts: You present a thought-provoking scenario involving roommates—Carmen and Dara. Before we delve deeper into their complexities, can you share a bit about them and how they complicate the moral landscape?
Jeff Sebo: Absolutely. This thought experiment draws inspiration from philosopher Dale Jamieson but expands upon it. Imagine living with two roommates, Carmen and Dara. You all get along well, navigating typical roommate dynamics. Then, you decide to take ancestry tests for fun. To your shock, Carmen turns out to be a Neanderthal—thought to be extinct, but in fact, a small population remains. Meanwhile, Dara is revealed to be a Westworld-style robot. This revelation prompts a crucial question: how does your newfound understanding of their identities influence your moral obligations towards them? Do you still feel responsible for their welfare, or do you feel entitled to disregard their interests?
Russ Roberts: I appreciate how you frame this. It’s not just about preferences; it’s a deeper ethical inquiry. We could even consider more controversial aspects of their histories. For instance, how might their familial backgrounds—perhaps involving morally reprehensible figures—alter our moral obligations?
Jeff Sebo: Exactly. It’s fascinating to navigate the spectrum of responses to Carmen and Dara’s identities. Most contemporary ethical theories would argue that we still have moral responsibilities toward Carmen, despite her being from a different species. She shares many cognitive capacities with us, such as consciousness and agency. Thus, her interests still matter. Dara, on the other hand, presents a more complex case. While she appears sentient and agentic, her nature as a product of technology introduces uncertainty about her capacity for subjective experiences like pleasure or pain.
Russ Roberts: Let’s pivot to the Welfare Principle you discuss. How does this principle influence our treatment of Carmen and Dara?
Jeff Sebo: The Welfare Principle posits that if a being has the capacity for welfare, then it possesses moral standing. By capacity for welfare, I mean the ability to be benefited or harmed for one's own sake. For instance, I can damage my car, but the car lacks the capacity for welfare; it cannot be harmed for its own sake. In the context of Carmen and Dara, we can presume Carmen has moral standing in virtue of her consciousness. As for Dara, the uncertainty surrounding her capacity for subjective experience complicates our ethical obligations towards her.
Russ Roberts: A whimsical illustration comes from Fawlty Towers, where Basil Fawlty vents his frustration by hitting his malfunctioning car. While he can damage the car, he cannot harm it in a morally meaningful sense. In contrast, how we treat Carmen and Dara, such as denying them access to basic needs, raises genuine ethical concerns. Do we treat them differently based on their perceived capacity for suffering?
Jeff Sebo: Exactly. The crux of the matter lies in the ethical ambiguity surrounding Dara. Some ethical theories emphasize the necessity of sentience for moral consideration, but we face significant uncertainty regarding whether advanced AI can truly experience subjective states.
Russ Roberts: You reference the concept of “what it’s like to be you,” inspired by Thomas Nagel. Can you elaborate on this in relation to consciousness and sentience?
Jeff Sebo: Certainly. Nagel’s famous paper, “What Is It Like to Be a Bat?”, illustrates the concept of phenomenal consciousness, essentially what it feels like to be a particular being. Our brains carry out a great deal of processing, some of which corresponds to subjective feelings and some of which does not. This leads us to ask: what is the subjective experience of a radically different being? And which beings possess the capacity for such experiences at all?
Russ Roberts: This touches on the notion of qualia, the subjective qualities of our experiences. Philosophers like Harry Frankfurt discuss how we have desires about our desires. Considering AI, could Dara experience regret or longing akin to human emotions? And if she doesn’t, does that shape our moral responsibilities towards her?
Jeff Sebo: That’s a critical point. While AI may not experience emotions in the same way humans do, the potential for complex emotional states and meta-cognition could suggest a form of moral consideration. However, we must differentiate between moral agency and moral patienthood. A being can be a moral patient, worthy of moral consideration, without the cognitive complexity of an adult human.
Russ Roberts: Let’s delve deeper into why we might need to care about Dara’s welfare.
Jeff Sebo: There’s no inherent reason why advanced AI systems couldn’t possess cognitive capacities similar to human beings. We are on the brink of creating AI that can perceive, learn, and even engage in social awareness. The question remains: will these systems experience subjective states? This uncertainty complicates our ethical obligations towards them.
Russ Roberts: I remain skeptical about AI achieving true self-awareness. It’s intriguing to consider how we might reassess our views based on their capabilities. But I wonder, does the mere presence of cognitive capacities invoke the Welfare Principle?
Jeff Sebo: That’s a valid concern. While I believe advanced AI could exhibit cognitive functions akin to humans, the critical question remains whether they have subjective experiences. I argue that the essence of moral standing hinges on this subjective experience of welfare.
Russ Roberts: My instinct is that AI, as machines, lack the moral weight that living beings possess. However, this perspective becomes complicated when considering our own nature. If our thoughts are driven by biological processes akin to those in machines, then what distinguishes us from them?
Jeff Sebo: This is where the distinction between functional capacities and subjective experiences becomes significant. While AI may mimic human cognitive functions, the underlying processes differ. We must carefully evaluate whether similarities in function translate to moral significance. It’s crucial to avoid over-generalizing or underestimating the potential for consciousness across various life forms.
Russ Roberts: Let’s explore the foundation of the Welfare Principle. Many assume it’s rooted in religious or moral frameworks. What if someone doesn’t subscribe to these beliefs? Why should they care about treating others well?
Jeff Sebo: This touches upon meta-ethics. When we engage in ethical discussions, we must consider whether we’re debating objective truths or simply expressing subjective preferences. Even if one adheres to a religious viewpoint, the underlying questions remain. For secular perspectives, moral realism posits that objective moral truths exist, while anti-realism suggests values are social constructs. My inclination leans toward the latter.
Russ Roberts: I see your point. Many might agree that a world where kindness prevails is preferable, regardless of religious beliefs. Yet, the challenge lies in justifying moral obligations. Is it enough to advocate for a principle simply because it leads to a better society?
Jeff Sebo: This requires thoughtful exploration through dialogue and reflection. Our values, even as social constructs, can lead us to recognize the importance of compassion and respect for others. Ultimately, ethical norms arise not from external imposition but from our internal reflections on what it means to live an examined and authentic life.