Research

Here’s a summary of my research, including links to published papers. Feel free to email me for drafts!

  • ACT CONTRACTUALISM

    I’m interested in contractualist ethical theory because I find plausible the idea that moral permissibility is about what’s justifiable to others. This is why, as part of my doctoral research, I’m developing and defending a view called Act Contractualism. In a nutshell, it says that an act is permissible iff it cannot be reasonably objected to by those who are affected by it.

    In my paper “On the Possibility of Act Contractualism” (published in AJP), I argue that Act Contractualism is distinct from what I call Rule Contractualism (which most contemporary contractualist views are versions of), and that it better fulfils this contractualist ideal of justifiability to others.

    Of course, the question then is whether Act Contractualism is plausible. To answer it, I’m developing the view further by offering a theory of reasons for rejection. In particular, an important worry about act-based views is that they’re ill-equipped to deal with cases in which it matters what would happen if everyone behaved a certain way (or if one were expected to perform a given action repeatedly). In a paper in progress, I argue that Act Contractualism can address such cases. This is because certain features of Scanlonian Contractualism allow us to accommodate these important moral considerations, and these features are compatible with act-based versions of Contractualism.

    Next, I want to think about whether important aspects of well-known (rule) contractualist views (such as the requirement that reasons for rejection must be general) are compatible with Act Contractualism, or whether they should be rejected by act contractualists. I also want to test Act Contractualism against objections raised against other views, or against Contractualism in general.

  • PARTIAL AGGREGATION

    I have a long-standing interest in the clash of intuitions that comes from considering how, on the one hand, doing more good for more people seems better than doing good for fewer people, while, on the other hand, individuals seem to have claims which no amount of aggregate good should be able to outweigh. That is, while we want to say that it’s better to save ten lives than to save one, we also want to say that no amount of curing headaches could possibly justify torturing an individual. The problem is that reconciling these intuitions is surprisingly difficult.

    As a contractualist, I’m sympathetic to Partial Aggregation, a research agenda that tries to account for these conflicting intuitions in a systematic, plausible and appealing way. My third project therefore consists in thinking about various problems with partially aggregative views.

    My friend Milan Mossé (UC Berkeley) and I wrote a paper on “How to Count Sore Throats” (published in Analysis). As the title suggests, it’s about Sore Throat cases – that is, cases which encourage us to ask whether, when faced with a choice between rescuing two groups, we should let a minor harm (like a sore throat) break the tie between two much more severe harms (like death). We offer an explanation of the judgement that grounds these cases, and we show that this explanation has implications in cases which seem unrelated to Sore Throat cases.

    We also have another paper on the go, in which we’re trying to integrate our findings about Sore Throat cases into a broader theory of Partial Aggregation.

  • MORAL THEORY FOR AI

    Another one of my projects involves exploring the ramifications of the traditional debate between act-based and rule-based theorising for AI ethics. In a paper in progress, I focus on Act Consequentialism and Rule Consequentialism to show that the arguments usually made for or against these views function in unexpected ways when it comes to AI. Not only does this call for the development of a distinct field of Moral Theory for AI, but it also suggests that a key factor to consider in ethical decisions involving AI is how humans react to AI. This in turn suggests that more empirical research on the topic may be needed.