Of Trolleys and Tesla: How Should We Theorize about the Ethics of AI? (Jeff Behrends, Harvard)

October 22, 2021 at 4pm

Abstract: Is the ethics of Artificial Intelligence (AI) a novel domain, in need of new approaches to ethical theorizing? Opinions diverge. According to some, the ethics of AI is no more special than the ethics of any other aspect of our lives, and our best-developed theories apply straightforwardly. Others view the present and forthcoming capabilities of AI as so radically unlike those of other tools we build and use that we should expect existing normative theories to fall short of illuminating their moral dimensions, and that we should be prepared to build new frameworks in their place. I will illustrate how I approach this methodological question with an extended look at the application of trolley cases to autonomous vehicle accident scenarios, which reveals that both enthusiasts and detractors of trolley cases have misunderstood their significance for moral theorizing where machine learning is concerned. My diagnosis of their errors suggests that extant normative tools may very well be directly relevant to theorizing about the ethics of machine learning, but that ethicists are likely to misuse them unless they make efforts to better understand the underlying technology. Along the way I will offer some reasons to think that this suggestion generalizes beyond machine learning applications in particular.

Hosted by the William H. Miller III Department of Philosophy