The emergence of technologically sophisticated artificial intelligence (AI) may come to complicate our traditional understandings of who or what is capable of moral action. In this thesis I provide an exposition of the various ways in which we may conceptualize moral agency and its relationship to artificial agents (AAs), asking specifically whether the notion of an artificial moral agent (AMA) is conceptually coherent. I conclude, however, that no clear metaphysical line separates artificial (or technological) moral agency from human (or subjective) moral agency. This does not mean that humans are not moral agents, or that technological artifacts are moral agents. Rather, it suggests that our agency has always been coupled with technological artifacts, and that these artifacts are mediators of our agency. In light of this, I propose we shift the debate around technological agency into the normative domain.
Following this normative shift, I investigate the question of responsibility: specifically, whether sufficiently independent AI systems might complicate our moral evaluations of certain situations and actions. Some argue that we will soon be faced with so-called responsibility gaps: situations in which an AI performs a morally relevant action, but no human being can legitimately be held responsible for it. I argue that such "gaps" in responsibility are not unique to technology. Our own practice of holding one another responsible exhibits the very features often cited in defense of a uniquely technological responsibility gap, and our existing theories of responsibility are able to account for these difficulties. For this reason, I conclude that the threat posed by responsibility gaps is rather low.