Institut für Philosophie
Leibniz Universität Hannover
Königsworther Platz 1
Doctoral Candidate, DFG Research Training Group 2073 "Integrating Ethics and Epistemology of Scientific Research", Leibniz University Hannover
Education
2007 – 2009
Master of Science in Neuroscience, Brandeis University, USA
2005 – 2007
Bachelor of Arts in Philosophy, University of Maryland, College Park, USA
Publications
(2020) Identifying and Assessing the Drivers of Global Catastrophic Risk: A Review and Proposal for the Global Challenges Foundation. Global Challenges Foundation. (Co-authored with Simon Beard.)
(2020) Can Anti-Natalists Oppose Human Extinction? The Harm-Benefit Asymmetry, Person-Uploading, and Human Enhancement. South African Journal of Philosophy 39(3), 229–245.
(2019) Existential Risks: A Philosophical Analysis. Inquiry: An Interdisciplinary Journal of Philosophy.
(2019) The Possibility and Risks of Artificial General Intelligence. Bulletin of the Atomic Scientists 75(3), 105–108.
(2018) Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. New York, NY: CRC Press. 357–374.
(2018) Who Would Destroy the World? Omnicidal Agents and Related Phenomena. Aggression and Violent Behavior 39, 129–138.
(2017) Moral Bioenhancement and Agential Risks: Good and Bad Outcomes. Bioethics 31, 691–696.
Presentations and Talks
“Three Paradigms in Existential Risk Studies: Where the Field Is Today, and How It Got There”. Envision Conference, Princeton University, USA.
“Climate Change, Engineered Pandemics, and Artificial Intelligence: Understanding the Greatest Threats to Civilization This Century” (hosted by Dr. Matthew Rendall). University of Nottingham.
“How Tragic Would Human Extinction Be? Convergent Arguments for Making the Survival of Our Lineage a Global Priority”. Centre for the Study of Existential Risk, University of Cambridge.
“When Did We Realize That We Could Die Out?”. History and Philosophy of Science Department, University of Cambridge.
“Existential Risks at the Center for Human-Compatible AI”. University of California, Berkeley.
“When Will Humans Die Out?”. BBC Analysis.
“Agential Risks: A New Topic for Research”. Future of Humanity Institute (FHI), University of Oxford.