In this episode, I explore robot rights with Mark Coeckelbergh’s paper, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna and has held many other prominent positions in the world of philosophy. Much of his work covers the ethics of technology; his titles include “Robot Ethics,” “The Political Philosophy of AI,” and “Why AI Undermines Democracy and What To Do About It,” published this year.

I summarized the main points of one of his older papers tackling robot rights so a general audience could better understand his proposal and his critiques of other philosophical frameworks.

The first half of the paper discusses frameworks such as deontology, utilitarianism, and virtue ethics. Coeckelbergh describes how each viewpoint characterizes rights, morality, and the consideration of non-human beings. He incorporates some aspects of virtue ethics into his proposal but offers critiques of all three frameworks. He also believes that granting robots rights in the same way we grant them to humans would be problematic, yet he aims to find a way to allow them some ethical consideration.

Coeckelbergh then presents his own approach: he proposes that robots should gain moral consideration based on our social relationships with them. He believes this framework resolves the issues found in the previous philosophies. He analyzes and compares Western and Eastern social philosophies against his ideas so that he can articulate his position without building it from the ground up. Later segments of the paper add more clarity to his framework, but Coeckelbergh admits that he cannot construct a fully functioning system within the paper itself.

There are more details and interesting points in the paper not covered in the episode, and I highly recommend reading it yourself.

“Robot Rights? Towards a Social-Relational Justification of Moral Consideration” is fully accessible on SpringerLink.