Robots and artificial intelligence. Allies or enemies of human beings? Human dignity is at stake. The words of theologian Emmanuel Agius

At first glance, theology and robotics appear to have little in common. But they are not two distant universes. Theology must engage in dialogue with robotics, which is shaping the life of present and future generations at an ever faster pace, raising questions and offering leadership and moral guidance. This is the view of Emmanuel Agius, Maltese theologian and member of the European Group on Ethics in Science and New Technologies (EGE). "We should be careful: robotics is divisive." SIR interviewed him.

“Faith is not a disincarnate reality, and theological reflection must not remain indifferent to progress and to the impact of artificial intelligence (AI) and of robotics on man and on society. In fact, it should engage in dialogue with technology and science.” This is the belief of Emmanuel Agius, dean of the Faculty of Theology at the University of Malta and member of the European Group on Ethics in Science and New Technologies (EGE, an independent advisory body of the President of the European Commission), whom we met in the Vatican on the sidelines of the workshop “Robo Ethics. Humans, machines and health,” held a few days ago on the initiative of the Pontifical Academy for Life.

Robots and AI: are they allies or enemies of human beings?
Both. They can improve human wellbeing and advance human dignity. However, they can also denigrate or manipulate that same dignity. Robots promote workers’ dignity when they help decrease or eliminate dangerous, repetitive, humiliating jobs, and when they contribute to increasing their efficiency and humaneness. Conversely, their pervasive use represents a threat to human dignity: for example, when the greater efficiency linked to robotization, automation and digitalization leads to replacing a considerable number of workers with intelligent machines, and these people cannot easily find another job in a complex society such as ours. Robots protect human dignity when they monitor public spaces to ensure people’s security and protection, or when they are used by the military for defence purposes. Algorithmic monitoring of specific workplaces can increase workers’ safety; however, pervasive surveillance can pose a threat to dignity and privacy. Self-driving cars can improve quality of life, but they can also represent a life-threatening danger as a result of technological or sensor failures, or of hacking.

What is your view of the so-called “carebots” to assist elderly and disabled people?
Robot caregivers can replace human caregivers in fatiguing tasks, and they can provide mechanical help in assisting the elderly and the disabled. A robot capable of stimulating the cognition of a patient affected by dementia, or of carrying out a number of daily activities, can be useful. But a machine cannot replace the human dimension of patient care.

“Artificial care” or “artificial empathy” are unacceptable as such.


Some claim that “activity” resembling human action can be performed by more “intelligent” robots…
Some believe that they possess “individuality” marked by intentions, objectives and emotions, as well as a certain degree of self-awareness or conscience, or that they are autonomous and capable of taking decisions. In reality, anthropology and theological ethics have shed light on the real nature of the human person and of moral action, triggering debates on robotic activity and on the true meaning of intentionality, freedom of will, the role of emotions and desires, moral conscience and responsibility. These are extremely complex dilemmas. The emerging “machine ethics” that aims at providing robots with principles or procedures to solve ethical dilemmas highlights the scope of contemporary technological reductionism, strongly criticized by Benedict XVI in Caritas in Veritate.

Thus human activity cannot be equated with robotic activity…
Moral action is an inherent quality of human beings, not of machines. Robotic activity has its origin in the work of designers and programmers, as well as in the learning processes that robots’ cognitive systems have been exposed to. The purpose of their activity is set by algorithms and artificial intelligence. For that reason, regardless of its purported intelligence,

a robot cannot be ascribed an intrinsic intentionality. Similarly, its activity has no moral value.


In the light of the above, some people wonder whether a robot’s morality could be “programmed” and whether it could be taught to distinguish between good and evil… Perhaps an intelligent machine could one day acquire the skills to handle a situation that does not conform to the rules; in other words, it could be programmed with decision-making processes, or it could calibrate its own algorithms in such a way that its behaviour might one day be called “ethical.” Yet most scholars working on the ethics of machines agree that robots are still far from becoming “ethical agents” comparable to human beings.

Who bears moral and legal responsibility for the more autonomous robots?
This is another delicate question. Can we speak of a responsibility shared among robots, designers, engineers and programmers? None of these agents can be singled out as the primary source of the activity. Moreover, in the debate on the responsibility of robots, being able to trace all robotic actions and omissions is of the essence.

What are the impacts of robotics on justice, solidarity and the common good?
It is necessary to avoid creating new forms of poverty by widening the existing divide that separates developing and developed countries.


A robotic divide must not compound the digital divide

Yet the rapid progress of robotics requires skilled workers; society must introduce protection measures for vulnerable population brackets, while the requalification of workers and of society as a whole is equally important.

From the perspective of Christian ethics, is the military use of robotics legitimate?
The fact that weapons have become more precise does not mean that the ability to distinguish between the right and the wrong target has increased. There remains the major problem of the proportionality principle: is the “remote” pilot capable of balancing military advantage with the loss of human lives? As regards autonomous weapons, is it acceptable that machines make life-or-death decisions? No matter how complex a machine may be, is it legitimate to delegate the murder of human beings to a machine?

Thus what is the role of theology in the face of these challenges?
To extend the horizon of meaning, to restore the centrality and the dignity of the human person, raising questions that clarify the goals and the means to attain them. As Pope Francis reminds us in Laudato si’, only if guided by ethics, by the principle of the dignity and centrality of the human person, and by a solid moral conscience can these innovative technologies improve the life of man and be at the service of what is good. Theology and robotics are not two distant universes: theology must engage in dialogue with robotics, raising questions and offering leadership and moral guidance to present and future generations.