Is AI Just or Unjust?

Tirth Patel
4 min read · Apr 12, 2021

Nowadays, AI can enable autonomous operation in every sector, but are humans able to use these technologies in the right way? Here, I want to explore AI as used in LAWS (Lethal Autonomous Weapons Systems), especially robots used by the military. This article will also throw light on ‘Just’ and ‘Unjust’ algorithms and reveal the two faces of autonomous technology, ‘Boon’ or ‘Bane’, depending on the conduct of use and ethics. Rather than simply opposing autonomous technologies, we should insist on mitigating their risks by setting different levels of permissibility for fighting robots and human soldiers.

“With proper safeguards, a Just state could use LAWS to enforce a lasting peace and a flourishing post-war society. But an Unjust state could use LAWS to enforce its tyranny without the possibility of revolt.”

Talking about the algorithms used in this context, I believe that the risk of being wrongly judged by an autonomous algorithm depends on human intervention and on whether a Just or Unjust state controls the decision-making process. For instance, in a “human in the loop” system for online recruitment, decisions like resume selection and hire/no-hire are made by humans, so the chance of a wrong judgement is low. There is a much higher chance of being wrongly judged in a “human out of the loop” system, where there is zero human intervention and all such decisions are made by an automated resume checker that reviews resumes based on the keywords they contain. The chances of getting selected increase if the resume contains statements starting with keywords like “Improved” or “Enhanced”.
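To make the failure mode concrete, here is a minimal sketch of such a keyword-based screener. The keyword list and the sample resumes are hypothetical; the point is only that, with no human in the loop, a keyword-stuffed resume outscores a substantive one.

```python
# Hypothetical "human out of the loop" resume screener: it scores resumes
# purely by counting lines that start with a favored action keyword,
# ignoring actual qualifications.

ACTION_KEYWORDS = {"improved", "enhanced", "led", "optimized"}

def keyword_score(resume_text: str) -> int:
    """Count how many lines start with a favored action keyword."""
    score = 0
    for line in resume_text.lower().splitlines():
        stripped = line.strip()
        first_word = stripped.split(" ")[0] if stripped else ""
        if first_word in ACTION_KEYWORDS:
            score += 1
    return score

strong_candidate = "Built the billing system.\nMentored two junior engineers."
keyword_stuffed = "Improved synergy.\nEnhanced paradigms.\nOptimized buzzwords."

# The stuffed resume outranks the substantive one, and no human catches it.
print(keyword_score(strong_candidate))  # 0
print(keyword_score(keyword_stuffed))   # 3
```

A human reviewer would immediately see that the second resume says nothing, which is exactly the judgement the fully automated system cannot make.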

INTERDEPENDENCE THESIS

The problems here are largely caused by the interdependence of the fundamentals of just war theory: are the parts of the theory independent of one another, or not?

“Is the justice of a war itself independent of the justice of how you fight it, or is it not?”

Here, the problem is that views can contrast sharply over the justifications for using such systems in war. The question arises: in different scenarios, what is the implied morality of the system that is trusted to take certain decisions?

JUST WAR THEORY

“Just” war theory is traditionally divided into three parts that determine whether a nation state is justified in the context of war: “Jus ad Bellum” concerns the acceptable justifications for using armed force and publicly declaring war; “Jus in Bello” concerns right intentions and the conduct of a ‘Just’ cause during the war; and “Jus post Bellum” concerns the conduct to be followed after the end of the war.

What makes an algorithm Just or Unjust is the intention behind it and the conduct of its use. Specifically, “Just” means it must have a right intention of use and must follow the proper conduct of use; “Peacekeeping”, for instance, is a major part of just war theory. “Unjust” means conduct that disregards ethics and lacks moral rightness; misuse of personal data or saved passwords by an application’s own development team is an example of unjust conduct. However, at most one side in any conflict or war can be just, while the other side is subject to unjust conduct.

Unjust algorithms do not discriminate between people: they apply the same disproportionate force, affecting everybody equally. Consider the case when the US dropped nuclear weapons on Hiroshima and Nagasaki, targeting and affecting innocent people (non-combatants) and soldiers fighting in the war (combatants) alike.

There are several drawbacks to automation in the decision-making systems employed across sectors. For instance, no one would like to compromise their privacy by sharing their shopping habits or occupation history to have a credit score calculated. Digital learning environments are examples of AI in education that can affect students’ performance and morality, owing to less human involvement and cheating during online exams. Moreover, a system that judges a professor’s performance on the basis of students’ overall scores per year may be misleading: some students may not have performed well that year, yet may still have done better than the previous year under that professor.
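The professor-evaluation flaw can be shown with a small worked example. The scores below are hypothetical; the point is that a raw yearly average and a per-student improvement measure can tell opposite stories about the same professor.

```python
# Hypothetical student scores: a weak cohort that nevertheless improved
# substantially under this professor. Judging on raw averages alone
# misses the improvement.

def raw_average(scores: list[float]) -> float:
    """Mean of this year's scores -- what the naive system looks at."""
    return sum(scores) / len(scores)

def average_improvement(last_year: list[float], this_year: list[float]) -> float:
    """Mean per-student gain from last year to this year."""
    return sum(t - l for l, t in zip(last_year, this_year)) / len(this_year)

last_year = [40.0, 45.0, 50.0]   # same students' scores before this professor
this_year = [55.0, 60.0, 62.0]   # their scores after a year with this professor

print(raw_average(this_year))                     # 59.0 -- looks mediocre
print(average_improvement(last_year, this_year))  # 14.0 -- a large gain
```

A system that only sees the 59.0 average would rate this professor poorly, even though every student gained well over ten points in a single year.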

AI ethics cuts all the way up and down, but often at the cost of restricting human freedom and security. To address such problems, we could set different rules and levels of permissibility for different contexts and people. For instance, the level of surveillance applied to innocent people should be different from that applied to those engaged in criminal activity.

By implementing ‘Just’ states and moral rightness, we can ensure that newly developed AI technologies do not allow a giant tyranny to take root in the future. Hence, it is most important to follow the proper conduct of use and ethical design while creating any autonomous technology.


Tirth Patel

Data Science Graduate Student at University of Southern California