Fairness and Transparency in AI

Tirth Patel
3 min read · Apr 15, 2021
Can we really trust AI?

When developing AI technologies, it is vital to look not only at the technical perspective but also at the broader social perspectives, because these systems are susceptible to generating bias. Personally speaking, the way AI is currently embedded in areas such as advertising, online recruitment, financial services, and securities can lead to inequalities.

This gives rise to practical and larger political critiques: it is not just that the technology is faulty; even if it worked ubiquitously and accurately, it would still infringe dangerously on people's liberties and personal lives. Because of these consequences, there is a need for practices that move the AI research and tech sectors toward improved fairness, greater diversity, transparency, and inclusion.

Moreover, principles like fairness, ethics, and morality in AI revolve around the question of "who owns the code": technology created by one demographic tends to be effective and fair mostly for that demographic alone. Such tools are also more likely to be controlled by people in positions of power and tested on the people with the least control. This gives rise to discrimination against minority subgroups, who may feel, to a certain extent, that they are being made part of a system's testing against their will.

Though it is hard to define "fairness" in the context of AI, we can define what is unfair. Thus, to mitigate unfairness and bias in AI systems, it is vital to work with the communities most exposed to racial or historical bias and to ensure that their voices and opinions are lifted up and included in the discussion and development of such systems.
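To make "what is unfair" concrete, one common starting point is a group fairness check such as demographic parity: comparing the rate of positive decisions a model hands out across groups. The sketch below is only illustrative; the function and data (y_pred, group) are hypothetical, and a near-zero difference does not by itself certify fairness.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-decision rates between the most- and
    least-favored groups. Values near 0 mean all groups receive
    positive decisions (e.g., loan approvals) at similar rates."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions for two demographic groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Different fairness metrics can conflict with one another, so a check like this is a conversation starter with the affected communities, not a substitute for one.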

Moreover, fairness in machine learning and AI applies to the whole end-to-end system, from who is developing it and how the data is collected, through to the deployment of ML models and the outcomes they produce.

However, fairness and transparency are two different concerns. Transparency means providing documentation and reporting when datasets and models are published, summarizing things like training data, evaluation metrics, and model design. Beyond this, there is a need to improve the reproducibility and transparency of datasets and to build robust, scalable fairness methods that can be integrated seamlessly into existing and new AI pipelines.
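One lightweight way to practice this kind of transparency is to publish a "model card" style summary alongside a model. The sketch below is only an illustration of the idea; the field names and example values are my own, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal documentation published alongside a trained model."""
    model_name: str
    intended_use: str
    training_data: str        # where the data came from, how it was collected
    evaluation_metrics: dict  # results, ideally broken down by subgroup
    known_limitations: list = field(default_factory=list)

# Hypothetical card for an imagined lending model.
card = ModelCard(
    model_name="loan-approval-v1",
    intended_use="Rank applications for human review; not automated denial",
    training_data="2015-2019 applications; under-represents applicants under 25",
    evaluation_metrics={"accuracy_overall": 0.91, "accuracy_group_b": 0.83},
    known_limitations=["Not validated outside the originating region"],
)
```

Publishing such a summary with every model release makes the training information and evaluation metrics mentioned above inspectable rather than implicit.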

Is there a need to increase contextual awareness in AI developers?

From my perspective, there is a need to change how these AI disciplines are taught and practiced. Being a good AI engineer is not only about technical skill; it is also about caring for society.

We can try to detect and mitigate bias, or at least be more transparent and fair by documenting the areas where a system is susceptible to bias. But implementing this requires that we ourselves be fair, and the real world is complex enough that one group or another will always be at risk of discrimination.
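One concrete form that documentation can take is a disaggregated evaluation: reporting error rates per group instead of a single aggregate number, so the areas susceptible to bias are visible on paper. A minimal sketch with made-up data:

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, group):
    """Error rate per group, so that disparities hidden by an
    aggregate accuracy score become visible."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {g: float((y_true[group == g] != y_pred[group == g]).mean())
            for g in np.unique(group)}

# Made-up labels and predictions: 87.5% overall accuracy hides the
# fact that the only error falls on group "b".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
print(error_rate_by_group(y_true, y_pred, group=["a"] * 4 + ["b"] * 4))
# {'a': 0.0, 'b': 0.25}
```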

Furthermore, there is a necessity of including the general public, end-users, ethicists, politicians and engineers in AI research. Secondly, AI researchers need to work closely with street-level bureaucrats who are actually considered authoritative in making decisions and implementing policies so that there is increased awareness of potential consequences of algorithmic bias, robust modeling and fairness in AI.


Tirth Patel

Data Science Graduate Student at University of Southern California