
Prioritising Transparency in AI Ethics

~Dhruv Somayajula*

 

Abstract


This article discusses the importance of prioritizing transparency as a means of achieving the goals of ethical AI, commonly summarised in ‘responsible AI’ principles such as reliability, inclusivity, privacy, security and accountability. It examines recent developments in the EU through the OQ v Hesse case, which connects the right to meaningful information on the workings of automated systems with the need for transparency. It then surveys parallel legal mechanisms adopted by other countries taking a similar approach, and highlights why transparency must be prioritised to achieve the goals of ethical AI. Lastly, the article discusses what meaningful transparency must aim to achieve, and sets out recommendations to help achieve it through XAI techniques or ex-ante standards for high-risk AI systems.


Introduction


The rapid advances seen in artificial intelligence (AI) systems in recent years have grabbed widespread social attention. Today, AI-based systems are seen as the next civilizational leap in technology, capable of performing diverse functions previously performed only by humans. In this context, the conversation around the AI system lifecycle and its impact on humans is an evolving one. AI systems come in many forms and serve various purposes, be it automated decision-making (ADM), generative, recommendatory, cognitive, analytical or predictive functions. Each type of AI system raises unique concerns. For example, the use of facial recognition technology (FRT) raises issues of privacy and bias, while the use of generative AI challenges existing notions of misinformation and copyright law. Over the years, we have identified some basic thematic concerns that cut across most AI systems because of their importance to core human rights. The design, deployment and outcomes of AI systems must be aligned with human values.


To this end, certain core themes of responsible AI (RAI) principles have been identified by both private entities and government agencies. In February 2021, NITI Aayog published the first part of the ‘Responsible AI for All’ series, citing the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and protection and reinforcement of positive human values. This is a comprehensive set of principles that aims to cover all aspects of AI impact and potential AI harm. Through this post, I argue that among these principles, transparency is crucial to achieving the remaining RAI principles, and must be prioritised for that reason.


Meaningful information regarding use of AI systems


Use of AI systems in decision making


Do individuals who are subject to an AI system have meaningful information regarding its processes and use? In the last few years, ADM systems have been used for several decision-making and recommendation services, such as credit-scoring tools, calculating the likelihood of recidivism in sentencing assessments, reviewing resumes for job applications, and recommending the grant of visas or asylum based on automated review of applications. These decisions are taken either entirely by an AI system or by a human relying primarily on its recommendation. Recently, a judge in the Punjab and Haryana High Court made headlines by relying on ChatGPT, a chat-based generative AI application, for bail jurisprudence applicable to the petitioner’s specific circumstances. The use of such AI systems, where the training datasets and internal biases may not be common knowledge to all their users, can cause unfair and unintended outcomes. A lack of information regarding the goals and parameters driving the outputs of AI systems can undermine meaningful interaction with them.


OQ v. Hesse: transparency and the right against automated decision-making


To counter the effects of such ADM systems, several countries provide a broad right against being subject to decisions made by ADM systems. This provision is most notably seen in Article 22 of the EU General Data Protection Regulation, 2016 (GDPR). Article 22 provides a right not to be subject to decisions based solely on automated processing where the decision produces legal effects concerning the individual or similarly significantly affects them.


The first case seeking enforcement of this right against a credit-scoring system’s decision has been raised before the CJEU. In OQ v Hesse, the CJEU is faced with the question of whether the ‘automated establishment of a probability value concerning the ability of a data subject to service a loan in the future constitutes a decision based solely on automated processing’. The Advocate General has delivered their opinion, which serves as an independent, non-binding recommendation for the CJEU’s consideration. In it, the Advocate General takes the view that this automated probabilistic value (determined by the credit-scoring algorithm in question) would constitute a decision that has a legal or similarly significant effect, and is therefore open to an Article 22 challenge.


The Advocate General’s opinion highlights the role of transparency and the information to be disclosed by the controller to the data subject when taking such a decision, including the personal data of the data subject used in the process and the criteria used to calculate their credit score. Article 15(1)(h) of the GDPR provides a data subject with the right to access information regarding the personal data processed by ADM systems. Article 15(1)(h) also entitles data subjects to receive meaningful information regarding the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. The Advocate General has affirmed the same, stating that ‘this provision must be interpreted as meaning that it covers, in principle, also the method of calculation used by a commercial information company for the purpose of establishing a score, provided that there is no conflicting interests worthy of protection.’ This case foreshadows the necessarily intertwined nature of the right to access information on the workings of AI systems and the right against being subject to decisions made by an ADM system.


Global parallels to the twin rights


These twin rights are also guaranteed in other jurisdictions, a timely addition to the digital rights of citizens across the world. In China, Article 15 of the draft Internet Information Service Algorithmic Recommendation Management Provisions, 2021 empowers users to demand an explanation from ADM service providers where algorithms are used in ways that create a major influence on a user’s rights and interests. Article 20 of the Lei Geral de Proteção de Dados (LGPD), i.e., Brazil’s data protection law, entitles data subjects to receive clear, meaningful and adequate explanations regarding the criteria and processes used by the ADM system, subject to commercial and trade secrets. Article 6.2.3 of Canada’s Directive on Automated Decision-Making, 2019 requires all government programs using ADM systems to provide affected individuals with a meaningful explanation of how and why the decision was made. Similar rights form part of the law in South Korea and South Africa as well, indicating the breadth of regulatory coherence on this issue. Further, Article 10 of the EU’s latest draft Artificial Intelligence Act prescribes transparency obligations regarding the original purpose of data processing and the need to process sensitive personal data. Such laws highlight the need for transparency as an important aspect of interacting with AI systems.


Prioritizing transparency to achieve RAI principles


As we see in contemporary legal regulation around the use of AI systems for decision-making or recommendatory services, transparency serves as a key feature in ensuring existing user rights regarding the use of their personal data. In addition, transparency also serves to achieve the goals of RAI principles such as reliability, inclusivity, privacy, security and accountability.


AI systems are generally trained through unsupervised machine learning, deep learning, reinforcement learning or semi-supervised learning. These training techniques allow AI systems to learn and base their decisions on insights gleaned from vast datasets and interactions within a simulated environment. Primarily, the use of personal data at the training stage raises concerns of violating individual privacy through possible re-identification. Further, such techniques also give rise to the ‘black box’ issue within AI systems, where even the developer of the AI system is unable to explain the internal logic and reasoning chosen by the AI system to arrive at its decision, causing a lack of transparency. Additionally, the lack of transparency is not limited to the training of algorithms per se. There is a wider aspect of opacity in terms of the data chosen to be processed, the impact of the AI system on society, the AI’s purpose, and its metrics and parameters for generating the output. The need for comprehensive transparency, thus, is based on a larger need for identifying biases, issues with accuracy and the origins of incorrect assumptions resulting from contaminated or incomplete datasets. By providing transparency throughout the AI lifecycle, we can shed light on potential pitfalls and ensure accountability.


AI systems further run the risk of failing to be transparent about their societal impacts, including the goals they pursue and the metrics and criteria chosen to achieve those goals. The fairness and necessity of those metrics or goals are important elements of meaningful transparency around their use, with users having a right to know the larger implications of the use of AI systems. The inherent opacity of self-learning AI systems also raises concerns regarding the detection of biases or lapses in accuracy caused by corrupted or incomplete datasets, or biases learned from historically biased datasets. Lastly, AI systems are developed in various stages (such as algorithm coding, training and testing). Any deployed model passes through several entities, making it very difficult to identify where an inaccuracy or bias has crept into the model. This makes assessing legal liability for harms emanating from the AI system a challenge, posing an accountability issue commonly referred to as the ‘many hands’ problem.


For these reasons, it is essential to prioritize the transparency obligations relating to any AI system in order to achieve the goals of RAI principles such as reliability, inclusivity, privacy, security and accountability. To summarize, meaningful transparency around the design and deployment of AI systems aims to:


1. identify the source of incorrect assumptions and biases learned through datasets;

2. enable data principals to understand the rationale behind significant decisions that impact their lives, thereby giving effect to the right to access information; and

3. identify the stage of design or deployment at which the AI system’s inaccuracies crept in, allowing a greater level of accountability and mitigating, to an extent, the ‘many hands’ problem inherent to AI systems.


Recommendations for prioritizing transparency


The use of ex-post explainable AI (XAI) techniques


Today’s AI market features several entities seeking to design XAI techniques, such as LIME, to provide explanations for outputs offered by otherwise inscrutable AI systems. Common methods include developing AI systems that review the decision-making process of another AI system and explain it in a manner understandable to humans (by representing the data as decision trees or visualizations), or using AI models that create detailed logs of every step in the decision-making process.
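
To make the idea concrete, the short sketch below is a minimal, illustrative example of how a post-hoc technique such as LIME expresses a single automated decision as a set of human-readable feature weights. It assumes Python with the open-source lime and scikit-learn packages; the random-forest model and the iris dataset are generic stand-ins for an otherwise opaque scoring model and its training data, and are not drawn from any system discussed in this article.

# A minimal sketch, assuming Python with the open-source lime and scikit-learn packages.
# The random-forest model and the iris dataset are placeholders for an inscrutable
# scoring model and its training data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the inputs around a single record and fits a simple local model,
# producing per-feature weights that approximate why this prediction was made.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(feature, round(weight, 3))

Each line of the output names an input feature and the weight it contributed towards the decision, which is the kind of human-readable account of an individual decision that ex-post XAI aims to provide.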


These can be understood as companies providing an ex-post mechanism to better explain the flow of the decision-making process within the AI system. Examples of companies using ex-post systems to explain their AI systems, or build in fairness, include Google’s ‘What-If’ tool, Facebook’s ‘Fairness Flow’ tool, Microsoft’s ‘InterpretML’ software, IBM’s ‘AI Fairness 360’ toolkit, and several others. Ex-post mechanisms to explain the decision-making process of AI systems can help users gain a better understanding of the criteria used and the data processed to arrive at the output decision. However, these technologies are nascent, limited in scope and often ineffective, with current forms of XAI remaining largely incomprehensible to ordinary users lacking technical know-how in this field. To achieve the purposes of transparency, XAI must be reimagined to provide meaningful transparency to its users in order to achieve RAI principles.


Ex-ante implementation of standards for specific use-case scenarios


Another alternative is the development of ex-ante technological transparency standards for different use-case scenarios of AI systems. The digital sector is already familiar with standards-based coherence. Technical standards developed by standard-setting organisations can similarly be used to ensure the development of interoperable, industry-wide standards for transparency based on the use-case features of AI systems. Some examples include the European Commission’s standardisation request to CEN-CENELEC to develop standards for safe and trustworthy AI, and the IEEE 7001-2021 standard prepared by the Institute of Electrical and Electronics Engineers.


However, this idea runs into three major conceptual difficulties: the feasibility of creating such a set of interoperable standards for all use-cases, the general increase in the compliance burden on AI developers due to ex-ante standards, which may stifle innovation, and the risk of any such standards being overbroad in their scope. As an immediate next step, ex-ante standards should be narrowly imposed on high-risk AI systems that may significantly harm an individual owing to the nature of their role or functions. For example, companies that deploy AI systems offering decision-making, recommendatory or probabilistic scoring services which significantly or legally affect individuals should be encouraged to adopt meaningful transparency mechanisms. Here, meaningful transparency mechanisms may include mechanisms to explain the logic of AI systems, and governance frameworks that disclose the kind of data processed, the algorithms used, the operational purposes and the criteria set for the AI system to make its decisions.


 

*Dhruv Somayajula is a Senior Resident Fellow at the Vidhi Centre for Legal Policy. The above views are personal. The author would like to thank Siddharth Johar for his assistance with the article.
