Navigating transparency in the EU’s Artificial Intelligence Act: a policy proposal

The EU’s AI Act demands multiple versions of transparency from AI systems. It requires sufficient transparency and human supervision from high-risk AI without defining what either means. This policy proposal points to a model that can solve this problem.

October 28, 2022

Early last year, the European Commission launched a much-anticipated legislative proposal: the AI Act, a draft law to promote ‘trustworthy AI’ in the EU. It was the culmination of several actions over the preceding years through which the EU has taken the lead in legislating for transparency in AI systems. The White Paper on AI issued by the European Commission in February 2020 and the European Parliament’s Framework of Ethical Aspects of AI in October 2020 included transparency in their ethical and legal frameworks respectively. In line with these documents, the European Commission’s proposal for a Regulation on Artificial Intelligence, released in April 2021 (the AI Act), clearly articulates transparency requirements in multiple forms, notably under Articles 13, 14 and 52.

The attention of the global technology policy community and industry was squarely on the AI Act. This was for two reasons. First, despite the proliferation of ethical AI principles and policy documents over the last few years, there has been no regulatory proposal of note. [1] Most policy documents, while acknowledging the need for ethical AI and regulation, had shied away from venturing into this tricky terrain. Second, strict regulations in the EU have had a domino effect, particularly in the digital technology domain. Most global corporations use them as a benchmark to avoid having to comply with multiple jurisdictions. The global impact of the EU’s regulation of digital technologies has perhaps been more profound than that of any other regime, as it influences emerging economies striving to offer adequate protections to their citizens while helping their local firms compete globally.

A regulatory proposal for AI also responds well to concerns about its unchecked growth. A recent IBM study revealed that of the 7,500-odd businesses it surveyed globally, 74% had not taken key steps to reduce bias, and 61% had paid little attention to the explainability of AI-powered decisions.

Why is algorithmic transparency needed?

The lack of sufficient information severely compromises human agency and renders rights ineffectual. Our ability to make autonomous choices depends on our capacity to understand the environment we are engaging with while making those choices. For instance, individuals who do not know how a service provider could combine their personal data with other datasets to draw inferences about them lack the information necessary to make autonomous choices.

Pedro Domingos, in his book The Master Algorithm, reminds us that learning algorithms [2] have remained opaque to observers, calling them “an opaque machine that takes inputs, carries out some inscrutable process, and delivers unexplained outputs based on that process.” Users have no knowledge of how a machine learning algorithm turns terabytes of data, of different types and from varied sources, into the particular insights that inform decisions affecting them. This would be merely worrying, but it becomes unacceptable because AI and machine learning are all-pervasive. They are present in all facets of our lives, from the personalisation technology used by the likes of Amazon and Netflix that determines what media we consume, to stock-picking algorithms that significantly impact markets and the economy.

Despite the widespread use of AI in all aspects of our lives, we often lack the ability to understand how it works, and consequently to question the decisions it takes for or about us. The fallacy of the supposed objectivity of machine learning algorithms and Big Data is well documented. In his talk on Big Data, the Internet and the law, Michael Brennan discusses various studies that show how algorithms can magnify bias. In the case of machine learning, [3] these problems are exacerbated because the discriminatory effects of AI arise regardless of any active intent to discriminate.

The goal of transparency initiatives is to make the exercise of power, particularly by institutions, accessible and visible. Transparency is, therefore, an instrumental value, which leads to the realisation of other rights, and enables greater accountability through due process. [4]

Locating Transparency in the EU’s AI Act

There are several transparency-related provisions in the AI Act. They diverge widely both in the degree of transparency they mandate and in the kinds of AI systems they apply to.

Article 13 of the AI Act requires that high-risk AI systems be designed and developed in such a way that their operation is sufficiently transparent for users to interpret the system’s output and use it appropriately. Further, such systems need to provide instructions for use that include “concise, complete, correct and clear information that is relevant, accessible and comprehensible to users”. These are among several positive obligations that high-risk systems must discharge, which also include risk assessment and mitigation systems; high-quality datasets to minimise risks and discriminatory outcomes; activity logs to ensure traceability of results; detailed technical documentation; human oversight measures to minimise risk; and a high level of robustness, security and accuracy.

The focus of Article 14 is on the individuals who will oversee AI systems. It mandates that high-risk AI systems be designed with appropriate human-machine interface tools so that the persons overseeing them can fully understand the capacities and limitations of the system, remain aware of the automation bias involved in (over)relying on the decision-making outputs of an AI system, and decide against using, or intervene in, its operation when safety and fundamental rights are at risk.

Article 52(1) also requires that AI systems intended to interact with individuals be designed and developed in such a way that those individuals are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.

While it imposes transparency obligations, the draft legislation remains silent on the extent of transparency that will be required of AI systems and on what their interpretability to users will mean.

The many facets of transparency in the AI Act

The provisions of the EU’s AI Act discussed above also ask for standards designed to ensure that the operation of AI systems is sufficiently transparent to enable users to interpret their output. This approach emphasises the need for transparency at the level of an algorithmic system rather than creating protections or entitlements with respect to individual decisions. There is a range of governance measures detailed in the EU’s AI Act, and transparency forms a critical part of these measures:

  1. prohibition on use cases which involve unacceptable risks (manipulative subliminal techniques, exploitation of disability, social scoring purposes, remote biometric identification by law enforcement);

  2. wider range of regulations (registration, sector-specific assessments, self-assessments, certifications, testing, risk management, transparency) on high-risk AI systems (biometric identification; education and vocational training; migration, asylum and border control management; worker management; law enforcement; access to essential private and public services and entitlements);

  3. limited transparency obligations (self-identification of the system as using AI) for limited-risk AI systems (systems that interact with humans; emotion recognition systems; biometric categorisation systems; AI systems that generate deep fakes);

  4. no mandatory obligations (with encouragement to create codes of conduct) for other low-risk AI systems.

The provisions mentioned in the previous section touch upon only a few versions of transparency but already emphasise its different facets.

Let us consider Article 13. The operative part of the provision sets a legal threshold: the system must be sufficiently transparent to enable users to interpret its output and use it appropriately. If the proposal were to become law in its current form, its implementation would be complicated. Depending on how we answer several questions about the nature of the transparency required, that implementation could vary considerably.

Article 13 ties transparency to ‘interpretability’, but there is little consensus on what interpretability means in the explainable AI (XAI) literature. One source defines it as the AI system’s ability to explain, or to provide its meaning, in ‘understandable’ terms to an individual. [5] In the XAI literature, understandability is often defined as the ability of a model to make a human understand its function without any need to explain its internal structure or the algorithmic means by which it processes data internally. Another source defines interpretability in a contrasting manner, requiring traceability from input data to output. This would mean that the internal structure and the exact details of processing would also need to be transparent. If the EU were to adopt this second definition of interpretability, it would raise serious questions over the use of deep learning and neural-network algorithms in high-risk AI systems, as they render traceability extremely difficult, if not impossible.
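To make the contrast concrete, consider a minimal sketch, entirely my own illustration rather than anything drawn from the Act or the cited definitions, using scikit-learn and made-up data. Under the first reading, a human-readable account of a model’s behaviour (here, the coefficients of a simple model) may suffice; under the second, one would have to trace each input through the model’s internals, which for a neural network is a chain of weight matrices with no self-evident human meaning.

```python
# Illustrative sketch only: hypothetical data and models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three hypothetical input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical binary outcome

# Reading 1: understandability without exposing internals.
# A simple model's coefficients give a human-readable account of its behaviour.
simple = LogisticRegression().fit(X, y)
print("feature weights:", simple.coef_)

# Reading 2: traceability from input data to output.
# For a neural network, the 'trace' is a sequence of weight matrices and
# non-linear transformations with no self-evident human meaning.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)
for i, w in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
```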

Article 13 also details the form in which information needs to be provided (concise, complete, correct, clear, relevant, accessible and comprehensible) and lists several conventional transparency requirements needed to make any automated system workable: instructions for use, the contact details of the AI provider, the characteristics, capabilities and limitations of performance of the system, its foreseeable changes and expected lifetime, and human oversight measures. There are also provisions dedicated to creating technical documentation that contains the information necessary to assess compliance with other requirements, and record-keeping requirements aimed at providing traceability. While these additional details are useful, it remains unclear how exactly providing this information will enable interpretability.

Article 14 notably takes a different route. Instead of tying transparency to a term from the XAI literature, it simply states what this transparency must achieve. Implementation under Article 14 is therefore agnostic to the specific form of transparency, as long as it achieves human supervision in the manner described. This approach is useful in the absence of academic or legal consensus. However, Article 14’s scope is limited to human oversight in the implementation of the AI system; it does not extend to other stakeholders, including users and adjudicators.

Article 52, as mentioned earlier, mandates a particularly narrow version of transparency, one which makes visible only the fact that an AI system is being used.

Post facto Adequation as a Transparency Standard

In January 2019, I co-presented a regulatory proposal called post facto adequation [6] at FAT/Asia. I argued that where decisions are made by an AI system, it must be noted whether the system offers the opportunity for human supervision, such that any stakeholder in the lifecycle can demand an account of how a human analysis adequates to the insights of a machine learning algorithm. [7] This assessment of the opportunity for human supervision is based on the idea that where inferences are inherently opaque, the system must provide sufficient information about the model and the data analysed that a human supervisor is in a position to apply analogue modes of analysis to the available information and conduct an independent assessment.
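A minimal sketch of the kind of record such a system might expose is shown below. The field names, the credit-screening scenario and the review step are hypothetical illustrations of the idea, not an implementation of the proposal: the point is simply that the data and the inference are captured in human-readable form so that a supervisor can conduct and log an independent assessment.

```python
# Minimal, hypothetical sketch of a post facto adequation record.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AdequationRecord:
    case_id: str
    inputs: dict                   # the data the model actually saw, in readable form
    model_output: str              # the inference the system produced
    model_version: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_assessment: Optional[str] = None   # the supervisor's independent analysis
    adequates: Optional[bool] = None         # does that analysis support the inference?

def review(record: AdequationRecord, assessment: str, adequates: bool) -> AdequationRecord:
    """A human supervisor applies an independent, 'analogue' analysis to the same
    inputs and records whether it adequates to the machine's inference."""
    record.human_assessment = assessment
    record.adequates = adequates
    return record

# Hypothetical usage: a high-risk screening system flags a case for review.
rec = AdequationRecord(
    case_id="2022-1043",
    inputs={"income": 52_000, "debt_ratio": 0.61, "missed_payments": 3},
    model_output="reject",
    model_version="credit-model-v7",
)
rec = review(rec, "Debt ratio and repeated missed payments independently support rejection.", True)
print(rec.adequates)
```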

Let us briefly view this proposal within the emerging European approach toward algorithmic transparency. In our paper, we distinguished our approach from the so-called right to explanation under the EU’s General Data Protection Regulation, another regulatory response to the problems of algorithmic transparency. The right to explanation primarily requires that the general algorithmic logic of an automated system making decisions based on personal data be revealed. The scope of the right is limited: it applies not to all processing that involves automated means but only to processing carried out solely by automated means. Most critically, it is not clear how such explanations will aid individuals, or those representing their rights, in raising questions over specific instances in which the decisions of the automated system are unjust or opaque. [8]

The Digital Services Act further builds on this approach and requires transparency towards public authorities, independent auditors and vetted researchers. Further, the Digital Services Co-ordinators, the primary oversight authorities envisaged in the DSA, may require access to specific data necessary to assess the risks and possible harms brought about by a platform’s systems; data on the accuracy, functioning and testing of algorithmic systems for content moderation, recommender systems or advertising systems; or data on the processes and outputs of content moderation or internal complaint-handling systems. Once again, these are general transparency obligations intended to provide an overview of systems rather than of individual decisions.

The regulatory proposals in the AI Act cover positive obligations on the creation and deployment of AI systems; however, they say little about the transparency or redressal mechanisms available in individual instances of the use of an AI system. Article 52 is relevant for all users interacting with an AI system, but it mandates only a very limited transparency obligation, which extends to informing users that they are dealing with an AI system. Much of our focus has so far been on opening the black box. However, what I propose here is an approach that sidesteps the black box and strives not for complete transparency, but for a meaningful level of it.

Article 13’s requirement that the operation of AI be ‘sufficiently transparent’ to enable users to interpret the system’s output has the potential to operationalise a standard that can address individual use cases of AI. The real test lies in how the standard of ‘sufficiently transparent’ is defined for models intended to achieve algorithmic transparency. Similarly, Article 14 sets a requirement for human supervision without specifying how this criterion may be met.

I would argue that post facto adequation may present itself as a suitable regulatory standard. It draws from the standards of due process and accountability evolved in administrative law, where decisions taken by public bodies must be supported by recorded justifications. Where the decision-making of the AI is opaque enough to prevent such transparency, the system needs to be built in such a way that it flags relevant information for independent human assessment to verify the machine’s inferences. This expectation from administrative law is all the more important in the European context, with Article 1 of the Treaty on European Union and Article 41(2)(c) of the Charter of Fundamental Rights of the EU requiring, respectively, that decisions be taken as openly as possible to the citizen and that public authorities give sufficiently clear reasons for their acts and decisions. Jurisdictions such as the UK and US also impose statutory obligations that require administrative authorities to give reasoned orders.

The application of post facto adequation as a standard for ‘sufficient transparency’ will mean that when inferences from machine learning algorithms influence decisions in high-risk systems, they can do so only if a human agent is in a position to look at the existing data and discursively arrive at the same conclusion. In subsequent essays, I will further develop the regulatory tool of post facto adequation as an effective response to the inferential opacity inherent to machine learning algorithms, and offer more guidance on how it can serve as a workable standard for the purposes of Articles 13 and 14.

[1] An extended review of seventeen different AI ethics documents, analysing them against normative and applied ethical principles, which I led and published in 2020, is available here.

[2] From being an abstruse mathematical term used primarily by computer scientists, the word ‘algorithm’ has quickly become part of mainstream discourse. Donald Knuth defined it as “not a formula, but rather a word computer science needed to describe a strategy or an abstract method for accomplishing a task with a computer.” An algorithm is, in effect, a sequence of instructions that tells a computer what to do, typically by switching billions of tiny transistors on and off.

[3] Conventional algorithms involve an input and a hand-written procedure that processes the input data to produce an output. In very crude terms, machine learning algorithms take the data and the desired output as their input, and their output is the algorithm that can turn that input into the desired output.
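A toy sketch of this contrast, using hypothetical data and scikit-learn purely for illustration:

```python
# Toy illustration of the contrast described above; the 'risk' rule,
# the data, and the use of scikit-learn are all hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Conventional algorithm: a human writes the rule that maps input to output.
def is_high_risk(income: float, debt: float) -> bool:
    return debt > 0.5 * income  # hand-coded threshold

# Machine learning: data and desired outputs go in; the fitted model
# (in effect, the rule) comes out.
X = [[40_000, 30_000], [90_000, 10_000], [50_000, 45_000], [120_000, 20_000]]
y = [1, 0, 1, 0]  # desired outputs supplied as part of the input
learned_rule = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(learned_rule.predict([[60_000, 40_000]]))
```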

[4] In another essay, I critically analyse the transparency ideal, and whether it has some inherent value on its own, in the absence of the ends it strives for.

[5] While articulating this very general definition, Doshi-Velez and Kim are equally accepting of the lack of consensus on what interpretability may translate into in machine learning applications.

[6] It should be noted that Post facto Adequation is completely distinct from post-hoc explainability approaches, which are popular in the XAI literature.

[7] A few detailed examples of how this approach may be used in governance of use of machine learning in public functions are available here.

[8] For a lively debate on the right to explanation, please refer to the back and forth in the following papers: B Goodman and S Flaxman, “European Union Regulations on Algorithmic Decision-Making and a Right to Explanation” (2016) ICML Workshop on Human Interpretability in Machine Learning, arXiv:1606.08813 (v3), and (2017) 38 AI Magazine 50; S Wachter, B Mittelstadt and L Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (2017) 7 IDPL 76; and AD Selbst and J Powles, “Meaningful information and the right to explanation” (2017) 7 IDPL 233, https://doi.org/10.1093/idpl/ipx022.