October 28, 2022
Transparency in EU’s AI Act
The European Commission’s proposal for a Regulation on Artificial Intelligence (AI Act) clearly articulates transparency requirements in multiple forms.
Despite this emphasis, there is little clarity about how algorithmic transparency will play out in practice. While having transparency obligations is vital, the draft legislation remains silent on the extent of transparency that will be required of AI systems and on what their ‘interpretability’ to users will mean. The legislative language moves between several meanings of transparency. Article 13 ties transparency to interpretability, a much contested concept in the Explainable AI (XAI) discipline: one source defines it as the AI system’s ability to explain, or to present meaning, in terms ‘understandable’ to an individual, while another relates it to traceability from input data to output. Article 14 sidesteps the XAI literature altogether and remains agnostic about the specific form transparency takes, so long as it enables human supervision. Article 52 adopts a particularly narrow version of transparency, one that only makes visible the fact that an AI system exists. In a new policy proposal, I argue that post facto adequation is a suitable standard for setting transparency thresholds under the AI Act.
In January 2019, I co-presented a regulatory proposal called post facto adequation at FAT/Asia. I argued that where decisions are made by an AI system, the system must offer sufficient opportunity for human supervision, such that any stakeholder in the system’s lifecycle can demand an account of how a human analysis adequates to the insights of a machine learning algorithm. My standard for sufficient opportunity for human supervision required that the AI system provide sufficient information about the model and the data analysed, such that a human supervisor can apply analogue modes of analysis to that information and conduct an independent assessment.
Article 13’s requirement that the operation of AI be ‘sufficiently transparent’ to enable users to interpret the system’s output has the potential to operationalise a standard which can address individual use cases of AI. The proof of the pudding, however, lies in how the standard of ‘sufficiently transparent’ is defined for models intended to achieve algorithmic transparency. Similarly, Article 14 sets a requirement for human supervision without specifying how that requirement may be met. In this policy proposal, I argue that post facto adequation offers a suitable regulatory standard. It draws on standards of due process and accountability developed in administrative law, where decisions taken by public bodies must be supported by recorded justifications. Where an AI system’s decision-making is too opaque to permit such transparency, the system must be built so that it flags relevant information for independent human assessment and verification.
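To make the idea of flagging information for independent human assessment more concrete, the sketch below shows one way a system could record what post facto adequation asks for. It is purely illustrative and not part of the AI Act or the policy proposal: the DecisionRecord structure, its fields, and the flag_for_review helper are hypothetical names chosen for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    """Hypothetical audit record an AI system could emit for each decision,
    so a human supervisor can later conduct an independent assessment."""
    model_version: str               # which model produced the decision
    input_summary: dict              # the data the model actually analysed
    output: Any                      # the decision or score produced
    salient_factors: list            # information flagged as relevant for human review
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def flag_for_review(record: DecisionRecord, reason: str) -> dict:
    """Package a decision record, and the reason it needs review, for a human supervisor."""
    return {"reason": reason, "record": record}

# Example: a credit-scoring decision logged for post facto human assessment.
record = DecisionRecord(
    model_version="credit-risk-v3",
    input_summary={"income": 42000, "employment_years": 4, "prior_defaults": 0},
    output={"decision": "reject", "score": 0.38},
    salient_factors=["income below segment median", "short employment history"],
)
case = flag_for_review(record, reason="adverse decision; applicant sought an explanation")
```

The point of the sketch is not the particular fields but the design choice they embody: the system records the model, the data analysed, the output, and the factors it treated as relevant, so that a human supervisor can later apply analogue modes of analysis to that information and reach an independent judgement.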
The full policy proposal can be accessed here.