Overview

Algorithms and Artificial Intelligence now play a key role in many technological systems that control or affect various aspects of our lives. These systems make decisions that affect us, yet there is usually little understanding of why a decision was taken, whether humans were involved in it, and ultimately who bears responsibility for it.

Against this background, PLEAD brings together an interdisciplinary team of technologists, legal experts, commercial companies and public organisations to investigate how provenance can help explain the logic that underlies automated decision-making, to the benefit of data subjects, and help data controllers demonstrate compliance with the law.

Key Findings

01

Explainability by Design is a generic socio-technical methodology characterised by proactive measures that build explanations of decision making into an application's design, rather than reactive measures that attempt to bolt explanation capability onto an application after the fact. It is organised in three phases, each containing concrete steps to guide an organisation in identifying, designing, and deploying explanation capabilities in its applications that address clearly defined regulatory and business requirements.
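
As a minimal illustration of this proactive stance (a sketch, not the project's reference implementation), the Python below captures the provenance needed for a future explanation at the moment a decision is made; the function, rule, and record names are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    """Provenance captured at decision time (hypothetical schema)."""
    inputs: dict[str, Any]        # the data the decision was based on
    outcome: str                  # the decision itself
    rule_ids: list[str]           # which rules or model features fired
    human_involved: bool          # was a human in the loop?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide_and_record(application: dict[str, Any]) -> tuple[str, DecisionRecord]:
    """Make a decision AND record the provenance needed to explain it.

    Explanation capability is designed in: every decision leaves behind a
    record sufficient to answer "why?" later, instead of the rationale
    being reconstructed after the fact.
    """
    # Hypothetical rule: approve if income comfortably covers the requested limit.
    approved = application["income"] >= 3 * application["requested_limit"]
    outcome = "approved" if approved else "declined"
    record = DecisionRecord(
        inputs=dict(application),
        outcome=outcome,
        rule_ids=["income >= 3 * requested_limit"],
        human_involved=False,
    )
    return outcome, record
```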

02

An explanation taxonomy that systematically characterises explanations derived from explanation requirements. The taxonomy follows a dimensional approach, with each dimension describing a different component of a comprehensive explanation. It is policy-agnostic and regulation-agnostic, and it can accommodate explanation requirements arising from law, policy, or business needs.
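
To make the dimensional approach concrete, here is a sketch of how one explanation requirement might be characterised dimension by dimension. The dimensions shown (source, audience, trigger, content) are illustrative placeholders chosen for this example, not the taxonomy's actual dimensions.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative dimensions only; the PLEAD taxonomy defines its own set.
class Audience(Enum):
    DATA_SUBJECT = "data subject"
    REGULATOR = "regulator"
    INTERNAL_STAFF = "internal staff"

class Trigger(Enum):
    ON_REQUEST = "on request"
    WITH_DECISION = "automatically with each decision"

@dataclass(frozen=True)
class ExplanationRequirement:
    """One explanation requirement characterised along several dimensions."""
    source: str          # where the requirement comes from (law, policy, business)
    audience: Audience   # who the explanation is for
    trigger: Trigger     # when the explanation must be produced
    content: str         # what the explanation must convey

# A regulatory requirement and a business requirement expressed in the same form.
legal_req = ExplanationRequirement(
    source="GDPR Art. 22",
    audience=Audience.DATA_SUBJECT,
    trigger=Trigger.ON_REQUEST,
    content="the logic involved in the automated decision",
)
business_req = ExplanationRequirement(
    source="customer-service policy",
    audience=Audience.INTERNAL_STAFF,
    trigger=Trigger.WITH_DECISION,
    content="which checks the application failed",
)
```

Because both requirements take the same dimensional form, requirements originating in law, policy, and business needs can be managed uniformly.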

03

The application of the Explainability-by-Design methodology to two decision-making scenarios: credit card applications and school placement. In particular, we identified and categorised their explanation requirements using the above taxonomy in the context of the GDPR and other applicable regulations. The technical design steps of the methodology were then employed to implement those requirements, resulting in an explanation service deployed as an online demonstrator of the methodology.
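
For illustration only, an end user's client might obtain an explanation for a recorded decision from such a service as follows; the endpoint and response shape here are assumptions, not the demonstrator's actual API.

```python
import json
from urllib.request import urlopen

# Hypothetical base URL for the explanation service.
BASE_URL = "https://explanations.example.org"

def fetch_explanation(decision_id: str) -> str:
    """Ask the explanation service to explain one recorded decision."""
    with urlopen(f"{BASE_URL}/decisions/{decision_id}/explanation") as resp:
        payload = json.load(resp)
    # The service is assumed to return explanation text composed from the
    # provenance recorded when the decision was made.
    return payload["explanation"]
```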

04

DXPlain: a distributed Web protocol for delivering explanations to end users from disparate services, aggregated dynamically across a distributed data supply chain. The protocol ensures that each explanation component is retrieved directly from its supplier and is never made available to the other parties in the supply chain, protecting commercially sensitive information from exposure to unintended parties.
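
A minimal sketch of that guarantee, under stated assumptions: each explanation component is fetched directly from a hypothetical supplier endpoint, and the components are combined only at the end user's client, so no supplier or intermediary ever holds another party's component. This illustrates the idea; it is not the DXPlain specification.

```python
import json
from urllib.request import urlopen

# Hypothetical supplier endpoints for one decision's explanation components;
# in a real deployment these would be discovered dynamically along the
# data supply chain.
COMPONENT_URLS = [
    "https://bureau.example.com/dxplain/components/decision-42",
    "https://lender.example.com/dxplain/components/decision-42",
]

def fetch_component(url: str) -> dict:
    """Retrieve one explanation component directly from its supplier."""
    with urlopen(url) as resp:
        return json.load(resp)

def aggregate_explanation(urls: list[str]) -> list[dict]:
    """Assemble the full explanation on the end user's client.

    Each component travels straight from its supplier to the client and is
    combined only here, so suppliers never see each other's components and
    commercially sensitive content stays with the party that produced it.
    """
    return [fetch_component(url) for url in urls]
```

Keeping aggregation at the client, rather than at any single supplier, is what confines each commercially sensitive component to its producer and the end user.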

05

Two user studies were run with staff at Experian and Southampton City Council to compare current practice with the PLEAD explanations generated for the two scenarios above, respectively. The studies confirmed a gap in the transparency and monitoring of current decision making. Participants viewed the PLEAD explanations as a potential way to address this gap while improving the personalisation and user-friendliness of explanation delivery.