CIPL Publishes Report on Tools for Accountable AI

On February 27, 2020, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP published the second report in its project on Artificial Intelligence (“AI”) and Data Protection: Delivering Sustainable AI Accountability in Practice.

The second report, “Hard Issues and Practical Solutions,” aims to provide insights into emerging solutions for delivering trusted and responsible AI. In particular, the report:

  • Dives deeper into some of the tensions that exist between AI and data protection principles (e.g., issues of fairness, transparency, purpose specification and use limitation, and data minimization) and puts forward concrete approaches to mitigating them;
  • Examines future AI solutions, including the need for technology-neutral solutions, a risk-based approach to AI, and data stewardship and organizational accountability;
  • Outlines best practices and the wide range of tools that organizations are currently developing to enable accountable and human-centric AI, including AI Data Protection Impact Assessments, Data Review Boards, and accountable AI frameworks; and
  • Maps best practices in AI governance to CIPL’s Accountability Wheel, a data privacy accountability framework comprising the seven essential elements of accountability (i.e., leadership and oversight, risk assessment, policies and procedures, transparency, training and awareness, monitoring and verification, and response and enforcement).

Some of the key findings of the report include:

  • While defining and implementing fairness is a challenge, it is also an opportunity. AI can ultimately help facilitate the goals of fairness—either by helping to illuminate and mitigate historical biases or by providing more consistent and rational decision making;
  • Transparency is a broad concept in the context of AI and includes explainability, understandability, traceability, articulation of benefits and communication of rights, and avenues for redress;
  • Organizations and regulators must balance meaningful purpose specification and use limitation against the flexibility to react to new inferences from old or different data sets;
  • Distinguishing between training and deployment phases for purposes of data minimization could help balance innovation while fostering better data protection for individuals;
  • There is a need to develop a better understanding of harms, particularly the potential non-material harms that may occur when collecting and processing data. Analyzing the risk of deploying new models requires understanding how to assess and measure harm and its likelihood of materializing in these new contexts, ranging from monetary harms to nonphysical harms, such as privacy, security, and discriminatory impacts, among others;
  • Efficient and speedy redress is likely to assume new importance in the effective governance of AI; and
  • Accountability frameworks, including on the basis of the CIPL Accountability Wheel, can be used to help organizations develop, deploy and organize robust and comprehensive data protection measures in the AI context and also to demonstrate accountability in AI.

Read the full report for more detail on these highlights and all of the other key findings.
