CIPL Files Response to NTIA’s Request for Comment on AI Accountability Policy

On June 12, 2023, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted a response to the U.S. National Telecommunications and Information Administration’s (“NTIA’s”) Request for Comments (“RFC”) on Artificial Intelligence (“AI”) Accountability. The NTIA’s RFC solicited comments on AI accountability measures and policies that can demonstrate trustworthiness of AI systems.

In its response to the NTIA, CIPL highlighted how organizational accountability mechanisms are essential to effective digital policy and regulation, including for responsible and trustworthy AI development and deployment. Specifically, accountability mechanisms allow organizations to demonstrate to their boards, customers, the public and regulators that a product or service meets specific criteria and is trustworthy. They also enable organizations to implement principle- and outcome-based legal requirements through measurable and demonstrable concrete steps and controls. There also is evidence that accountability mechanisms play an important role in providing legal certainty and business confidence, including in business-to-business contexts.

Further, CIPL highlighted the benefits of taking a risk-based and outcomes-based approach to AI governance, including through potential federal AI and privacy laws. This approach should be backed by innovative regulatory oversight and co-regulatory instruments that:

  • Rely on impact assessments performed by organizations to trigger the application of the law. The assessments should consider the context and impact of a proposed AI use, rather than the sector in which it is used or the type of technology involved. The regulatory framework would provide illustrations of rebuttable presumptions of high risk, rather than rigid pre-defined classifications. Organizations would assess the overall output and impact of the AI application, including its benefits and potential reticence risk (i.e., the risk of not deploying AI), rather than focusing only on risk.
  • Foster innovation through accountable practices of organizations. Rather than imposing prescriptive and indiscriminate requirements, the regulatory approach should set forth risk-based accountability requirements and outcomes that organizations should achieve through concrete, demonstrable and verifiable risk-based accountability measures.
  • Enable consistent and modern approaches to regulatory oversight. This approach should be complemented by a consistent scheme of voluntary, but enforceable, codes of conduct, certification and labelling, which should be designed through consultation with stakeholders. Regulatory sandboxes and policy prototyping can be useful for enabling regulatory iteration in response to technological innovations.

Read CIPL’s full response to the NTIA RFC on AI Accountability.

CIPL has worked on accountable AI since 2018 and has published various reports and white papers on topics at the intersection of data protection and artificial intelligence. In 2023, CIPL launched a project to research organizations’ experiences using CIPL and other accountability frameworks to guide their AI accountability programs, to collect best practices, promote their wider adoption and support future co-regulatory mechanisms. CIPL intends to publish this research later in 2023.
