
Accountability and the Use of Artificial Intelligence


As artificial intelligence (“AI”) and automated decision-making systems make their way into every corner of society – from businesses and schools to government agencies – concerns about responsible use of the technology and accountability for its outcomes are on the rise.

The United States has long been at the forefront of technological innovation, and our government policies have helped us remain there.  To that end, on February 11, 2019, President Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence (No. 13,859).  See Exec. Order No. 13,859, 3 C.F.R. 3967.  As part of this Executive Order, the “American AI Initiative” was launched with five guiding principles:

  1. Driving technological breakthroughs; 
  2. Driving the development of appropriate technical standards; 
  3. Training workers with the skills to develop and apply AI technologies; 
  4. Protecting American values, including civil liberties and privacy, and fostering public trust and confidence in AI technologies; and
  5.  Protecting U.S. technological advantages in AI, while promoting an international environment that supports innovation. Id. at § 1. 

Finally, the Executive Order tasked the National Institute of Standards and Technology (“NIST”) of the U.S. Department of Commerce with creating a plan for the development of technical standards to support reliable, robust, and trustworthy AI systems.  Id. at § 6(d).  To that end, NIST released its plan for federal engagement in August 2019.  See Nat’l Inst. of Standards & Tech., U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (2019).

While excitement over the use of AI was brewing in the executive branch, the legislative branch was concerned with its accountability: on April 10, 2019, the Algorithmic Accountability Act (“AAA”) was introduced in Congress.  See Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).  The AAA covered businesses that:

  1. Made more than $50,000,000 per year;
  2. Held data on more than 1,000,000 customers; or
  3. Acted as data brokers that buy and sell personal information.  Id. at § 2(5).

The AAA would have required businesses to conduct “impact assessments” on their “high-risk” automated decision systems in order to evaluate the impacts of each system’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security.”  Id. at §§ 2(2), 3(b).  These impact assessments would have had to be performed “in consultation with external third parties, including independent auditors and independent technology experts.”  Id. at § 3(b)(1)(C).  Following an impact assessment, the AAA would have required businesses to reasonably address its results in a timely manner.  Id. at § 3(b)(1)(D).

It wasn’t just the federal government that was concerned about the use of AI in business: on May 20, 2019, the New Jersey Algorithmic Accountability Act (“NJ AAA”) was introduced in the New Jersey General Assembly.  See New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).  The NJ AAA was very similar to the AAA in that it would have required businesses in the state to conduct impact assessments on “high-risk” automated decisions.  These “automated decision system impact assessments” would have required an evaluation of each system’s development, “including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security,” as well as a cost-benefit analysis of the AI in light of its purpose.  Id. at § 2.  The NJ AAA would have also required businesses to work with independent third parties, record any bias or threat to the security of consumers’ personally identifiable information discovered through the impact assessments, and provide any other information required by the Director of the Division of Consumer Affairs in the New Jersey Department of Law and Public Safety.  Id.

While the aforementioned legislation appears to have stalled, we nevertheless anticipate that both federal and state legislators will once again take up the task of both encouraging and regulating the use of AI in business as the COVID-19 pandemic subsides.  Our team at Beckage includes attorneys focused on technology, data security, and privacy who have the experience to advise your business on best practices for the adoption of AI and automated decision-making systems.

*Attorney Advertising. Prior results do not guarantee future outcomes. 
