Artificial intelligence (AI) is among the fastest-growing emerging digital technologies. It helps businesses streamline operational processes and enhance the value of goods and services delivered to end users and customers. Because AI is a data-intensive technology, policymakers are seeking ways to mitigate risks related to AI systems that process personal data, and technology lawyers are assisting with compliance efforts.
Recently, the UK Information Commissioner's Office (ICO) published its Guidance on AI and Data Protection. The guidance follows the ICO's 2018-2021 technology strategy publication, which identified AI as one of its strategic priorities.
The AI guidance contains a framework to guide organizations using AI systems and aims to:
- Provide auditing tools and procedures the ICO will use to assess the compliance of organizations using AI; and
- Guide organizations on AI and data protection practices.
AI and Data Protection Guidance Purpose and Scope
The guidance solidifies the ICO's commitment to the development of AI and supplements other ICO resources for organizations, such as the big data, AI, and machine learning report and the guidance on explaining decisions made with AI, which the ICO produced in collaboration with the Alan Turing Institute in May 2020.
In the AI framework, the ICO adopts an academic definition of AI, which, in the data protection context, refers to 'the theory and development of computer systems able to perform tasks normally requiring human intelligence'. While the guidance focuses on machine learning-based AI systems, it may nonetheless apply to non-machine learning systems that process personal data.
The guidance seeks to answer three questions. First, do people understand how their data is being used? Second, is data being used fairly, lawfully and transparently? Third, how is data being kept secure?
To answer these questions, the ICO takes a risk-based approach to address different data protection principles, including transparency, accountability, and fairness. The framework outlines measures that organizations should consider when designing AI regulatory compliance programs. The applicable laws driving this compliance are the UK Data Protection Act 2018 (DPA 2018) and the General Data Protection Regulation (GDPR).
The ICO details key actions companies should take to ensure their data practices relating to AI systems comply with the GDPR and UK data protection laws. The framework is divided into four parts, focusing on (1) AI-specific implications of the accountability principle; (2) the lawfulness, fairness, and transparency of processing personal data in AI systems; (3) security and data minimization in AI systems; and (4) compliance with individual rights, including rights relating to solely automated decisions.
AI Best Practices
This section summarizes selected AI best practices outlined in the guidance organized around the four data protection areas. When working towards AI legal compliance, organizations should work with experienced lawyers who understand AI technologies to address the following controls and practices:
Part One: Accountability Principle
- Build a diverse, well-resourced team to support AI governance and risk management strategy
- Work with legal counsel to determine the company's compliance obligations while balancing individuals' rights and freedoms
- Conduct a Data Protection Impact Assessment (DPIA) or other impact assessments where appropriate
- Understand the organization's role (controller or processor) when using AI systems
Part Two: Lawfulness, Fairness, and Transparency of Processing Personal Data
- Assess statistical accuracy and effectiveness of AI systems in processing personal data
- Ensure all people and processes involved understand the statistical accuracy requirements and measures
- Evaluate tradeoffs and expectations
- Adopt common terminology that staff can use to communicate about the statistical models
- Address risks of bias and discrimination and work with legal to build into policies
Part Three: Principles of Security and Data Minimization in AI Systems
- Assess whether trained machine learning models contain personally identifiable information
- Assess the potential uses of trained machine learning models
- Monitor queries from API users
- Consider the risk of 'white box' attacks
- Identify and process the minimum amount of data required to achieve the organization’s purpose
Part Four: Compliance with Individual Rights, Including Rights Relating to Solely Automated Decisions
- Implement reasonable measures to respond to individuals' data rights requests
- Maintain appropriate human oversight for automated decision-making
The ICO anticipates developing a toolkit to complement the AI guidance. In the meantime, the guidance's key takeaways are clear: organizations should understand the applicable data protection laws and assemble the right team to address these requirements.
Building privacy and security early into the development of AI can provide long-term efficiencies and address the growing focus of regulatory authorities on ensuring that these technologies incorporate data protection principles. By working towards robust AI compliance, organizations can also gain a competitive advantage. Beckage's lawyers, many of whom are also technologists and have been trained by MIT on the business use of AI, have been quoted in national media on AI topics. We stand ready to answer any of your questions.
*Attorney advertising. Prior results do not guarantee future outcomes.