Accountability and the Use of Artificial Intelligence

As artificial intelligence (“AI”) and automated decision-making systems make their way into every corner of society – from businesses and schools to government agencies – concerns about responsible use and accountability are on the rise.

The United States has always been at the forefront of technological innovation, and our government policies have helped us remain there.  To that end, on February 11, 2019, President Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence (No. 13,859).  See Exec. Order No. 13,859, 84 Fed. Reg. 3967 (2019).  As part of this Executive Order, the “American AI Initiative” was launched with five guiding principles:

  1. Driving technological breakthroughs; 
  2. Driving the development of appropriate technical standards; 
  3. Training workers with the skills to develop and apply AI technologies; 
  4. Protecting American values, including civil liberties and privacy, and fostering public trust and confidence in AI technologies; and
  5. Protecting U.S. technological advantages in AI, while promoting an international environment that supports innovation. Id. at § 1.

The Executive Order also tasked the National Institute of Standards and Technology (“NIST”) of the U.S. Department of Commerce with creating a plan for the development of technical standards to support reliable, robust, and trustworthy AI systems.  Id. at § 6(d). To that end, NIST released its Plan for Federal Engagement in Developing Technical Standards in August 2019.  See Nat’l Inst. of Standards & Tech., U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (2019).

While excitement over the use of AI was brewing in the executive branch, the legislative branch was concerned with its accountability: on April 10, 2019, the Algorithmic Accountability Act (“AAA”) was introduced in Congress.  See Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).  The AAA covered businesses that:

  1. Made more than $50,000,000 per year;
  2. Held data for greater than 1,000,000 customers; or
  3. Acted as a data broker to buy and sell personal information.  Id. at § 2(5). 

The AAA would have required businesses to conduct “impact assessments” on their “high-risk” automated decision systems in order to evaluate the impacts of the system’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security”.  Id. at §§ 2(2) and 3(b).  These impact assessments would have been required to be performed “in consultation with external third parties, including independent auditors and independent technology experts”.  Id. at § 3(b)(1)(C).  Following an impact assessment, the AAA would have required that businesses reasonably address the results of the impact assessment in a timely manner.  Id. at § 3(b)(1)(D).

It was not just the federal government that was concerned about the use of AI in business: on May 20, 2019, the New Jersey Algorithmic Accountability Act (“NJ AAA”) was introduced in the New Jersey General Assembly.  The NJ AAA was very similar to the AAA in that it would have required businesses in the state to conduct impact assessments on “high-risk” automated decision systems. See New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).  These “automated decision system impact assessments” would have required an evaluation of the system’s development, “including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security,” as well as a cost-benefit analysis of the AI in light of its purpose.  Id. at § 2.  The NJ AAA would have also required businesses to work with independent third parties, record any bias or threat to the security of consumers’ personally identifiable information discovered through the impact assessments, and provide any other information required by the New Jersey Director of the Division of Consumer Affairs in the New Jersey Department of Law and Public Safety.  Id.

While the aforementioned legislation appears to have stalled, we nevertheless anticipate that both federal and state legislators will once again take up the task of both encouraging and regulating the use of AI in business as the COVID-19 pandemic subsides.  Our team at Beckage includes attorneys focused on technology, data security, and privacy who have the experience to advise your business on best practices for the adoption of AI and automated decision-making systems.

*Attorney Advertising. Prior results do not guarantee future outcomes. 

AI Hiring Algorithms Present Big Questions About Accountability and Liability

As artificial intelligence (AI) becomes an increasingly prevalent human resources tool, the algorithms powering those hiring and staffing decisions have come under increased scrutiny for their potential to perpetuate bias and discrimination.

Are There Any Federal Laws or Regulations Governing the Use of AI in Hiring?

Under Title VII of the Civil Rights Act of 1964, the United States Equal Employment Opportunity Commission (“EEOC”) is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of their membership in a protected class.  For decades, attorneys have relied on the Employment Tests and Selection Procedures guidance jointly issued by the Civil Service Commission, Department of Justice, Department of Labor, and EEOC.  See generally 28 CFR § 50.14; see also Fact Sheet on Employment Tests and Selection Procedures, EEOC.  Nevertheless, the current form of the Employment Tests and Selection Procedures fails to provide any guidance on the use of AI tools in the hiring process.

That isn’t to say federal regulators and legislators aren’t keen on regulating this area.  On December 8, 2020, ten United States Senators sent a joint letter to the EEOC regarding the EEOC’s authority to investigate bias in AI-driven hiring technologies.  In relevant part, the letter poses three questions:

  1. Can the EEOC request access to “hiring assessment tools, algorithms, and applicant data from employers or hiring assessment vendors and conduct tests to determine whether the assessment tools may produce disparate impacts”?
  2. If the EEOC were to conduct such a study, could it publish its findings in a public report?
  3. What additional authority and resources would the EEOC need to proactively study and investigate these AI hiring assessment technologies?  Id.

As of the current date, the EEOC has yet to respond to the letter.  Nevertheless, given the questions above, the current political climate, and the lack of current guidance from the EEOC, we anticipate future guidance, regulation, and potential enforcement actions in this area. 
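To make the first question above concrete, the kind of disparate-impact testing the Senators describe could, in a very simplified form, resemble the “four-fifths rule” comparison of selection rates drawn from the Uniform Guidelines on Employee Selection Procedures.  The sketch below is our own illustration in Python, not an EEOC-prescribed method, and the data and group labels are hypothetical.

```python
# Hypothetical illustration: compare selection rates produced by a hiring tool
# across groups and flag any group whose rate falls below four-fifths (80%) of
# the highest group's rate, the rule of thumb in the Uniform Guidelines.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, was_selected) tuples from a tool's output."""
    applied, selected = Counter(), Counter()
    for group, was_selected in records:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}

def adverse_impact_ratios(records):
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical output of an AI screening tool: group A selected at 60%, group B at 35%.
sample = ([("A", True)] * 60 + [("A", False)] * 40 +
          [("B", True)] * 35 + [("B", False)] * 65)
print(adverse_impact_ratios(sample))  # group B's ratio is ~0.58, well below 0.8
```

A result like this would not by itself establish unlawful discrimination, but it is the sort of statistical signal that invites further scrutiny of whether the tool is job related and consistent with business necessity.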

How Are States Handling AI Hiring Bias? 

Illinois was the first state to legislate on the use of AI in hiring.  On August 9, 2019, Illinois enacted the Artificial Intelligence Video Interview Act (“AIVIA”), imposing strict limitations on employers who use AI to analyze candidate video interviews.  See 820 ILCS 42 et seq.  Under AIVIA, employers must:

  1. Notify applicants that AI will be utilized during their video interviews;
  2.  Obtain consent to use AI in each candidate’s evaluation;  
  3. Explain to the candidates how the AI works and what characteristics the AI will track with regard to their fitness for the position; 
  4. Limit sharing of the video interview to those who have the requisite expertise to evaluate the candidate; and
  5. Comply with a candidate’s request to destroy his or her video within 30 days.  Id.

Illinois was quickly followed by Maryland, which on May 11, 2020, enacted legislation prohibiting an employer from using certain facial recognition services during a candidate’s interview for employment unless the candidate expressly consents.  See Md. Labor and Employment Code Ann. § 3-717.  The Maryland law specifically requires the candidate to consent to the use of certain facial recognition service technologies during an interview by signing a waiver which contains:

  1. The candidate’s name;
  2. The date of the interview;
  3. That the candidate consents to the use of facial recognition during the interview; and
  4. That the candidate has read the waiver.  Id.

As with AIVIA, the Maryland law is too new to provide much insight into how it will be interpreted or enforced.

There are a number of other jurisdictions with bills in different stages of progress.  On February 20, 2020, a bill was introduced into the California legislature which would limit the liability of an employer or a purveyor of AI-assisted employment decision-making software under certain circumstances.  See 2019 Bill Text CA S.B. 1241.  This California bill “would create a presumption that an employer’s decision relating to hiring or promotion based on a test or other selection procedure is not discriminatory, if the test or procedure meets specified criteria, including, among other things, that it is job related and meets a business necessity” and “that the test or procedure utilizes pretested assessment technology that, upon use, resulted in an increase in the hiring or promotion of a protected class compared to prior workforce composition.”  Id. The bill would also require the employer to keep records of the testing or procedure and submit them for review to the California Department of Fair Employment and Housing, upon request, in order to qualify for the presumption and limit their liability.  Id.

Not to be outdone, a bill was introduced into the New York City Council on February 27, 2020 with the purpose of regulating the sale of automated employment decision-making tools.  See Int. No. 1894.  The New York City Council bill broadly defines an automated employment decision-making tool as “any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”  Id.  The bill seeks to prohibit the sale of automated employment decision-making tools if they were not the subject of an audit for bias in the year prior to sale, were not sold with a yearly bias audit service at no additional cost, and were not accompanied by a notice that the tool is subject to the provisions of the New York City Council’s bill.  Id.  The bill would require any person who uses automated employment assessment tools for hiring and other employment purposes to disclose to candidates, within 30 days, when such tools were used to assess their candidacy for employment, and the job qualifications or characteristics for which the tool was used to screen.  Id.  Finally, the bill is not without bite, as violators are subject to “a civil penalty of not more than $500 for that person’s first violation and each additional violation occurring on the same day as the first violation, and not less than $500 nor more than $1,500 for each subsequent violation.”  Id.

What Can My Business Do Now to Prepare for Potential Liability Related to the Use of AI in Hiring?

As the current political and legal landscape continues to be in flux, one of the best things your business can do is stay on top of current statutes.  Your business could also audit both internal and external use of AI in hiring to validate and confirm the absence of bias in the system; however, testing external systems may require your vendors to open their proprietary technology and information to their customers, something that most are hesitant to do.  Finally, your business should consider conducting a thorough review of any and all indemnification provisions in its vendor agreements to see how risk might be allocated between the parties.

Beckage is a law firm focused on technology, data security, and privacy. Beckage has an experienced team of attorneys and technologists who can advise your business on the best practices for limiting its liability related to the use of AI in hiring.

*Attorney Advertising. Prior results do not guarantee future outcomes.

Artificial Intelligence Best Practices: The UK ICO AI and Data Protection Guidance

Artificial intelligence (AI) is among the fastest-growing emerging digital technologies. It helps businesses streamline operational processes and enhance the value of goods and services delivered to end-users and customers. Because AI is a data-intensive technology, policymakers are seeking ways to mitigate risks related to AI systems that process personal data, and technology lawyers are assisting with compliance efforts.

Recently, the UK Information Commissioner’s Office (ICO) published its Guidance on AI and Data Protection. The guidance follows the ICO’s 2018-2021 technology strategy publication identifying AI as one of its strategic priorities.

The AI guidance contains a framework to guide organizations using AI systems and aims to:

  • Provide auditing tools and procedures the ICO will use to assess the compliance of organizations using AI; and  
  • Guide organizations on AI and data protection practices.

AI and Data Protection Guidance Purpose and Scope

The guidance solidifies the ICO’s commitment to the development of AI and supplements other resources for organizations, such as the big data, AI, and machine learning report and the guidance on explaining decisions made with AI, which the ICO produced in collaboration with the Alan Turing Institute in May 2020.

In the AI framework, the ICO adopts an academic definition of AI, which in the data protection context, refers to ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence’. While the guidance focuses on machine-learning based AI systems, it may nonetheless apply to non-machine learning systems that process personal data.

The guidance seeks to answer three questions. First, do people understand how their data is being used? Second, is data being used fairly, lawfully and transparently? Third, how is data being kept secure?

To answer these questions, the ICO takes a risk-based approach to address different data protection principles, including transparency, accountability, and fairness. The framework outlines measures that organizations should consider when designing artificial intelligence regulatory compliance. The applicable laws driving this compliance are the UK Data Protection Act 2018 (DPA 2018) and the General Data Protection Regulation (GDPR).

The ICO details key actions companies should take to ensure their data practices relating to AI systems comply with the GDPR and UK data protection laws. The framework is divided into four parts focusing on (1) AI-specific implications of the accountability principle, (2) the lawfulness, fairness, and transparency of processing personal data in AI systems, (3) security and data minimization in AI systems, and (4) compliance with individual rights, including rights relating to solely automated decisions.

AI Best Practices

This section summarizes selected AI best practices outlined in the guidance organized around the four data protection areas. When working towards AI legal compliance, organizations should work with experienced lawyers who understand AI technologies to address the following controls and practices:

Part One: Accountability Principle

  • Build a diverse, well-resourced team to support AI governance and risk management strategy
  • Determine with legal counsel the company’s compliance obligations while balancing individuals’ rights and freedoms
  • Conduct a Data Protection Impact Assessment (DPIA) or other impact assessments where appropriate
  • Understand the organization’s role (controller or processor) when using AI systems

Part Two: Lawfulness, Fairness, and Transparency of Processing Personal Data

  • Assess statistical accuracy and effectiveness of AI systems in processing personal data
  • Ensure all people and processes involved understand the statistical accuracy requirements and measures
  • Evaluate tradeoffs and expectations
  • Adopt common terminology that staff can use to communicate about the statistical models
  • Address risks of bias and discrimination and work with legal counsel to build mitigations into policies

Part Three: Principles of Security and Data Minimization in AI Systems

  • Assess whether trained machine-learning models contain personally identifiable information
  • Assess the potential uses of trained machine-learning models
  • Monitor queries from API users (see the sketch after this list)
  • Consider ‘white box’ attacks
  • Identify and process the minimum amount of data required to achieve the organization’s purpose
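As one concrete example of the query-monitoring point above, a minimal sketch, assuming a simple log of (API key, timestamp) query events, might flag keys whose query volume is far above the norm, one naive signal of possible model-extraction or model-inversion probing against a deployed model. The threshold and window below are illustrative assumptions, not values drawn from the ICO guidance.

```python
# Minimal sketch (illustrative thresholds, not ICO-prescribed): flag API keys
# that issue an unusually large number of queries against a deployed model
# within a recent time window, a naive indicator of possible model extraction.
from collections import Counter
from datetime import datetime, timedelta

def flag_heavy_queriers(query_log, window=timedelta(hours=1), threshold=1000):
    """query_log: iterable of (api_key, datetime) tuples."""
    events = list(query_log)
    if not events:
        return set()
    cutoff = max(ts for _, ts in events) - window
    counts = Counter(key for key, ts in events if ts >= cutoff)
    return {key for key, n in counts.items() if n > threshold}

# Hypothetical usage:
now = datetime.now()
log = [("key-normal", now)] * 50 + [("key-suspect", now)] * 5000
print(flag_heavy_queriers(log))  # {'key-suspect'}
```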

Part Four: Compliance with Individual Rights, Including Rights Relating to Solely Automated Decisions

  • Implement reasonable measures to respond to individuals’ data rights requests
  • Maintain appropriate human oversight for automated decision-making

The ICO anticipates developing a toolkit to complement the AI guidance. In the meantime, the guidance’s key takeaways are that organizations should understand the applicable data protection laws and assemble the right team to address these requirements.

Building privacy and security early into the development of AI can provide long-term efficiencies in addressing regulatory authorities’ growing focus on ensuring that these technologies incorporate data protection principles.  By working towards robust AI compliance, organizations can also gain a competitive advantage.  Beckage’s lawyers, many of whom are also technologists and have been trained by MIT regarding business use of AI, have been quoted in national media about AI topics.  We stand ready to answer any of your questions.

*Attorney advertising. Prior results do not guarantee future outcomes.

The Risks Associated with Disinformation and Deep Fakes

Disinformation is the deliberate spreading of false information about individuals or businesses to influence public perceptions about people and entities.  Computer-manipulated media, known as deep fakes, heighten these dangers.  Deep fakes can be photos, videos, audio, and text manipulated by artificial intelligence (AI) to portray known persons acting or speaking in an embarrassing or incriminating way.  As deep fakes become more believable and easier to produce, disinformation is spreading at alarming rates.  Some risks that arise with disinformation include:

• Damage to Reputation

Disinformation campaigns target companies of all sizes with rumors, exaggerations, and lies that harm the business’s reputation, often for economic strategy and gain. Remedying reputational damage may require large sums of money, time, and other resources to prove the media was forged.

• Blackmail and Harassment

Photos, audio, and text manipulated by AI can be used to embarrass or extort business leaders, politicians, or public figures through the media.

• Social Engineering and Fraud

Deep fakes can be used to impersonate corporate executives and facilitate fraudulent wire transfers.  These tactics are a new variation of Business E-mail Compromise (BEC), traditionally understood as an impersonator gaining access to an employee’s or business associate’s email account with the intent to trick companies, employees, or partners into sending money to the infiltrator.

• Credential Theft and Cybersecurity Attacks

Hackers can also use sophisticated impersonation and social engineering to obtain information technology credentials from unknowing employees.  After gaining access, the hacker can steal company data and personally identifiable information or infect the company’s system with malware or ransomware.

• Fraudulent Insurance Claims

Insurance companies rely on digital graphics to settle claims, but photographs are becoming less reliable as evidence because they are easy to manipulate with AI.  Insurance companies will need to modify policies, training, practices, and compliance programs to mitigate risk and avoid fraud.

• Market Manipulation

Another way scammers seek to profit from disinformation is through fake news reports and social media schemes that use phony text and graphics to impact financial markets.  Traders who use social-post- and headline-driven algorithms to make market decisions may find themselves prey to these types of schemes.  As access to realistic but manipulated video and audio increases, these misperceptions and disinformation will become substantially more believable and difficult to correct.

• Falsified Court Evidence

Deep fakes also pose a threat to the authenticity of media evidence presented to the court.  If falsified video and audio files are entered as evidence, they have the potential to trick jurors and impact case outcomes.  Moving forward, courts will need to be trained to scrutinize potentially manipulated media.

• Cybersecurity Insurance

Cybersecurity insurance helps cover businesses from financial ruin but has not historically covered damages due to disinformation.  Private brands, businesses, and corporations should consider supplementing their current insurance policies to address disinformation to help protect themselves from risk.

Legal Options

There are legal avenues that can be pursued in responding to disinformation.  Deep fakes that falsely depict individuals in a demeaning or embarrassing way may give rise to claims for defamation, trade libel, false light, or intentional infliction of emotional distress, and, where the deep fake contains the image, voice, or likeness of a public figure, violation of the right of publicity.

Preventative Steps

Apart from understanding the risks associated with disinformation, companies can work to protect themselves from disinformation and deep fakes by:

1. Engaging in social listening to understand how a company’s brand is viewed by the public.

2. Assessing the risks associated with the business’ employed practices.

3. Registering the business trademark to have the protection of federal laws.

4. Having an effective incident response plan in the event of disinformation, deep fakes, or data breach to mitigate costs and prevent further loss or damage.

5. Communicating with social media platforms on which disinformation is being spread.

6. Speaking directly to the public, the media, and their customers via social media or other means.

7. Bringing a lawsuit if the business is being defamed or the market is being manipulated.

What To Do When Facing Disinformation

If a business is facing disinformation, sophisticated tech lawyers can assist in determining rights and technological solutions to mitigate harm.  Businesses are not defenseless in the face of disinformation and deep fakes but should expand their protective measures to mitigate the associated risks.

About Beckage

Beckage is a team of skillful technology attorneys who can help you protect your company from cyberattacks and defamation caused by disinformation and deep fakes. Our team of certified privacy professionals and lawyers can help you navigate the legal scope of the expanding field of disinformation.

*Attorney Advertising. Prior results do not guarantee similar outcomes.

Algorithmic Bias – What Businesses Need to Know

Algorithms, artificial intelligence (AI), “data scraping,” and other means of evaluating vast amounts of information about people have become widespread and are increasingly common tools in the hiring toolbox. As predicted, the use and scope of big data have grown exponentially over the past several years and continue to influence employment and hiring decisions. We are operating in a world where automated algorithms make impactful decisions that amplify the power of business. However, as with the use of any new technology, the legal landscape for businesses is rapidly changing, so it is critical to closely evaluate these tools before incorporating them into your hiring practices. Why? Because these tools may unintentionally discriminate against a protected group.

The challenge is straightforward: AI algorithms are based on datasets collected or selected by humans. That means those data sets are subject to intentional or unintentional bias, which could lead to biased algorithmic models. Examples of algorithmic bias have already started popping up in the news. In 2018, for example, a large company decided to scrap its proprietary hiring algorithm when it discovered the algorithm was biased in favor of men, simply because the algorithm was trained on patterns from resumes received over the past 10 years—resumes that were mostly from men because the tech industry skews male. So, rather than taking away the existing bias against women in technology, this company’s system amplified the bias.
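To see how this happens mechanically, consider the hedged sketch below. It is a synthetic illustration, not the company's actual system: even when the protected attribute is excluded from the model's inputs, a correlated “proxy” feature in the training data lets a classifier learn, and then reproduce, the historical skew.

```python
# Synthetic illustration of bias amplification: a model trained on historically
# skewed hiring decisions reproduces that skew on new applicants, even though
# the group attribute itself is never used as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)                      # true qualification
proxy = 0.8 * group + rng.normal(0, 0.5, n)      # resume feature that leaks group

# Historical decisions favored group B at equal skill levels (the embedded bias).
hired = (skill + 1.0 * group + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# New applicant pool in which both groups have identical skill distributions.
new_group = rng.integers(0, 2, n)
new_skill = rng.normal(0, 1, n)
new_proxy = 0.8 * new_group + rng.normal(0, 0.5, n)
preds = model.predict(np.column_stack([new_skill, new_proxy]))

for g, label in [(0, "A"), (1, "B")]:
    print(f"Predicted selection rate, group {label}: {preds[new_group == g].mean():.1%}")
# The proxy feature carries the historical preference forward, so group B is
# still selected at a substantially higher rate despite equal qualifications.
```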

How the EEOC is Handling Algorithmic Discrimination

In the face of increasingly broad use of algorithms, the Equal Employment Opportunity Commission (EEOC) is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of their membership in a protected class. The EEOC has begun to challenge the use of hiring and employment practices that have a statistically significant disparate impact on a certain group and cannot be justified as a business necessity. The EEOC expects companies that use algorithms and AI to take reasonable measures to test the algorithms’ functionality in real-world scenarios to ensure the results are not biased; in addition, the EEOC expects companies to test their algorithms often. The EEOC has also redefined the protected category of “sex”, for example, to include sexual orientation and gender identity. With these changes, it is possible that the number and type of individuals protected from discrimination will continue to expand.
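One common way to check whether a disparity in selection rates is statistically significant is a chi-square test on a 2x2 table of outcomes by group. The sketch below is our illustrative example with made-up counts, not an EEOC-mandated methodology.

```python
# Illustrative significance test on hypothetical counts from an algorithmic
# screening tool: does the difference in selection rates between two groups
# exceed what chance alone would plausibly produce?
from scipy.stats import chi2_contingency

#                 hired   not hired
contingency = [[120, 380],   # group A: 24% selection rate
               [ 75, 425]]   # group B: 15% selection rate

chi2, p_value, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")
# A small p-value (conventionally < 0.05) suggests the disparity is unlikely to
# be due to chance, shifting attention to whether the selection procedure is
# job related and consistent with business necessity.
```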

How Businesses Are Mitigating Risk

Lacking any concrete laws or guidelines, how can businesses mitigate the risks around algorithmic hiring systems? The key is extreme vigilance and strong contracting practices if or when your business relies on AI in recruiting and selecting candidates, even when relying on third-party vendors. Companies are responsible for ongoing assessments and audits of their own algorithms and hiring practices. If a third party is providing or managing the algorithms used to make hiring decisions, it is still up to the employer to scrutinize validation claims and results before acting. It is also wise to consider including indemnification, hold harmless clauses, and appropriate disclaimers in any agreements. The Beckage Emerging Technologies team and AI Practice Group are ready to help assess how your business can use algorithms in its hiring practices effectively and responsibly and to help clients deploying AI-driven services and products in areas such as compliance with laws and regulations, data privacy issues, and AI governance and ethics.

*Attorney Advertising. Prior results do not guarantee a similar outcome.
