BIPA Suits Against Third Parties: An Emerging Trend

Companies should take note of the recent expansion of biometric privacy laws, which could significantly affect their businesses by changing how they collect and process biometric data and how their third-party vendors handle such data.

Background on BIPA

The Illinois Biometric Information Privacy Act (BIPA) took effect on October 3, 2008, and regulates how “private entities” collect, use, and share biometric information and biometric identifiers, collectively known as biometric data.  BIPA imposes several requirements on private entities, including:

1. Developing a publicly available written policy regarding the retention and destruction of biometric data in an entity’s possession.

2. Providing required disclosures and obtaining written releases before collecting biometric data.

3. Prohibiting the sale of biometric data.

4. Prohibiting the disclosure of biometric data without obtaining prior consent.

Expansion of BIPA to Third Party Vendors

In a significant turn of events, courts in Illinois are applying BIPA to third-party vendors that have no direct relationship with plaintiffs but whose products are used by plaintiffs’ employers, or in other settings, to collect plaintiffs’ biometric data.

This is an alarming expansion of BIPA’s scope, one of which all third-party providers should be aware.  Under this case law, putting a biometric-collecting product into the stream of commerce does not immunize the product’s manufacturer from suit in Illinois.

Since BIPA’s passage, numerous class action suits have been filed against entities alleged to have collected plaintiffs’ biometric data, but claims against vendors that sell biometric equipment are growing rapidly.  These claims allege not that the plaintiffs had direct contact with the vendor defendants, but that the defendants obtained the plaintiffs’ biometric data through timekeeping equipment without complying with BIPA’s requirements.

Recently, the U.S. District Court for the Northern District of Illinois held that a biometric time clock vendor could be liable for violations of BIPA in the employment context, extending liability to parties that “collect” biometric information.

In another recent decision, Figueroa et al. v. Kronos, the court held that the plaintiffs sufficiently alleged that the collection function extended to Kronos, which was therefore responsible, along with the employer, for obtaining the required employee consent.

These cases, among others, signal that third-party vendors are increasingly named as defendants in BIPA consent cases, and they broaden the third-party contribution claims employers can bring against vendors of biometric clocks for failure to obtain required consent.  These decisions also allow insured employers to seek contribution from clock vendors for any judgment assessed against the employer under an Employment Practices Liability (EPL) policy.

However, BIPA’s Section 15(a), which requires publicly available policies for the retention and destruction of biometric data, can make it difficult for plaintiffs to bring claims against third parties in federal court, because Section 15(a) claims raise an issue of standing.  Personal jurisdiction is another hurdle.  A court could exercise general personal jurisdiction over a vendor in connection with a BIPA claim if the vendor maintained continuous and systematic contacts with Illinois.  If the vendor is located in the forum state, there is no jurisdictional dispute, but because many vendors sell their equipment nationally, courts must often address whether they have specific personal jurisdiction over the vendor.

For example, in Bray v. Lathem Time Co., the plaintiffs alleged in the U.S. District Court for the Central District of Illinois that the defendant sold a facial-recognition timekeeping product to the plaintiffs’ employer and violated BIPA by failing to notify employees and obtain their consent.  The plaintiffs had no dealings with the defendant, which was located in Georgia but was sued in Illinois.  The court found no contacts between the defendant and the State of Illinois, concluding that the timekeeping equipment had been sold to an affiliate of the plaintiffs’ employer and then transferred to Illinois by the employer.  The court therefore held that it lacked personal jurisdiction over the defendant vendor.

Expansion of BIPA Outside Illinois?

Because many vendors are located outside Illinois, a question arises whether BIPA applies to conduct in other states.  While BIPA applies to violations occurring in Illinois, upcoming class suits may address whether BIPA has extraterritorial effect when claims are brought against out-of-state vendors.  The extraterritorial application of BIPA is fact-dependent, and courts have acknowledged that, because extraterritoriality may have to be evaluated on an individual basis, decertifying a class may be appropriate.  Companies collecting, using, and storing biometric information face an increased risk of BIPA lawsuits.

Takeaways

All companies should assess whether they are collecting biometric data, directly or through third parties.  The next step is to evaluate the legal requirements for handling such data.  Note that many state data breach laws treat biometric data as protected personally identifiable information (PII).  Companies should take steps to comply with applicable laws, including developing policies and practices for handling biometric data.  Contracts with third-party vendors should also be reviewed to help protect the business if biometric data is mishandled.

About Beckage

At Beckage, we have a team of skilled attorneys who can assist your company in developing BIPA-compliant policies that help mitigate the risks associated with collecting biometric information.  Our lawyers are also technologists who can help you better understand the legal implications surrounding BIPA and the repercussions that can follow.

*Attorney Advertising. Prior results do not guarantee future outcomes.*

The Risks Associated with Disinformation and Deep Fakes

Disinformation is the deliberate spreading of false information about individuals or businesses to influence public perceptions of those people and entities.  Media manipulated by computers, known as deep fakes, heighten these dangers.  Deep fakes can be photos, videos, audio, or text manipulated by artificial intelligence (AI) to portray known persons acting or speaking in an embarrassing or incriminating way.  As deep fakes become more believable and easier to produce, disinformation is spreading at alarming rates.  Some risks that arise from disinformation include:

· Damage to Reputation

Disinformation campaigns target companies of all sizes with rumors, exaggerations, and lies that harm a business’s reputation, often for economic strategy and gain. Remedying reputational damage may require large sums of money, time, and other resources to prove the media was forged.

· Blackmail and Harassment

Photos, audio, and text manipulated by AI can be used to embarrass or extort business leaders, politicians, or public figures through the media.

· Social Engineering and Fraud

Deep fakes can be used to impersonate corporate executives and facilitate fraudulent wire transfers.  These tactics are a new variation of Business E-mail Compromise (BEC), traditionally understood as an impersonator gaining access to an employee’s or business associate’s email account with the intent to trick companies, employees, or partners into sending money to the infiltrator.

· Credential Theft and Cybersecurity Attacks

Hackers can also use sophisticated impersonation and social engineering to obtain information technology credentials from unknowing employees.  After gaining access, the hacker can steal company data and personally identifiable information or infect the company’s systems with malware or ransomware.

· Fraudulent Insurance Claims

Insurance companies rely on digital graphics to settle claims, but photographs are becoming less reliable as evidence because they are easy to manipulate with AI.  Insurance companies will need to modify policies, training, practices, and compliance programs to mitigate risk and avoid fraud.

· Market Manipulation

Another way scammers seek to profit from disinformation is through fake news reports and social media schemes that use phony text and graphics to move financial markets.  Traders who rely on algorithms driven by social posts and headlines to make market decisions may find themselves prey to these schemes.  As access to realistic but manipulated video and audio increases, such disinformation will become substantially more believable and difficult to correct.

· Falsified Court Evidence

Deep fakes also pose a threat to the authenticity of media evidence presented to the court.  If falsified video and audio files are entered as evidence, they have the potential to trick jurors and impact case outcomes.  Moving forward, courts will need to be trained to scrutinize potentially manipulated media.

· Cybersecurity Insurance

Cybersecurity insurance helps cover businesses from financial ruin but has not historically covered damages due to disinformation.  Private brands, businesses, and corporations should consider supplementing their current insurance policies to address disinformation to help protect themselves from risk.

Legal Options

There are legal avenues that can be pursued in responding to disinformation.  Deep fakes that falsely depict individuals in a demeaning or embarrassing way may be subject to laws regarding defamation, trade libel, false light, violation of the right of publicity, or intentional infliction of emotional distress, particularly where the deep fake contains the image, voice, or likeness of a public figure.

Preventative Steps

Apart from understanding the risks associated with disinformation, companies can work to protect themselves from disinformation and deep fakes by:

1. Engaging in social listening to understand how a company’s brand is viewed by the public.

2. Assessing the risks associated with the business’s practices.

3. Registering the business’s trademarks to gain the protection of federal law.

4. Having an effective incident response plan in the event of disinformation, deep fakes, or data breach to mitigate costs and prevent further loss or damage.

5. Communicating with the social media platforms on which disinformation is being spread.

6. Speaking directly to the public, the media, and their customers via social media or other means.

7. Bringing a lawsuit if the business is defamed or the market is manipulated.

What To Do When Facing Disinformation

If a business is facing disinformation, sophisticated tech lawyers can help determine its rights and identify technological solutions to mitigate harm.  Businesses are not defenseless in the face of disinformation and deep fakes, but they should expand their protective measures to mitigate the associated risks.

About Beckage

Beckage is a team of skilled technology attorneys who can help you protect your company from cyber attacks and defamation caused by disinformation and deep fakes. Our team of certified privacy professionals and lawyers can help you navigate the legal scope of the expanding field of disinformation.

*Attorney Advertising.  Prior results do not guarantee similar outcomes.*

Algorithmic Bias – What Businesses Need to Know

Algorithms, artificial intelligence (AI), “data scraping,” and other means of evaluating vast amounts of information about people have become widespread and are increasingly common tools in the hiring toolbox. As predicted, the use and scope of big data have grown exponentially over the past several years and continue to influence employment and hiring decisions. We are operating in a world where automated algorithms make impactful decisions that amplify the power of business. However, as with any new technology, the legal landscape for businesses is rapidly changing, so it is critical to closely evaluate these tools before incorporating them into your hiring practices. Why? Because these tools may unintentionally discriminate against a protected group.

The challenge is straightforward: AI algorithms are based on datasets collected or selected by humans. That means those data sets are subject to intentional or unintentional bias, which could lead to biased algorithmic models. Examples of algorithmic bias have already started popping up in the news. In 2018, for example, a large company decided to scrap its proprietary hiring algorithm when it discovered the algorithm was biased in favor of men, simply because the algorithm was trained on patterns from resumes received over the past 10 years—resumes that were mostly from men because the tech industry skews male. So, rather than taking away the existing bias against women in technology, this company’s system amplified the bias.
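To make the mechanism concrete, here is a minimal sketch using synthetic, hypothetical data: a simple model is trained on historical hiring decisions in which equally skilled members of one group were hired less often, and it then reproduces that skew when scoring a new applicant pool. The group labels, numbers, and choice of a scikit-learn logistic regression are illustrative assumptions, not a description of any real system.

```python
# Illustrative sketch only: synthetic data showing how a model trained on
# historically skewed hiring decisions can reproduce that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)              # 0 = "group A", 1 = "group B" (hypothetical)
skill = rng.normal(0.0, 1.0, n)            # skill distributed identically across groups
# Historical labels: equally skilled members of group B were hired less often.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Score a fresh, equally qualified applicant pool.
new_group = rng.integers(0, 2, n)
new_features = np.column_stack([new_group, rng.normal(0.0, 1.0, n)])
predicted_hire = model.predict(new_features)

for g, label in [(0, "group A"), (1, "group B")]:
    rate = predicted_hire[new_group == g].mean()
    print(f"Predicted hire rate for {label}: {rate:.2f}")
```

The specific model is beside the point: any learner fit to skewed historical outcomes will tend to encode that skew unless the outputs are measured against it and corrected.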

How the EEOC is Handling Algorithmic Discrimination

In the face of increasingly broad use of algorithms, the Equal Employment Opportunity Commission (EEOC) is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of their membership in a protected class. The EEOC has begun to challenge hiring and employment practices that have a statistically significant disparate impact on a certain group and cannot be justified as a business necessity. The EEOC expects companies that use algorithms and AI to take reasonable measures to test the algorithms’ functionality in real-world scenarios to ensure the results are not biased; in addition, it expects companies to test their algorithms often. The EEOC has also interpreted the protected category of “sex,” for example, to include sexual orientation and gender identity. With these changes, the number and type of individuals protected from discrimination will likely continue to expand.
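One widely used screen for this kind of disparate impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate falls below 80% of the highest group’s rate, the practice merits closer review. The sketch below shows how an employer might run that check on an algorithmic screening tool’s pass-through results; the group names and counts are hypothetical.

```python
# Illustrative four-fifths-rule check with hypothetical numbers.
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants); returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {group: selected / applicants
             for group, (selected, applicants) in outcomes.items()}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical pass-through counts from an algorithmic screening step.
ratios = adverse_impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
for group, ratio in ratios.items():
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

A ratio below 0.8 does not by itself establish a violation; it simply flags a practice for the closer, statistically grounded review described above.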

How Businesses Are Mitigating Risk

Lacking any concrete laws or guidelines, how can businesses mitigate the risks around algorithmic hiring systems? The key is extreme vigilance and strong contracting practices whenever your business relies on AI to recruit and select candidates, even when third-party vendors are involved. Companies are responsible for ongoing assessments and audits of their own algorithms and hiring practices. If a third party provides or manages the algorithms used to make hiring decisions, it is still up to the employer to scrutinize validation claims and results before acting on them. It is also wise to consider including indemnification and hold-harmless clauses and appropriate disclaimers in any agreements. The Beckage Emerging Technologies team and AI Practice Group are ready to help assess how your business can use algorithms in its hiring practices effectively and responsibly, and to help clients deploying AI-driven services and products in areas such as compliance with laws and regulations, data privacy, and AI governance and ethics.

*Attorney Advertising. Prior results do not guarantee a similar outcome.*

2019 Year in Review: Beckage Blog Top 5

The end of the year is finally upon us. As the year draws to a close, we look back over our most popular blog posts of 2019. From understanding New York’s SHIELD Act to website accessibility claims under the Americans with Disabilities Act and gearing up for the California Consumer Privacy Act (CCPA), it has certainly been a great year for the Beckage team. We pride ourselves on producing informative and timely content for our community in this fast-moving legal landscape. In case you missed any of our top posts, we have picked out our very best from 2019. We thank you all for your continued support, and Happy Holidays from all of us!

Recent Lawsuit Provides Insight on Intersection of AI Use and Healthcare Data

In the healthcare sector, Artificial Intelligence (AI) is changing the way hospitals, providers and insurers do business.  AI platforms offer the promise of more efficient detection, diagnosis and treatment, improved clinical workflows to increase patient time with providers, and broader reach of clinical services.  One challenge is balancing the massive amounts of clinical data needed to inform AI with individual privacy and control of sensitive information.  
