Regulating Online Content – The Balance Between Free Speech and Free-For-All

This year kicked off with an explosive culmination to the ongoing tensions between free speech and social media, with Twitter bans, lawsuits and enduring questions about who gets to regulate content on the internet—or if it should be regulated at all. America is distinctly uncomfortable with the government stepping in to regulate speech. But public pressure has forced Big Tech to fill the void, spurring claims of unfair treatment and violations of First Amendment rights.

At the heart of the matter: unlike other countries that have laws against hate speech and fake news, America has largely left it to private companies to decide what content is acceptable, with little legal obligation to explain their choices. Compounding the problem is what some argue is the enormous power that a few big tech companies wield over our online infrastructure and channels of communication, leaving some to wonder if service providers like Facebook should really be treated more like a utility, with government regulations to match.

Are there restrictions or laws regulating online content?

In Reno v. American Civil Liberties Union, the U.S. Supreme Court declared speech on the Internet equally worthy of the First Amendment’s historical protections. That means pornography, violent films, and explicit racism are all fair game on social media in the eyes of the law. The government deems only very narrow categories of speech criminal, such as “true threats,” or language that is explicitly intended to make an individual or group fear for their life or safety. It is worth noting, however, that arguing a politician should be shot would not necessarily meet the criteria for incitement or a true threat.

America has long held tightly to an interpretation of the First Amendment that protects the free marketplace of ideas, even when it comes at a cost. Landmark cases like Brandenburg v. Ohio, which protected the speech of a Ku Klux Klan leader, have solidified our particularly high bar for punishing inflammatory speech.

But America has also supported the right of private companies to decide what kind of speech is appropriate in their venues and, by extension, their virtual squares. Unlike most of the world, where ISPs are subject to state mandates, content regulation in the United States mostly occurs at the private or voluntary level. Social media companies are allowed to set their own user policies and are expected to self-regulate, creating internal speech policies that, in theory, protect against unfair censorship.

Beyond the social media companies themselves, the regulators and legal recourse that do exist present their own set of problems. ICANN, the non-profit that controls contracts with internet registries (.com, .org, .info, etc.) and registrars (companies that sell domain names), has immense power over who gets to claim a domain name—and ICANN decisions are not subject to speech claims based on the First Amendment. The Digital Millennium Copyright Act (DMCA), designed to offer anti-piracy protections, is often used as a tool of intimidation or as a means for companies to keep tight control over how consumers use their copyrighted works, stifling free speech in the process. Apple, for example, tried to use the DMCA in 2009 to silence members of the online forum BluWiki who were discussing how to sync music playlists between iPods and iPhones without having to use iTunes. John Deere refuses to unlock its proprietary tractor software to let farm owners repair their own vehicles, leaving tractor owners in fear of DMCA lawsuits if they try to crack the software protections themselves.

The Growing Pressure to Regulate Content

In the absence of legal pressure, public opinion seems to be the real driver of online content regulation. It was a tipping point of public outrage that finally pushed big tech to ban the president and Parler. Apple pulled Tumblr from the App Store in 2018 because it was failing to screen out child sex abuse material, but only after multiple public complaints. After decades of proudly promoting free speech regardless of the consequences, companies like Facebook are now being forced by external pressure to police their domains, using legions of reviewers to flag harmful content.

While the world grapples with how to manage online speech, it’s clear that businesses will continue to face a variety of legal, social, and moral pressures regarding the content they provide or facilitate—and they must be prepared to monitor and account for what goes on in their virtual public spaces. Companies that allow the posting of content – words, photos, videos – have a slew of laws to consider, including free speech rights and controls. Companies should work with sophisticated and experienced tech legal counsel, like Beckage, to address these issues.

Subscribe to our newsletter.

*Attorney Advertising.  Prior results do not guarantee future outcomes.

DFS February 2021 Guidance To Cyber Insurers

On February 4, 2021, the New York State Department of Financial Services (DFS) issued specific guidance to property/casualty insurers writing cyber insurance policies, known as the Cyber Insurance Risk Framework (“Framework”). The DFS described itself as the first regulator in the nation to issue specific guidance on cyber insurance, explaining that the Framework’s suggestions are based on continued dialogue with the insurance industry and experts in cyber insurance regarding the shifting cybersecurity landscape.

With the Covid-19 pandemic forcing companies to shift to a remote workforce, cybercrimes like ransomware and malware attacks have drastically increased in frequency, severity, and cost to victimized companies. Cybercriminals use payments extorted from ransomware to fund more frequent and sophisticated ransomware attacks, emboldening them to target other organizations and widen their campaigns. The widespread use of ransomware has pressured cyber insurers to increase rates and tighten underwriting standards for cyber insurance.

The DFS advises New York regulated property/casualty insurers offering cyber insurance to establish a formal strategy for measuring cyber insurance risks that can be approved by a board or a governing entity. The Framework acknowledges that strategies should be proportionate to each insurer’s risk based on the insurer’s size, resources, geographic distribution, market share, and industries insured. It is important to note that the Framework constitutes a list of best practices and suggested approaches and does not yet constitute rules or regulations for the insurance industry.

The Cyber Insurance Risk Framework encourages cyber insurers to formalize a Cyber Insurance Risk Assessment Strategy that is managed by a governing body and establishes and/or formalizes qualitative and quantitative measures and goals for cyber risk that incorporate six best practices identified by DFS:

1. Manage and Eliminate Exposure to “Silent” Cyber Insurance Risk

Cyber insurers should determine whether they are exposed to silent or non-affirmative cyber insurance risk, an insurer’s obligation to cover cyber incident losses under a policy that does not explicitly mention cyber incidents. The Framework suggests that insurers evaluate their silent risk exposure and take steps to minimize that exposure.

2. Evaluate Systemic Risk

Cyber insurers should conduct regular systemic risk evaluations and plan for potential losses. Increased reliance on third-party vendors has caused systemic risk to grow exponentially and thus, insurers should understand the third parties used by their insureds and model the effect of catastrophic cyber events that may result in simultaneous losses.

3. Rigorously Measure Insured Risk by Using Data

Cyber insurers should use a comprehensive, data-driven approach to assess their insured’s potential gaps and cybersecurity vulnerabilities.

4. Educate Insureds and Insurance Producers

Cyber insurers should educate their insureds and insurance producers about the value of cybersecurity measures and the need for, benefits of, and limitations of cyber insurance.

5. Obtain Cybersecurity Expertise

Cyber insurers can use strategic recruiting practices to hire employees with cybersecurity experience and invest in their training and development.

6. Require Notice to Law Enforcement

In the event of a cyberattack, cyber insurance policies should require that victims notify and engage law enforcement agencies to help recover lost data and funds.

This guidance brings operational and other challenges to those in the property/casualty insurance market, and it adds new potential requirements for insurers to pass along to their insureds. For example, insureds may not know that their policy will require notification of law enforcement, and they may have reasons not to notify; choosing not to could lead to a coverage dispute.

Beckage advises those in the insurance industry on risk management, cybersecurity best practices and measures, third-party vendor management, and incident response.  Beckage also works with global clients to evaluate risk management, including opportunities to obtain various cyber and tech related coverage. We can be reached 24/7 via our data breach hotline at 844.502.9363 or IR@beckage.com.


Virginia, Oklahoma, and Florida Join Growing List of States With Proposed Privacy Legislation

Since the California Consumer Privacy Act (CCPA) was passed in 2018, Beckage has seen a slew of other states follow suit in proposing and enacting their own comprehensive data privacy bills. Most recently, lawmakers in Virginia, Oklahoma, and Florida have joined the growing list of states with proposed privacy bills. So far this year, New York, Washington, and Minnesota have also introduced legislation governing the ways companies collect, store, use, and share consumer data, and we expect to see other laws emerge in the coming months, with still no federal data privacy bill in sight.

Working with experienced privacy counsel can help companies build out data privacy programs that stand the test of time and contemplate emerging legislation.

Below is an overview of the Virginia, Oklahoma, and Florida proposed bills, their requirements, and their potential impact on the data privacy landscape.

Virginia Consumer Data Protection Act (SB 1392) 

The Virginia proposal is quickly moving through the Virginia state legislature and is likely to be the next comprehensive state data privacy law on the books. This bill passed the Virginia House of Delegates on January 29th by a wide margin and was unanimously approved in the Senate on February 3rd. Assuming Governor Northam signs it into law, the Virginia Consumer Data Protection Act is set to go into effect on January 1, 2023. 

Who Does It Apply To? 

Companies that conduct business in Virginia or “produce products or services that are targeted to” Virginians would have to comply with the Virginia Consumer Data Protection Act if they: 

  • Control or process the personal data of at least 100,000 Virginians; or 
  • Control or process the personal data of at least 25,000 Virginians and derive over 50% of their gross revenue from the sale of that data. 

The legislation does provide exemptions for financial institutions governed by the Gramm-Leach-Bliley Act, entities subject to HIPAA or HITECH, non-profits, and educational institutions.
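As a purely illustrative sketch (and not legal advice), the applicability thresholds and exemptions described above can be expressed as a simple decision rule. All function and parameter names below are hypothetical, and the “conduct business in Virginia or target Virginians” prong is simplified to a single flag:

```python
# Illustrative sketch of the Virginia Consumer Data Protection Act's
# applicability thresholds as described above. A simplification for
# explanatory purposes only, not legal advice; all names are hypothetical.

def vcdpa_applies(
    does_business_in_virginia: bool,      # or targets products/services to Virginians
    virginians_processed: int,            # Virginians whose personal data is controlled/processed
    revenue_share_from_data_sales: float, # fraction of gross revenue from selling that data (0.0-1.0)
    is_glba_financial_institution: bool = False,
    is_hipaa_or_hitech_entity: bool = False,
    is_nonprofit_or_educational: bool = False,
) -> bool:
    """Return True if the entity would plausibly fall under the Act."""
    # Exemptions described in the text
    if (is_glba_financial_institution or is_hipaa_or_hitech_entity
            or is_nonprofit_or_educational):
        return False
    if not does_business_in_virginia:
        return False
    # Threshold 1: personal data of at least 100,000 Virginians
    if virginians_processed >= 100_000:
        return True
    # Threshold 2: at least 25,000 Virginians AND over 50% of gross
    # revenue derived from the sale of that data
    return virginians_processed >= 25_000 and revenue_share_from_data_sales > 0.5
```

A real applicability analysis turns on statutory definitions (e.g., what counts as a “sale” or a “consumer”), which this sketch does not capture.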

What Is Included? 

Included in this Bill are several requirements not covered under the CCPA or any other U.S. privacy law. One such obligation requires entities that control personal data to conduct protection assessments of any activities that use personal data for specific purposes, such as targeted advertising. These data protection assessments may be requested and evaluated by the attorney general to ensure compliance. 

This Act would afford Virginia consumers several rights regarding their personal data, including the right to opt out of the sale of their information or its use for targeted advertising or profiling. It would also allow consumers, upon request, to delete their data, move their data, correct inaccuracies in their data, and confirm whether their data is being processed.

Notably missing is a private right of action through which consumers could seek damages for alleged violations. Instead, enforcement of the Act would be left exclusively to the attorney general, who may seek up to $7,500 per violation. 

Oklahoma Computer Data Privacy Act (HB 1602) 

Introduced on January 19, 2021 by Representatives Josh West (R) and Collin Walke (D), this Bill has bipartisan support in the Oklahoma House of Representatives. Its intended purpose is to give Oklahomans more online privacy by taking aim at tech companies. If passed, the Oklahoma Computer Data Privacy Act would go into effect on November 1, 2021. 

Who Does It Apply To? 

If passed, this act would apply to companies that operate in the state of Oklahoma and collect Oklahomans’ personal information or have information collected on their behalf, determine the purpose for and means of processing that information, and satisfy one of the following thresholds:

  • Has an annual gross revenue exceeding $10 million; 
  • Buys, sells, receives, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices annually; or 
  • Derives 25% or more of their annual revenue from the sale of personal data. 

What Is Included? 

Companies subject to this legislation would be required to disclose what personal information they hold on a consumer and allow for the deletion of that information upon the consumer’s request. This proposal also mandates consumers opt-in to providing their personal data, which differentiates it from most other state privacy laws, like the CCPA. The Oklahoma Computer Data Privacy Act also differs from the CCPA in its inclusion of a broad private right of action through which Oklahoma residents could seek damages up to $7,500 for violations. 

Florida House Bill 969 (HB 969) 

Introduced on February 15th by Representative Fiona McFarland (R), House Bill 969 would place several requirements on businesses that deal with Florida residents’ private information. If passed, it would go into effect on January 1, 2022. 

Who Does It Apply To? 

For-profit companies that do business in Florida and collect personal information about consumers, have personal information collected on their behalf, or determine the purpose and means of processing personal information would have to comply with this Bill’s requirements if they satisfy one of the following thresholds:

  • Has an annual gross revenue exceeding $25 million; 
  • Buys, sells, receives, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices annually; or 
  • Derives 50% or more of their annual revenue from the sale of personal data. 

What Is Included? 

HB 969 would require that applicable businesses notify consumers about their data collection and selling practices before or at the point of data collection. Under this Bill, consumers would also have the right to request their data be disclosed, corrected, or edited and the right to opt-out of having their personal information disclosed or sold to a third party. 

Applicable businesses would be required to implement reasonable security protocols to protect their consumers’ personal data. Also included is a private right of action through which a consumer “whose nonencrypted and nonredacted personal information or e-mail addresses are subject to unauthorized access” may seek damages for violations of the Bill. The Department of Legal Affairs would be authorized to bring other enforcement actions, with penalties of up to $2,500 per unintentional violation and $7,500 per intentional violation.

Potential Impact 

Currently, the data privacy landscape in the United States is a patchwork of enacted and proposed laws, each with its own requirements and consumer rights, creating a confusing web for companies operating in more than one jurisdiction. While advocates of these state privacy laws argue for the protection of consumers’ data in an increasingly digitally driven world, opponents argue that the risk of operating within states that have enacted comprehensive privacy laws may deter businesses from expanding their operations there.

A federal privacy law that could rectify the many differences between individual state laws would simplify this landscape, making it easier for companies to protect their consumers’ data and operate efficiently while complying with regulations.  

Beckage is closely monitoring these and other emerging privacy laws. In the meantime, companies that collect personal data should start thinking about privacy compliance by conducting a baseline privacy assessment and starting to develop relevant policies and procedures. Beckage attorneys, who are also technologists and certified privacy professionals, are happy to help counsel your business on compliance with the CCPA, GDPR, and other pending and enacted privacy legislation. We work with clients of all sizes to build out data privacy programs and address compliance matters.


Bipartisan Group Proposes New York Biometric Policy

In January of 2021, a bipartisan group of New York State lawmakers proposed a comprehensive policy that places restrictions on the collection of biometric information by companies operating in the state. Assembly Bill 27, the Biometric Privacy Act, would allow consumers to sue companies that improperly use or retain an individual’s biometric information. New York’s proposal follows Illinois’ Biometric Information Privacy Act (BIPA), the first and most robust state law guarding against the unlawful collection and storage of biometric information. Like BIPA, Assembly Bill 27 was created to regulate a company’s handling of biometric data, such as fingerprints, voiceprints, retina scans, and scans of hand and face geometry. Assembly Bill 27, however, does not cover writing samples, written signatures, photographs, or physical descriptions.

What Is Included?

The Biometric Privacy Act requires businesses collecting biometric identifiers or information to develop a written policy establishing a retention schedule and guidelines for permanently destroying the biometric data. The destruction of the data must occur when the initial purpose for collecting the biometric data has been “satisfied,” or within three years of the individual’s last interaction with the company, whichever occurs first. This bill also includes a private right of action that would allow consumers to sue businesses for statutory damages of up to $1,000 for each negligent violation and $5,000 for each intentional or reckless violation.
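The retention rule above (destroy the data when the collection purpose is satisfied, or three years after the individual’s last interaction with the company, whichever occurs first) can be sketched as a simple date calculation. This is an illustrative simplification, not legal advice; all names are hypothetical, and “three years” is approximated here as 1,095 days:

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative sketch of the retention rule described above: biometric data
# must be destroyed when the collection purpose is satisfied OR three years
# after the individual's last interaction, whichever occurs first.
# Not legal advice; "three years" approximated as 3 * 365 days.

THREE_YEARS = timedelta(days=3 * 365)

def destruction_deadline(last_interaction: date,
                         purpose_satisfied: Optional[date] = None) -> date:
    """Return the date by which the biometric data must be destroyed."""
    statutory_limit = last_interaction + THREE_YEARS
    if purpose_satisfied is None:
        return statutory_limit  # collection purpose still ongoing
    return min(purpose_satisfied, statutory_limit)
```

For example, if the purpose is satisfied before the three-year mark, the earlier date controls; if it is never satisfied, the three-year backstop applies.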

Further, AB 27 requires companies to obtain written consent from individuals before collecting, purchasing, or obtaining biometric information and to notify those individuals about the specific purpose for which, and length of time, the data will be collected, stored, and used. Companies are prohibited from selling, leasing, trading, and profiting from biometric information, and strict restraints are placed on a business’s ability to disclose biometric information to a third party without consumer consent.

The Impact of Biometrics on Future Legislation

With the increased volume of biometric information being used by companies leveraging biometric-driven timekeeping systems and other technologies, the push for biometric privacy policies that govern the use of these technologies and promote safeguards for employees is gaining momentum. Several states are also looking to amend their breach notification and security laws to include biometric identifiers. For example, New York State’s SHIELD Act, the breach notification law enacted in 2019, has already been expanded to include biometric data in its definition of private information.

At Beckage, we have a team of highly skilled lawyers that stay up to date on proposed and enacted legislation. With states looking to implement biometric privacy laws similar to BIPA, it is important to have legal tech counsel to address compliance with these emerging laws. Our team can help assist your company in assessing and mitigating risks associated with emerging technologies.


Eleventh Circuit Adds to Circuit Split on Whether Future Risk of ID Theft Can Support Data Breach Class Claims

Courts across the United States continue to struggle with whether individuals impacted by a company’s data breach have suffered harm that is concrete enough to support their claims in court. 

After they are notified of a data breach involving their personal data, impacted individuals often join together to bring class action claims against the business for its alleged failure to safeguard their data, breach of privacy promises regarding that data, and under applicable state consumer laws.

Data Breach Class Actions & Standing Requirements

One area courts have shown a willingness to scrutinize is whether these individuals have alleged, or can show they have experienced, actual harm from the data incident sufficient to satisfy the Article III constitutional requirement known as standing.

Plaintiffs continue to present novel theories of why access to their data by an unauthorized third party harmed them in a way that a court may remedy, especially in instances where no facts exist to show that their data has actually been misused. Plaintiffs will often allege that they lost some value associated with their data or with its use. By far the most prominent theory submitted by data breach plaintiffs is that these individuals are now at a higher risk of future identity theft and that future relief, such as credit monitoring, should be offered to protect against this risk.

But how great is this risk of future identity theft, really? According to a recent Eleventh Circuit decision, not substantial enough to support Article III standing.

The I Tan Tsao Decision

In affirming the dismissal of a customer’s proposed class action against the Florida-based fast-food chain PDQ over a data breach that allegedly exposed customers’ credit and debit card information, the Eleventh Circuit held that plaintiff I Tan Tsao did not present a sufficient injury claim as a basis for bringing the suit. Mr. Tsao alleged that he and members of his class were at an elevated risk of future identity theft due to the restaurant chain’s breach and that he had to take certain mitigative steps, such as cancelling his credit cards, to reduce this risk. Plaintiff Tsao relied primarily on a 2007 GAO report on data breaches in support of his theory.

The Eleventh Circuit did not find Mr. Tsao’s hypothetical future risk of identity theft compelling enough for Article III standing purposes.

“We hold that Tsao lacks Article III standing because he cannot demonstrate that there is a substantial risk of future identity theft — or that identity theft is certainly impending — and because he cannot manufacture standing by incurring costs in anticipation of non-imminent harm,” the three-judge panel said.

Relying on the U.S. Supreme Court’s decision in Clapper v. Amnesty International USA, the Eleventh Circuit concluded that a plaintiff alleging a hypothetical harm does not have standing unless that harm is either “certainly impending” or represents a “substantial risk” of harm. And if the alleged risk does not rise to those levels, a plaintiff cannot “conjure standing by inflicting some direct harm on itself to mitigate a perceived risk.”

The Eleventh Circuit also rejected Mr. Tsao’s use of the GAO report, holding that the report’s findings actually supported the conclusion that the limited data potentially exposed here – credit and debit card numbers alone – did not lead to a higher incidence of future identity theft.

Nor could Mr. Tsao’s mitigative steps – cancelling his credit card, which he alleged led to a period of restricted access to his account and lost reward points – manufacture a harm for standing purposes. “It is well established that plaintiffs cannot manufacture standing merely by inflicting harm on themselves based on their fears of hypothetical future harm that is not certainly impending,” the Circuit court held, citing Clapper.

The Court’s decision in I Tan Tsao v. Captiva MVP Restaurant Partners LLC aligns it with the Second, Third, Fourth, and Eighth Circuit Courts of Appeals, which have rejected the theory, while the Sixth, Seventh, Ninth, and D.C. Circuits have accepted it.

The Supreme Court has yet to hear an Article III standing case in the data breach context, leading legal observers to wonder whether the I Tan Tsao decision now presents the high Court with an opportunity to provide such guidance.

Beckage is monitoring developments in this case and other data breach class actions that may provide guidance for future litigation.  Our Litigation team has worked on some of the largest data breach and privacy class actions in the country and can help your business develop a litigation strategy that will result in a successful outcome and minimal disruption to your everyday work.  Learn more about our Litigation Practice Group here.

Subscribe to our newsletter.

*Attorney advertising. Prior results do not guarantee future outcomes.
