Jordan L. Fischer Named A 2021 Super Lawyers Rising Star For Third Year In A Row

Jordan L. Fischer, Esq., has been named to the 2021 Pennsylvania Rising Stars list for outstanding lawyers who are 40 years old or younger or have been in practice for 10 years or less. This is her third straight year appearing on this list. Each year, no more than 2.5 percent of the lawyers in the state are selected by the research team at Super Lawyers to receive this honor.

Fischer leads Beckage’s Global Privacy team where she represents clients in cross-border data management, creating cost-effective and business-oriented approaches to cybersecurity, data privacy and technology compliance. She practices in several jurisdictions throughout the United States in both state and federal courts, as well as internationally in both Europe and Asia.

At Beckage, she provides counsel to clients on a wide variety of regulatory requirements, including the General Data Protection Regulation and its implementing member state laws, the California Consumer Privacy Act, the Fair Credit Reporting Act, the Driver’s Privacy Protection Act, biometric data laws, global data breach standards, and federal and state unfair business practices acts. She also provides counsel on a variety of security and privacy frameworks, including the International Organization for Standardization (ISO) 27001 and 27701 standards, the National Institute of Standards and Technology cyber and privacy frameworks, and the Payment Card Industry Data Security Standard.

Super Lawyers, a Thomson Reuters business, is a rating service of outstanding lawyers from more than 70 practice areas who have attained a high degree of peer recognition and professional achievement. The annual selections are made using a patented multiphase process that includes a statewide survey of lawyers, an independent research evaluation of candidates and peer reviews by practice area. The result is a credible, comprehensive, and diverse listing of exceptional attorneys.

About Beckage
Beckage is a women-owned law firm that focuses on technology, data security, and privacy. Our attorneys counsel clients on matters pertaining to data security and privacy compliance, litigation and class action defense, incident response, government investigations, technology intellectual property, and emerging technologies such as Artificial Intelligence (AI), digital currencies, Internet of Things (IoT) devices, and 5G networks. Beckage has offices from California to New York. Learn more at Beckage.com.

###

Contact: Brendan Kennedy
bkennedy@beckage.com
845.216.8194

In the Face of Huge Settlements, BIPA May Soon Be Losing Its Bite

Illinois lawmakers are considering a bill that has the potential to dramatically rein in the state’s strict Biometric Information Privacy Act (“BIPA”).  On March 9, 2021, the Illinois House Judiciary Committee advanced House Bill 559 (the “Bill”), which would amend BIPA.  The Bill contains several key amendments that may impact your business.

First, the Bill changes BIPA’s “written release” requirement to instead simply require “written consent.”  Thus, under the Bill, businesses would no longer be required to obtain a written release, but instead could rely on electronic consent.

Second, whereas BIPA currently requires that a business in possession of biometric identifiers draft a written policy regarding its handling of biometric data and make it available to the general public, under the Bill, businesses would only be required to provide this written policy to affected data subjects.

Third, the Bill creates a one-year statute of limitations for BIPA claims.  Moreover, the Bill provides that, prior to initiating a claim, a data subject must provide a business with 30 days’ written notice identifying the alleged violations.  If the business cures those violations within the 30-day window and provides the data subject with an express written statement indicating that the issues have been corrected and that no further violations shall occur, then no action for individual or class-wide statutory damages can be brought against the business.  If the business continues to violate BIPA in breach of the express written statement, the data subject can initiate an action against the business to enforce the written statement and may pursue statutory damages.  Therefore, the Bill not only creates a statute of limitations, but also provides a mechanism by which businesses can respond to alleged violations of BIPA before engaging in costly litigation.

Fourth, the Bill modifies BIPA’s damages provisions.  Currently, BIPA provides that a prevailing plaintiff is entitled to liquidated damages of $1,000 or actual damages, whichever is greater, when a business is found to have negligently violated BIPA.  The Bill would limit a prevailing plaintiff’s recovery to actual damages only.  Similarly, in its current form, BIPA provides that a prevailing plaintiff is entitled to liquidated damages of $5,000 or actual damages, whichever is greater, when a business is found to have willfully violated BIPA.  The Bill would limit a prevailing plaintiff’s recovery to actual damages plus liquidated damages up to the amount of actual damages.  Therefore, the Bill would limit a business’s exposure in BIPA claims to what a prevailing plaintiff can demonstrate as actual damages.
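To make the practical effect of these damages changes concrete, the short sketch below compares the per-violation recovery described above under current BIPA with the ceiling described in the Bill.  It is an illustrative sketch only, using hypothetical figures and function names; it is not legal advice or statutory text.

```python
# Illustrative sketch only: per-violation recovery as described above.
# Figures and function names are hypothetical; this is not legal advice.

def current_bipa_recovery(actual_damages: float, willful: bool) -> float:
    """Current BIPA: the greater of liquidated damages ($1,000 negligent,
    $5,000 willful) or actual damages."""
    liquidated = 5_000 if willful else 1_000
    return max(liquidated, actual_damages)

def hb559_recovery_ceiling(actual_damages: float, willful: bool) -> float:
    """Under the Bill: actual damages only for negligent violations; actual
    damages plus liquidated damages capped at the amount of actual damages
    (i.e., at most double actual damages) for willful violations."""
    return 2 * actual_damages if willful else actual_damages

# A plaintiff with no provable actual damages illustrates the shift:
print(current_bipa_recovery(0, willful=False))   # 1000
print(hb559_recovery_ceiling(0, willful=False))  # 0
```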

Finally, the Bill provides that BIPA would not apply to a business’s employees if those employees are covered by a collective bargaining agreement, an issue that has arisen in recent BIPA litigation, as discussed here.

BIPA litigation has increased dramatically and has resulted in a number of recent high-profile settlements, including TikTok’s $92 million settlement and Facebook’s $650 million settlement.  This Bill has the potential to greatly curtail this spiral of litigation and high settlement figures.  Beckage will continue to monitor developments regarding the Bill and will update its guidance accordingly.  Our team of experienced attorneys, who are also devoted technologists, is especially equipped with the skills and experience necessary not only to develop a comprehensive and scalable biometric privacy compliance program but also to handle any resulting litigation.

Subscribe to our newsletter.

*Attorney Advertising.  Prior results do not guarantee future outcomes.

With 5G, will your thermometer need malware protection?

5G is perhaps the biggest critical infrastructure build the world has seen in twenty-five years.  It will allow for the connection of millions of Internet of Things (“IoT”) devices.  However, with these added benefits come related vulnerabilities and cybersecurity risks.

What specific cybersecurity risks are associated with the 5G network?

First, the 5G network itself can pose many security risks.  The 5G infrastructure is built using many components, each of which may be corrupted through an insecure supply chain.  Significantly more software is being used, allowing for more entry points and more potential vulnerabilities.  Similarly, more hardware devices are required (cell towers, beamforming devices, small cells, etc.), and each one of these hardware devices must be adequately secured.  Small, local cells may be more physically accessible and therefore subject to physical attack.  Further, 5G will be built, in part, on legacy 4G LTE components, which themselves can have vulnerabilities.

Second, with specific focus on IoT devices, cybersecurity protections will need to become much more granular and more capable of being deployed on less intelligent “Things.”  Historically, one could think of a Thing as a device that could be connected to a network but lacked sufficient processing power to handle more advanced computations.  Things are “dumb.”  By connecting a processor, we can make such dumb Things “smart.”  These new smart IoT devices are attractive attack vectors for malicious actors and further complicate overall cybersecurity programs.  Even the ability to detect a cyber attack on a light bulb will require additional cybersecurity solutions.

Finally, with 5G facilitating the implementation of more IoT devices, more sensitive data may be stored, creating a need to protect the edge computers servicing those devices.  If we consider how ubiquitous thermometer scanning has become, and how those and similar IoT devices could easily become part of 5G, we begin to understand the seemingly exponential growth of threat vectors on our networks.  We may have sensitive data (Am I sick?  What time do I show up for work?), and we may have the concern that a malicious actor will look to infect a network through a Thing.  Will thermometers need malware protection?  More devices arguably mean more places for a hacker to attempt an attack, and thus a greater risk of distributed denial-of-service (DDoS) attacks.  There have been reports of Things being used collectively to deny service on LTE networks.  With 5G, the prospect of an army of coffee makers attacking by all issuing requests to a single address becomes more realistic, and manufacturers could be liable to other parties if their insecure Things are used to deny someone else’s service.
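One practical implication is the need for baseline monitoring of device behavior.  The sketch below is a minimal illustration, using hypothetical device names and made-up traffic figures rather than any particular product, of the general idea: flag an IoT device whose outbound request rate spikes far above its own baseline, the kind of anomaly a compromised Thing participating in a DDoS attack would produce.

```python
# Minimal illustrative sketch (hypothetical devices and figures): flag IoT
# devices whose outbound request rate far exceeds their own baseline.

from statistics import mean, pstdev

# Requests-per-minute samples per device over a baseline window (hypothetical).
baseline = {
    "lobby-thermometer": [2, 3, 2, 4, 3],
    "breakroom-coffee-maker": [1, 1, 2, 1, 1],
}

# Most recent observed rate per device (hypothetical).
current = {"lobby-thermometer": 3, "breakroom-coffee-maker": 950}

def is_anomalous(history, observed, sigma=4.0, floor=10):
    """Flag a device whose current rate exceeds its baseline mean by
    `sigma` standard deviations (with a small absolute floor)."""
    mu, sd = mean(history), pstdev(history)
    return observed > max(mu + sigma * sd, floor)

for device, rate in current.items():
    if is_anomalous(baseline[device], rate):
        print(f"ALERT: {device} outbound rate {rate}/min far exceeds baseline")
```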

Regardless of the attack vector, incident response practices are universal, and Beckage’s Incident Response Team can help prepare your team for IoT and other attacks.

What potential solutions are available to mitigate this risk?

Companies looking to incorporate 5G should partner with experienced tech counsel who can assist by reviewing contracts, conducting risk assessments, and evaluating and updating incident response plans and procedures to account for any additional risks associated with 5G.

In addition, there are already some attempts at governmental solutions.  In March 2020, President Trump issued a National Strategy to Secure 5G, which requires, in relevant part, that the United States identify cybersecurity risks in 5G.

The Cybersecurity & Infrastructure Security Agency (CISA) has also issued documents relating to the security of 5G.  Similarly, we are seeing a push for international standards, and certain untrusted companies have had their products banned from use.  The federal government is using regulations to limit the adoption of equipment that may contain vulnerabilities.

So, what is the solution?  The same as always.  Innovation.  Businesses are encouraged to develop trusted solutions and innovation in this space.  Advanced cybersecurity monitoring and protection by design will continue to be needed.

The Beckage Team of lawyers, who are also technologists, is well-versed in new and emerging technologies and works with clients to facilitate innovation through the use of IP protections.  We also assist companies in the implementation of new technologies, like 5G, taking into consideration the cybersecurity, data privacy, and regulatory obstacles associated with their use.  From patent acquisition to policy drafting and review, Beckage attorneys are here to help your company capitalize on innovation.

*Attorney Advertising. Prior results do not guarantee future outcomes. 

Subscribe to our Newsletter

Accountability and the Use of Artificial Intelligence

As artificial intelligence (“AI”) and automated decision-making systems make their way into every corner of society, from businesses and schools to government agencies, concerns about responsible use and accountability are on the rise.

The United States has always been at the forefront of technological innovation, and our government policies have helped us remain there.  To that end, on February 11, 2019, President Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence (No. 13,859).  See Exec. Order No. 13,859, 3 C.F.R. 3967.  As part of this Executive Order, the “American AI Initiative” was launched with five guiding principles:

  1. Driving technological breakthroughs; 
  2. Driving the development of appropriate technical standards; 
  3. Training workers with the skills to develop and apply AI technologies; 
  4. Protecting American values, including civil liberties and privacy, and fostering public trust and confidence in AI technologies; and
  5.  Protecting U.S. technological advantages in AI, while promoting an international environment that supports innovation. Id. at § 1. 

Finally, the Executive Order tasked the National Institute of Standards and Technology (“NIST”) of the U.S. Department of Commerce with creating a plan for the development of technical standards to support reliable, robust, and trustworthy AI systems.  Id. at § 6(d). To that end, NIST released its Plan for Federal Engagement in Developing Technical Standards in August 2019.  See Nat’l Inst. of Standards & Tech., U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (2019).

While excitement over the use of AI was brewing in the executive branch, the legislative branch was focused on accountability: on April 10, 2019, the Algorithmic Accountability Act (“AAA”) was introduced in Congress.  See Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).  The AAA covered businesses that (a rough coverage check is sketched just after this list):

  1. Made more than $50,000,000 per year;
  2. Held data for greater than 1,000,000 customers; or
  3. Acted as a data broker to buy and sell personal information.  Id. at § 2(5). 
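The sketch below is an illustrative, hypothetical helper rather than a restatement of the statutory text; it simply shows how the three alternative thresholds above combine, since meeting any one of them would have brought a business within the AAA.

```python
# Illustrative sketch only: the proposed AAA's coverage thresholds as
# summarized above.  Names and figures are for illustration, not legal advice.

def covered_by_aaa(annual_revenue: float,
                   customer_records_held: int,
                   is_data_broker: bool) -> bool:
    """A business is covered if it meets any one of the three thresholds."""
    return (annual_revenue > 50_000_000
            or customer_records_held > 1_000_000
            or is_data_broker)

# Example: a smaller company holding data on two million customers is covered.
print(covered_by_aaa(10_000_000, 2_000_000, False))  # True
```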

The AAA would have required businesses to conduct “impact assessments” on their “high-risk” automated decision systems in order to evaluate the impacts of the system’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security”.  Id. at §§ 2(2) and 3(b).  These impact assessments would have had to be performed “in consultation with external third parties, including independent auditors and independent technology experts”.  Id. at § 3(b)(1)(C).  Following an impact assessment, the AAA would have required that businesses reasonably address the results of the assessment in a timely manner.  Id. at § 3(b)(1)(D).

The federal government was not the only one concerned about the use of AI in business: on May 20, 2019, the New Jersey Algorithmic Accountability Act (“NJ AAA”) was introduced in the New Jersey General Assembly.  The NJ AAA was very similar to the AAA in that it would have required businesses in the state to conduct impact assessments on “high-risk” automated decisions.  See New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).  These “automated decision system impact assessments” would have required an evaluation of the system’s development, “including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security”, as well as a cost-benefit analysis of the AI in light of its purpose.  Id. at § 2.  The NJ AAA would also have required that businesses work with independent third parties, record any bias or threat to the security of consumers’ personally identifiable information discovered through the impact assessments, and provide any other information required by the New Jersey Director of the Division of Consumer Affairs in the New Jersey Department of Law and Public Safety.  Id.

While the aforementioned legislation appears to have stalled, we nevertheless anticipate that both federal and state legislators will once again take up the task of both encouraging and regulating the use of AI in business as the COVID-19 pandemic subsides.  Our team at Beckage includes attorneys who are focused on technology, data security, and privacy and who have the experience to advise your business on best practices for the adoption of AI and automated decision-making systems.

*Attorney Advertising. Prior results do not guarantee future outcomes. 

Subscribe to our Newsletter

Regulating Online Content – The Balance Between Free Speech and Free-For-All

This year kicked off with an explosive culmination of the ongoing tensions between free speech and social media, with Twitter bans, lawsuits, and enduring questions about who gets to regulate content on the internet, or whether it should be regulated at all. America is distinctly uncomfortable with the government stepping in to regulate speech. But public pressure has forced Big Tech to fill the void, spurring claims of unfair treatment and violations of First Amendment rights.

At the heart of the matter: unlike other countries that have laws against hate speech and fake news, America seems to have left it up to private companies to decide what content is acceptable, with little legal obligation to explain their choices. Compounding the problem is what some argue is the enormous power that a few big tech companies wield over our online infrastructure and channels of communication, leaving some to wonder whether service providers like Facebook should really be treated more like utilities, with government regulation to match.

Are there restrictions or laws regulating online content?

In Reno v. American Civil Liberties Union, the U.S. Supreme Court declared speech on the Internet equally worthy of the First Amendment’s historical protections. That means pornography, violent films, and explicit racism are all fair game on social media in the eyes of the law. The government deems only very narrow categories of speech criminal, such as “true threats,” language that is explicitly intended to make an individual or group fear for their life or safety. It is interesting to note, though, that arguing a politician should be shot would not necessarily meet the criteria for incitement or a true threat.

Of late, America has held tightly to an interpretation of the First Amendment that protects the free marketplace of ideas, even when it comes at a cost. Landmark cases like Brandenburg v. Ohio, which protected the speech of a Ku Klux Klan leader, have solidified our particularly high bar for punishing inflammatory speech.

But America has also supported the rights of private companies to decide what kind of speech is appropriate in their venues and, by extension, their virtual public squares. Unlike most of the world, where ISPs are subject to state mandates, content regulation in the United States mostly occurs at the private or voluntary level. Social media companies are allowed to decide what their user policies are and are expected to self-regulate, creating internal speech policies that, in theory, protect against unfair censorship.

Beyond the social media companies themselves, the regulators and legal recourse that do exist present their own set of problems. ICANN, the non-profit that controls contracts with internet registries (.com, .org, .info, etc.) and registrars (companies that sell domain names), has immense power over who gets to claim a domain name, and ICANN decisions are not subject to speech claims based on the First Amendment. The Digital Millennium Copyright Act (DMCA), designed to offer anti-piracy protections, is often used as a tool of intimidation or as a means for companies to keep tight control over how consumers use their copyrighted works, stifling free speech in the process. Apple, for example, tried to use the DMCA in 2009 to shut down pages on the online forum BluWiki where members were discussing how to sync music playlists between iPods and iPhones without having to use iTunes. John Deere refuses to unlock its proprietary tractor software to let farm owners repair their own vehicles, leaving tractor owners in fear of DMCA lawsuits if they try to crack the software protections themselves.

The Growing Pressure to Regulate Content

In the absence of legal pressure, public opinion seems to be the real driver of online content regulation. It was a tipping point of public outrage that finally pushed big tech to ban the president and Parler. Apple pulled Tumblr from the App Store in 2018 because it was failing to screen out child sex abuse material, but only after multiple public complaints. After decades of proudly promoting free speech, regardless of the consequences, external pressures are now forcing companies like Facebook to police their domains, using legions of reviewers to flag harmful content.

While the world grapples with how to manage online speech, it is clear that businesses will continue to face a variety of legal, social, and moral pressures regarding the content they provide or facilitate, and they must be prepared to monitor and account for what goes on in their virtual public spaces. Companies that allow the posting of content (words, photos, videos) have a slew of laws to consider, including free speech rights and controls. Companies should work with sophisticated and experienced tech legal counsel, like Beckage, to address these issues.

Subscribe to our newsletter.

*Attorney Advertising.  Prior results do not guarantee future outcomes.
