Disinformation is the deliberate spreading of false information about individuals or businesses to influence public perception. Computer-manipulated media, known as deep fakes, heighten the danger of these influenced perceptions. Deep fakes can be photos, videos, audio, or text manipulated by artificial intelligence (AI) to portray known persons acting or speaking in an embarrassing or incriminating way. As deep fakes become more believable and easier to produce, disinformation is spreading at an alarming rate. Risks that arise with disinformation include:
· Damage to Reputation
Disinformation campaigns target companies of all sizes with rumors, exaggerations, and lies that harm a business's reputation for economic strategy and gain. Remedying reputational damage may require large sums of money, time, and other resources to prove the media was forged.
· Blackmail and Harassment
Photos, audio, and text manipulated by AI can be used to embarrass or extort business leaders, politicians, or public figures through the media.
· Social Engineering and Fraud
Deep fakes can be used to impersonate corporate executives and facilitate fraudulent wire transfers. These tactics are a new variation of Business E-mail Compromise (BEC), traditionally defined as an impersonator gaining access to an employee's or business associate's email account with the intent to trick companies, employees, or partners into sending money to the infiltrator.
· Credential Theft and Cybersecurity Attacks
Hackers can also use sophisticated impersonation and social engineering to obtain information technology credentials from unknowing employees. After gaining access, a hacker can steal company data and personally identifiable information or infect the company's systems with malware or ransomware.
· Fraudulent Insurance Claims
Insurance companies rely on digital graphics to settle claims, but photographs are becoming less reliable as evidence because they are easy to manipulate with AI. Insurance companies will need to modify policies, training, practices, and compliance programs to mitigate risk and avoid fraud.
· Market Manipulation
Another way scammers seek to profit from disinformation is through fake news reports and social media schemes that use phony text and graphics to move financial markets. Traders who rely on algorithms driven by social posts and headlines may find themselves prey to these schemes. As realistic but manipulated video and audio become more accessible, such disinformation will become substantially more believable and more difficult to correct.
· Falsified Court Evidence
Deep fakes also pose a threat to the authenticity of media evidence presented in court. If falsified video and audio files are entered as evidence, they have the potential to mislead jurors and impact case outcomes. Moving forward, courts will need training to scrutinize potentially manipulated media.
· Cybersecurity Insurance
Cybersecurity insurance helps cover businesses from financial ruin but has not historically covered damages due to disinformation. Private brands, businesses, and corporations should consider supplementing their current insurance policies to address disinformation to help protect themselves from risk.
There are legal avenues that can be pursued in response to disinformation. Deep fakes that falsely depict individuals in a demeaning or embarrassing way may give rise to claims for defamation, trade libel, false light, violation of the right of publicity, or intentional infliction of emotional distress if the deep fake contains the image, voice, or likeness of a public figure.
Apart from understanding the risks associated with disinformation, companies can work to protect themselves from disinformation and deep fakes by:
1. Engaging in social listening to understand how a company’s brand is viewed by the public.
2. Assessing the risks associated with the business's practices.
3. Registering the business trademark to have the protection of federal laws.
4. Having an effective incident response plan in the event of disinformation, deep fakes, or data breach to mitigate costs and prevent further loss or damage.
5. Communicating with the social media platforms on which disinformation is being spread.
6. Speaking directly to the public, the media, and their customers via social media or other means.
7. Filing a lawsuit if the business is being defamed or the market is being manipulated.
What To Do When Facing Disinformation
If a business is facing disinformation, sophisticated tech lawyers can assist in determining its rights and identifying technological solutions to mitigate harm. Businesses are not defenseless in the face of disinformation and deep fakes, but they should expand their protective measures to mitigate the associated risks.
Beckage is a team of skilled technology attorneys who can help you protect your company from cyber attacks and from defamation caused by disinformation and deep fakes. Our team of certified privacy professionals and lawyers can help you navigate the evolving legal landscape of disinformation.
*Attorney Advertising. Prior results do not guarantee similar outcomes.*