Techdirt – January 23, 2024

Was The NO AI FRAUD Act Written By A Fraudulent AI? Because Whoever Wrote It Is Hallucinating

A couple of weeks ago, a friend sent me Rep. Maria Elvira Salazar’s and Rep. Madeleine Dean’s proposed NO AI FRAUD Act, which purports to “protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries.”

Now, there are legitimate concerns about uses of generative AI tools to create fake images, videos, audio, etc., but most of the fears to date have been pretty overblown. As we noted a few years back, it seems that most of those concerns can be handled directly. It's possible that, with these tools getting better and more accessible all the time, things could change, but that's no reason to rush through poorly thought-out laws that take a sledgehammer to some pretty fundamental principles.

While the bill's framing suggests it's focused on generative AI, it's really a giant, cumbersome federal publicity rights law. For nearly 15 years we've been talking about the problems with publicity rights, which have basically created a newish, chaotic right that people have been abusing left and right to silence speech. The laws are all state-based (currently) and thus vary by state, but there's been a concerted effort to lift publicity rights up to the federal level and to enshrine them in the fictional bucket of "intellectual property" alongside copyright, patents, and trademarks.

But that’s not the point of publicity rights. They were initially designed for a pretty straightforward and reasonable purpose: to stop false claims of endorsement. That is, you can’t put Famous Hollywood Actor in your ads looking like they endorse your product unless Famous Hollywood Actor has agreed to it.

That’s reasonable.

But where it goes off the rails is when some people use those laws to silence things that have nothing to do with false or misleading endorsements in commercial ads. You have celebrities suing, saying you can't tweet pictures of them, for example. Or the owner of a horse demanding a cut of a prize-winning photo because the horse was in it. Or a former Central American strongman suing a video game maker for including a character based on him.

The list goes on and on. There are all sorts of cases where it makes perfect sense to be able to use someone's likeness, even without their approval. We see it all the time in TV shows and movies depicting real-life events; the makers of those shouldn't have to license everyone's likeness. Or you see it in parodies on TV and online: Saturday Night Live regularly parodies real people, and it should never need permission and a license to do so.

Unfortunately, the NO AI FRAUD Act doesn't seem to understand any of that. It creates a very, very broad federal publicity right (which it officially deems an "intellectual property" right, thereby exempting it entirely from Section 230) with no consideration of just how broad and problematic the law is. It also extends publicity rights past death, something most state publicity rights laws don't do, and the states that do have found it to be a total mess.

As Elizabeth Nolan Brown at Reason notes, this law is just ridiculously broad:

So just how broad is this bill? For starters, it applies to the voices and depictions of all human beings “living or dead.” And it defines digital depiction as any “replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or part using digital technology.” Likeness means any “actual or simulated image… regardless of the means of creation, that is readily identifiable as the individual.” Digital voice replica is defined as any “audio rendering that is created or altered in whole or part using digital technology and is fixed in a sound recording or audiovisual work which includes replications, imitations, or approximations of an individual that the individual did not actually perform.” This includes “the actual voice or a simulation of the voice of an individual, whether recorded or generated by computer, artificial intelligence, algorithm, or other digital means, technology, service, or device.”

These definitions go way beyond using AI to create a fraudulent ad endorsement or musical recording.

They’re broad enough to include reenactments in a true-crime show, a parody TikTok account, or depictions of a historical figure in a movie.

They’re broad enough to include sketch-comedy skits, political cartoons, or those Dark Brandon memes.

They’re broad enough to encompass you using your phone to record an impression of President Joe Biden and posting this online, or a cartoon like South Park or Family Guy including a depiction of a celebrity.

And it doesn't matter if the intent is not to trick anyone. The bill says that it's no defense to inform audiences that a depiction "was unauthorized or that the individual rights owner did not participate in the creation, development, distribution, or dissemination of the unauthorized digital depiction, digital voice replica, or personalized cloning service."

And because the law also covers intermediaries (and gets itself exempted from Section 230 by declaring the rights to be “intellectual property”), it means that people will be able to sue Meta or YouTube or whoever for merely hosting such content.

EFF has put out a warning about how terrible the bill is.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad amount of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involved recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone—including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of "likenesses" this law covers. Keep in mind that many of these services won't be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Now, I’ve seen some defenders of the bill push back on the concerns people have raised by saying that it won’t be that bad because the bill contains a weird “First Amendment Defense” section that says you can use the 1st Amendment as a defense against claims in the bill.

But… the 1st Amendment can always be used in defense against an attempt to suppress speech, so there's no reason to put this clause into the bill. That is, unless it's actually designed to limit 1st Amendment defenses. And that's exactly what's happening here. Again, the EFF explains:

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment—that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

In other words, it’s a “First Amendment*” defense, where the First Amendment doesn’t really mean the First Amendment.

It really feels like this bill was written by people who got completely sucked into a moral panic about AI-generated deep fakes, but with absolutely no understanding or knowledge of free speech or publicity rights issues.

