The Copyright Office’s latest report is a real slap in the face for Michael Moore, and for everyone who has ever posted a painstakingly executed remix, investigative story, or delicious satire and watched it disappear under Content ID or a takedown request.
Is the deal still working?
Back in the dark ages of 1998, copyright industries and tech companies struck a deal over this weird, new “Information Superhighway” thing: let’s make it safe for tech to develop without screwing copyright holders. Under the Digital Millennium Copyright Act, budding Internet service providers wouldn’t be held liable for copyright-infringing material posted to their sites by users, but they would have to quickly take down any infringing material that rightsholders found on those sites. Users whose material was wrongly removed could respond with a counter-takedown request.
Censored by bot.
Over time, more and more of the takedown process was automated, and Google and others even built pre-emptive systems like Content ID to detect and take down material before rightsholders complained.
Many users who rely on fair use (the right to re-use copyrighted material, free of charge and in appropriate amounts, for new purposes) have found that those automated processes can’t tell what’s infringing from what’s fair use. Their work gets censored by bots. Many users are also afraid of, or simply unclear about, the consequences of filing a counter-takedown.
Censored by design.
Users have also found that some copyright holders weaponize the DMCA. They use takedowns not because the work violates copyright, but because they just don’t like what the user is saying.
In the Copyright Office’s eyes, this doesn’t seem like much of a problem. Indeed, the report argues that even having to consider fair use before issuing a takedown is too much to ask of a rightsholder, even though the court held otherwise in Lenz v. Universal. (The Copyright Office simply disagrees with the judges; in fact, it doesn’t seem to like court decisions that go against rightsholders generally.)
Michael Moore cries foul.
Michael Moore, executive producer of Jeff Gibbs’ film Planet of the Humans, is the latest to taste DMCA-sponsored private censorship. The controversial film trash-talks “green energy” and has raised hackles across the political spectrum. But one person, Toby Smith, a photographer whose work appears for four unlicensed seconds in the film (Gibbs claims fair use, although that claim hasn’t been tested), filed a takedown notice, and the film has now been taken down.
Smith also called the film “bullshit,” and argued that the re-use of his material was not fair use because it distorted the message of his own film. He used a takedown rather than demanding payment from the filmmakers, he told The Guardian, because “I don’t agree with [the documentary’s] message and I don’t like the misleading use of facts in its narrative.” Openly admitting that you’re using copyright to censor someone’s message is bold.
How common is censorship by copyright?
Moore is loudly protesting the takedown as censorship. (He issued a counter-takedown and made the film free on Vimeo; it has also been mirrored by others on YouTube.)
But he’s not unusual. Stories of censorship-by-copyright are near-routine. The Wall Street Journal has just reported finding hundreds of stories that Google removed from its newsfeed in response to dubious copyright complaints. The journalists found that
“individuals or companies, often using apparently fake identities, caused the Alphabet Inc. unit to remove links to unfavorable articles and blog posts that alleged wrongdoing by convicted criminals, foreign officials and businesspeople in the U.S. and abroad. Google took them down in response to copyright complaints, many of which appear to be bogus.”
And then there’s the sheer stupidity of algorithms that can’t tell what’s fair use. Just last week, a law professor, Blake Reid, made a wry, copyright-commenting video on how to make a ketchup cake, using Apple royalty-free music in the background. He got slapped with a takedown because the algorithms had matched his video to other works… that had incorporated the same royalty-free music.
And think about the nightmare that classical musicians endure daily. They play works in the public domain, although particular performances of those works may be copyrighted. Algorithms can’t tell the difference between one performance and another, and it can be hard to convince the automated counter-takedown process to take them seriously. Meanwhile, under Covid-19 restrictions, some of those musicians depend on revenue from their performances. Ouch.
Good law, bad law.
The law of fair use is robust. Since 1990, the notion of “transformativeness”—having a different reason to use the material, in order to make something new—has become a central concept, as Peter Jaszi and I show in Reclaiming Fair Use. Fair use cases are rare but pretty consistent in the way they are decided.
Most recently, a Netflix film, Burlesque: Heart of the Glitter Tribe, incorporated a few seconds of a children’s song that had been captured in a burlesque act. The rightsholders actually sued (an extremely rare event), and the judge wasted no time telling them they had wasted their time and money. He didn’t even let the case go to trial; he dismissed it “with prejudice,” meaning the plaintiffs couldn’t fix their complaint and make another pass at the court.
But what good is a good law if another law undermines it? The DMCA’s Section 512 currently invites private censorship, something that should be unconstitutional. It would mean that copyright, which is grounded in the Constitution, was being used to violate the First Amendment, which says government can make no law abridging freedom of speech.
This is the part of the DMCA that the Copyright Office has decided does not go far enough to protect rightsholders, and bends too far toward tech and Internet companies. The interests of users, apparently, are not relevant, even though for them censorship is a daily fact of life on an Internet patrolled by copyright-industry bots.