Read the Code of Best Practices in Fair Use for Software Preservation
The basic outlines of fair use law are pretty well settled. It has been 25 years since the U.S. Supreme Court did a reset on fair use jurisprudence in Campbell v. Acuff-Rose, and almost 15 years since the launch of the Best Practices in Fair Use project. “Transformativeness” rules, and the courts have made it straightforward to distinguish uses that are transformative from those that are merely substitutional. It’s equally clear that uses that put copyrighted material in new contexts can qualify along with those that involve modifying it. Likewise, fair use can apply to activities that don’t themselves involve creating new copyrightable content. Notably, the approaches taken by the various federal Circuit Courts of Appeals have converged substantially, a state of affairs well represented in Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015), where the Second Circuit drew on precedents from across the circuits to seal its analysis.
All this, however, leaves two questions open about the best practices project and its premises:
- Is it the case (as the best practices posit) that the law of fair use is predictable enough so that well-informed practitioners (and their lawyers) can make generally reliable forecasts about how it would apply to their own work?
- Is there any reason to think that guidance documents rooted in professional consensus have a special role to play in guiding those forecasts?
Those questions are addressed below.
The truism that fair use is a contextual, case-by-case inquiry does not automatically translate into the conclusion that its applications are unpredictable or unreliable. Indeed, many institutions that rely on fair use have internal protocols that testify to the stability of the doctrine. Here, for example, is a memo by the U.S. Patent and Trademark Office explaining why its own copying of scientific articles is legal under fair use: it is undertaken for “a new and different purpose than for which [they were] created.” Because most users rely on fair use in a limited range of contexts, the analysis that applies to one use case will tend to carry over to other similar ones.
More broadly, scholarly evidence for the predictability and reliability of fair use continues to accumulate. For instance:
- In 2009, Professor Pamela Samuelson of UC-Berkeley Law School wrote a magisterial article classifying and reanalyzing a wide range of recent cases, demonstrating that contextually similar fair use cases tend to be resolved in similar ways.
- Along different but complementary lines, a 2011 law review article by UCLA’s Neal Netanel describes current judicial decision-making on fair use. His point is that when the purpose of a new use is transformative, and the extent of the use is proportionate to that purpose, fair use almost always triumphs.
- Another indispensable article, from Matthew Sag of Loyola-Chicago Law School in 2012, takes on the question of predictability as an empirical question, and concludes that “based on the available evidence, the fair use doctrine is more rational and consistent than is commonly assumed.”
- Of possible interest is a 2012 article of mine, arguing that much unlicensed use of copyrighted content in and around education should, in fact, be considered transformative, and that educators should embrace this way of looking at what they do.
- Most recently, in 2019, Professor Clark Asay undertook another study of the fair use case law, focusing on the courts’ use of the transformative use concept and how that notion drives the entire fair use calculus. He concludes that “parties that win the transformative use question win the overall fair use question at extremely high rates.”
In short, anyone who is interested in putting fair use to work can feel a high level of confidence that a conscientious, up-front fair use analysis will hold up if subjected to pressure, especially if it is grounded in a strong claim of transformativeness.
That said, why take an approach to determining fair use that is rooted in professional consensus, rather than (for example) negotiating standards with rights holders or consulting legal experts? The shortcomings of the former approach, which has given rise to various ill-fated fair use “guidelines” over the years, are chronicled in a 2001 law review article by legal scholar Kenneth Crews, documenting how negotiated guidelines, co-designed by rights holders with no stake in the mission of higher education or libraries, have repeatedly disappointed and frustrated educators and librarians.
The affirmative case for community-based fair use standards is made by history. At the heart of this approach is the record of almost 175 years of fair use decisions in the U.S. courts, showing that courts are influenced by evidence of professional consensus within communities of practice about what constitutes fair use. A good resource on this point, and on the growing use of fair use best practices codes in the United States, is a short book entitled Reclaiming Fair Use: How to Put Balance Back in Copyright Law (Aufderheide & Jaszi, 2d edition 2017, University of Chicago Press). Complementary material also is available on our fair use homepage.
For further analysis of the trends in fair use, demonstrating the vitality of the best-practices approach, we recommend these materials:
- The foundational article published in 2004 by Professor Michael Madison, entitled “A Pattern-Oriented Approach to Fair Use,” shows that judges making fair use decisions tend to bless socially beneficial patterns and practices of use, and that consequently communities that can tell a compelling story about their practices have a better chance of winning favorable fair use decisions. The fundamental insight of this article is the basis for all of the Codes of Best Practice, which endeavor to find and articulate the rationale for socially beneficial fair uses in a series of practice communities. Madison briefly revisited the topic, taking into account the experience to date with Best Practices documents, in 2012.
- A Note from the 2008-2009 volume of the Harvard Law Review, the most widely respected general legal periodical, praising the best practices approach as a moderate, practical way of securing the benefits of the doctrine to all of its potential beneficiaries.
- A short 2012 article introducing the benefits of codes of Best Practices by expert copyright litigators Jennifer M. Urban (of the Berkeley Law School Samuelson Clinic) and Anthony Falzone (now at Pinterest), originally published in a special issue of The Journal of the Copyright Society of the U.S.A., devoted to the fair use doctrine today.
- From 2013, a scholarly article on the “orphans works” problem by David R. Hansen, Kathryn Hashimoto, Gwen Hinze, Pamela Samuelson and Jennifer M. Urban, discussing the ways in which Best Practices documents have encouraged communities to establish their fair use rights.
- Published in 2014, a paper by University of Oregon communications scholar Jesse Abdenour, arguing that the Statement of Best Practices in Fair Use for Documentary Filmmakers (2005) has made a contribution to shaping fair use jurisprudence in the field.
- A fascinating argument by Niva Elkin-Koren and Orit Fischman-Afori, from 2015, explaining, in terms of the philosophy of Legal Pragmatism, the actual and potential influence of Best Practices documents as legal sources.
As Professor Madison notes, there also has been some criticism of Best Practices, with “some scholars express[ing] concern that the Statements tend to lock in backward-looking, customary interpretations of law and practice.” Happily, at least to date, this fear does not seem to have materialized: members of practice communities with Best Practices documents in place continue to think creatively about fair use, and to take account of the doctrine’s flexible, dynamic character.