Is the DMCA an Effective Way to Take Down Infringing Content?

In late January of this year, the U.S. Copyright Office launched a comment submission process, seeking both to survey how the "safe harbor" provisions in Section 512 of the Digital Millennium Copyright Act (DMCA) have been received by the community and to gather public input on proposed exemptions to the DMCA's prohibition against circumventing technological measures that control access to copyrighted works. The exemption rulemaking recurs every three years, and comment submission is open until March 21, 2016. That all said, D.C. is a city of Ents, so I suspect a material overhaul of the DMCA is a far-off outcome.

Section 512 provides, under certain circumstances, a "safe harbor" for "service providers": web hosting services and web publishers, including those (like YouTube, for instance) that allow third-party users to upload potentially infringing content. It shields ISPs from liability for copyright infringement that would otherwise fall upon them, so long as they do not have "actual knowledge" of infringing material or activity. To elaborate, the DMCA was a much-needed amendment to the Copyright Act; it embodied a congressional effort to bring copyright law up to date, in recognition that enforcing copyright in its then-current form had a real potential to chill technological advancement and innovation. Thus, the DMCA acknowledged through its provisions that, as a matter of policy, practices such as web caching are beneficial for society despite their implications for copyright, since they allow the Web to operate efficiently. The Act also recognized that ISPs are integral to both the growth of the Internet and the exchange of ideas and, as such, should not bear the brunt of the illegal actions of third-party users.

Over the years, Section 512 has drawn mixed reviews. While copyright owners and service providers will in all likelihood continue to rely on the "safe harbor" and "take-down" provisions as a popular way to address and remove infringing content online, those provisions will in practice prove increasingly brittle in the face of the exponential proliferation of Big Data, file sharing, IoT-driven products, contextual technologies, and the like, which together have already turned mass copyright compliance, along with its normative and doctrinal underpinnings, on its head. I believe this is just the tip of the iceberg. Yesterday, I had the privilege of attending a talk given by Michael Stewart, CEO of Lucid, at the Robert S. Strauss Center for International Security and Law, and I was struck by what he stated as a simple, poignant truth about today's humanity in the context of tech: "We're in a data explosion [whose] evolutionary curve cannot be measured [and] we are starting to lose ourselves in the midst of that."

Indeed, in a Federal Register notice about the comment submissions, the Copyright Office opined: 

"Today, copyright owners send takedown notices requesting service providers to remove and disable access to hundreds of millions of instances of alleged infringement each year. The number of removal requests sent to service providers has increased dramatically since the enactment of section 512

"While Congress clearly understood that it would be essential to address online infringement as the internet continued to grow, it was likely difficult to anticipate the online world as we now know it . . ."

Consequently, website operators contend with an unwieldy volume of take-down notices, while copyright owners find the process unpredictable and frustrating.

We are on the heels of a technological revolution that has been accelerated by the accrual of knowledge, innovation, and open-source philosophies. At this juncture, I wonder where all of that ends up leading us. Let's unspool a bit: unless and until something changes, I think the structural integrity and fortitude of the DMCA's "safe harbor" provisions will eventually collapse under the weight of emerging technologies. Interestingly, as legal theorists Samir Chopra and Laurence F. White discuss in their work, A Legal Theory for Autonomous Artificial Agents:

“[S]uch protections may not be enough to keep ISPs from becoming part of the law-enforcement superstructure. The DMCA requires ‘standard technical measures,’ supposedly ‘protective of copyright,’ be implemented in order to qualify for safe harbors. This implies safe harbor protections will decline hand in hand with advances in these measures, for service providers will be required to continuously upgrade such technologies . . . [I]mplementing such technology thoroughly will, ironically enough, attribute to ISPs actual knowledge of infringing practices, thus preventing their accessing the safe harbor required if they fail to make ‘expeditious’ action to remove access to offending material.”

So, while the DMCA states that service providers are obliged to implement "standard technical measures" to identify and protect copyrighted works, it is unclear what will transpire when deep content inspection becomes an accessible and commonplace tool for ISPs to deploy. Under the DMCA's "red flag" standard, once the infringing nature of a site or file is apparent even from a brief and casual viewing, continuing to link to or host that material is not compatible with the safe harbor. Yet there is an increasing desire on the part of ISPs to turn to deep content inspection for digital rights management. Thus, with a somewhat snaking gait, those ISPs may eventually end up risking their DMCA safe-harbor qualification precisely because of the increasingly sophisticated technologies they deploy.
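To make that tension concrete, here is a deliberately minimal sketch, in Python, of the sort of automated matching that content inspection enables. The fingerprint registry and function names are invented for illustration; real systems rely on perceptual or acoustic fingerprinting (which survives re-encoding) rather than exact hashes, but the legal irony is the same: the moment the check fires, the provider arguably has "actual knowledge" of the material and must act expeditiously to keep its safe harbor.

```python
import hashlib

# Hypothetical registry of SHA-256 digests for known copyrighted works.
# A real "standard technical measure" would more likely use perceptual or
# acoustic fingerprints, since trivial re-encoding defeats an exact hash match.
KNOWN_WORK_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of uploaded content."""
    return hashlib.sha256(data).hexdigest()


def flag_if_known_work(upload: bytes) -> bool:
    """Return True if the upload matches a registered fingerprint.

    The irony discussed above: once this check returns True, the provider
    arguably has actual knowledge of the material and must remove it
    expeditiously to preserve the Section 512 safe harbor.
    """
    return fingerprint(upload) in KNOWN_WORK_FINGERPRINTS


if __name__ == "__main__":
    sample = b"example uploaded file bytes"
    print("Matches a registered work:", flag_if_known_work(sample))
```

The design point, not the code, is what matters: the more thoroughly a provider scans, the harder it becomes to claim ignorance of what the scan finds.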

Perhaps, then, the Copyright Office is cognizant of such a change in tide (a technological, social, and legal one), and, to its credit, its decision to launch a comment submission process may reflect that cognizance as well as a first step toward shaping new, meaningful policy and law that better align with modern tech ( . . . which, by analogy, would be much like what the DMCA was to the Copyright Act).