Google will soon ban fake news sites from using its ad network

"Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher's content, or the primary purpose of the web property," a Google spokesperson said in a statement given to Reuters. This policy includes fake news sites, the spokesperson confirmed. Google already prevents its AdSense program from being used by sites that promote violent videos and imagery, pornography, and hate speech.

Earlier today, someone asked me whether this development might pull Google out of the realm of CDA § 230(c) immunity, so I thought I'd address the issue here.

I doubt Google's closing of its advertising tools to "fake news" sites rises to the level of partial "creation or development" of content by an "information content provider" (which is not immune under CDA § 230(c)). If anything, the ban fits snugly within an (immune) "interactive computer service['s]" power to exercise traditional editorial functions* and "access software provider" functions** over third-party content. Plus, search engines in particular are afforded a great deal of leeway when it comes to CDA § 230(c) immunity.***

That said, perhaps Google's decision does not present a CDA § 230(c) issue at all, since the ban on "fake news" sites from using ad tools here occurs at the front-end rather than at some point after the third-party content has been submitted.

- - - 

* Decisions to publish, withdraw, postpone, or alter content. Zeran v. AOL, Inc., 129 F.3d 327, 330 (4th Cir. 1997); Batzel v. Smith, 333 F.3d 1018, 1031 n.18 (9th Cir. 2003).
** The term "access software provider" means a provider of software (including client or server software), or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content. 47 U.S.C. § 230(f)(4).
*** See Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1125 (9th Cir. 2003); see also Parker v. Google, Inc., 422 F. Supp. 2d 492, 501 (E.D. Pa. 2006), aff'd, 242 F. App'x 833, 838 (3d Cir. 2007); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1198 (N.D. Cal. 2009). 

The FCC just passed sweeping new rules to protect your online privacy

"Under the Federal Communications Commission’s new rules, consumers may forbid Internet providers from sharing sensitive personal information, such as app and browsing histories, mobile location data and other information generated while using the Internet . . .

Internet providers and Republican FCC commissioners complained that limiting the data collection of Internet providers gave an unfair advantage to other companies such as Google and Facebook that already make billions of dollars collecting data on users and selling it to advertisers."

New FCC regulations now require internet providers to obtain subscribers' consent before selling data about their online behavior to third-party marketers. It's unclear, though, whether the rules will deter ISPs from continuing to condition the provision of services on user assent in their terms of service or privacy policies. 

Similar to how the EU's General Data Protection Regulation ("GDPR") particularizes user consent as a basis for compliance, the FCC's new rules require ISPs to obtain affirmative "opt-in" consent from consumers to use and share "sensitive" information (e.g., precise geo-location, financial information, health information, children's information, SSNs, web browsing history, app usage, and the content of communications).

ISPs will still be allowed to use and share "non-sensitive" information, unless a customer opts out. And some categories of individually identifiable customer information are considered "non-sensitive," such as a user's email address. Still, the rules stipulate that the sharing and use of such information must be "consistent with consumer expectations." Interestingly, it seems here that the FCC is taking a leaf out of the European Commission's book, since the GDPR, too, requires data controllers to acquire valid user consent in a piecemeal manner, consistent with data subjects' reasonable expectations. To elaborate, the consent must directly refer to the specific purposes for which the data was initially collected. Should the purposes behind collection change, thereby straying beyond the contours of users' reasonable expectations, additional consent will be required. 
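To make the opt-in/opt-out split concrete, here is a toy sketch in Python. It is my own illustration of the distinction described above, not the FCC's actual rule text, and the category names are shorthand I made up:

```python
# Toy model of the FCC's two consent regimes for ISP data sharing.
# Category names are my own shorthand, not the rule's language.
SENSITIVE = {
    "geo-location", "financial", "health", "children",
    "ssn", "browsing-history", "app-usage", "communications-content",
}

def may_share(category: str, opted_in: bool = False, opted_out: bool = False) -> bool:
    """Return True if the ISP may use/share data in this category."""
    if category in SENSITIVE:
        # "Sensitive" data requires affirmative, prior opt-in consent.
        return opted_in
    # "Non-sensitive" data (e.g., an email address) is shareable by
    # default, unless the customer opts out.
    return not opted_out
```

Under this sketch, `may_share("email-address")` is True by default, while `may_share("geo-location")` stays False until the customer affirmatively opts in.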

PSA: check out TLDRLegal

I came across this website earlier today: TLDRLegal. It translates popular software licenses and terms of service (e.g., YouTube's) into plain, layperson-friendly English.

Plus, this site appears to be much more active than my usual go-to, TOS;DR.

Ownership and IP in the digital age.

Two legal scholars, Aaron Perzanowski and Chris Jay Hoofnagle, have created a fake e-commerce site called "Media Shop" as a means to study the behavior of online shoppers. They have published their findings in a paper called, "What We Buy When We Buy Now" (pdf). It's a solid read for anyone seeking to learn more about how traditional ownership rights transform in the digital context.

When you buy a book from Half Price Books, you have a pretty good intuition of your ownership rights with respect to that bound stack of pages. Once you've signed the receipt, it's yours to take home. Much like most things you own, it's your personal property, or, in Fourth Amendment terms, part of your personal "papers and effects." You can sell the book, lend it to a friend, or even light it on fire. Put simply, unless you're copying the book, you can do whatever you want with it sans the rights-holder's permission. 

Those commonly held intuitions about ownership begin to crumble when it comes to purchasing things in digital format. A book I buy and store on my Kindle is different from a physical copy in obvious ways: It won't gather dust on my bookshelf, nor will it have annotations scribbled in the margins. But what's not so obvious is the large chunk of ownership rights you're giving up when purchasing digital, as opposed to physical, copies of books online. (Of course, this is not limited to books; movies, games, music, and much more are implicated as well.) What's happening here is that your rights to those digital purchases are being continuously filtered through a maze of IP law. Moreover, your rights are significantly limited by the fine print in the Terms of Service that you agree to at the point of sale. More often than not, key provisions are couched in lengthy Terms of Service agreements, and as you might suspect, people generally don't take the time to read through those. (In fact, if it were your full-time job to read all your privacy policies--i.e., 8 hours/day--it would take you roughly 76 work days to complete the task.)

IoT and the Fourth Amendment: A Complicated Tango

I thought I'd share this lovely piece, The Internet of Things and the Fourth Amendment of Effects, as I trudge through the tail end of the spring semester--exams and deadlines looming large.

Professor Andrew Guthrie Ferguson succinctly writes about the far-reaching effects that contextual IoT devices and household products (e.g., "Smart" Barbie, Amazon's Kindle, Nest Learning Thermostat, among many more) might have on Fourth Amendment doctrine and jurisprudence. Specifically, his work seeks to answer two crucial questions in relation to America's modern understanding of the right to privacy in the context of high-tech surveillance capabilities:

  1. What is a Fourth Amendment "effect" in a world defined by interconnected networks like the Internet of Things?
  2. Assuming that a Fourth Amendment "effect" has a broader definition, including the data embedded in the object and the communication signals emanating from the device, what expectation of security should attach to these effects? 

He writes, "[t]he reemergence of traditional terms of art such as 'effects' adds new urgency to a redefinition of the terms consistent with modern technology (and even modern physics)." We all know IoT devices continuously and seamlessly record and embed an unfathomable amount of data. This is especially true when considering an interconnected network of, say, household IoT devices (which are increasingly popular in the marketplace). What people might not realize is that the insertion of IoT networks into the proverbial "home" could upturn traditional American legal thought.

From the American point of view, the prime danger to private sovereignty has traditionally been that "the sanctity of [our] home[s]"--in the words of a leading nineteenth-century Supreme Court opinion on privacy--would be breached by government actors. Boyd v. United States, 116 U.S. 616, 630 (1886). To the American mind, what matters is sovereignty within one's own home; the right to privacy, at its metaphoric core, is a right to hide behind the walls of one's own home. Perhaps what is most important about this line of thought is that the symbolic "home" has heavily informed the "reasonable expectation of privacy," a key legal standard in American privacy law. See Vernonia School District 47J v. Acton, 515 U.S. 646 (1995). 

Now, given the increased use of household IoT products in U.S. homes, people are perhaps unintentionally opening themselves up to in-depth surveillance and undue intrusions on their privacy by the government. This, to me, is an unsavory development, since U.S. privacy law is largely sectoral. Ferguson artfully opines in his paper: 

IoT technology has the potential to generate an almost inescapable data web that monitors many aspects of one's life. From home appliances, to cars, to medical devices, the objects are continuously digitizing daily life. While society has recently been made aware of possible high-tech surveillance techniques involving cameras, drones, GPS tracking, and cell phone collection, it has not always envisioned the linking of disparate technologies on a very personal level. Knowing you called a certain number (cell data), drove to a certain house (drone/camera), and repeated that trip every week (GPS) pales in comparison to knowing those facts plus the time the bedroom light comes on in that house (through NEST systems), the elevated heartbeat in that bedroom (through health monitors), and the opening of a particular enchanted pill bottle (smart pill bottles) – all of which might provide a much better clue about the nature of your business at the house. Problems of aggregation and magnification heighten the potential personal invasion as a data rich environment creates a wider mosaic of life patterns. Police might no longer need to physically follow a suspect, when smart sensors allow them to do so virtually. 

Only time will tell whether virtual intrusions via IoT will constitute a "search" under the Fourth Amendment, or, relatedly, whether the legal standard for one's "reasonable expectation of privacy" will one day encompass the "digital exhaust" a layperson emits by using IoT products within her own home.

More fodder for your browser.

1. Terms of Service; Didn’t Read (TOS;DR) 

“‘I have read and agree to the Terms’ is the biggest lie on the web,” contend the privacy advocates who run TOS;DR. 

TOS;DR is a collaborative user-rights initiative that sifts through, breaks down, rates, and labels various websites' terms of use/service and privacy policies, which are often too cumbersome for the average person to read, yet equally important to understand. Your digital rights largely depend on what exactly is laid out in these policies.

Here's the good news: You can easily download the TOS;DR add-on for your browser of choice, so that when you visit a particular website (e.g., Facebook, Google, LinkedIn) a dropdown will appear that translates tangled legalese and lays out the good, bad, and neutral aspects of that site's terms of use/service and privacy policy in a user-friendly manner.


2. Ghostery (For Consumers)

Ghostery (For Consumers) provides a free browser extension that shows you, in list form, all the digital trackers on a particular website and lets you make informed decisions about which companies to trust and which ones to block. Ghostery has the largest database of companies that are tracking you—more than 2,000, in fact. (Full disclosure, though: I use Google Analytics.) 

I’ve had Ghostery for about a year(?) now, and I must say it’s been quite illuminating to see the dropdown that lists all the trackers various websites continuously employ to track consumer activity. In fact, I have found that you can roughly gauge a company’s overarching philosophy and attitude towards consumer digital rights just by looking at the number and type of trackers it uses. 

Here's a list of company websites that use a varying number of trackers, ordered from least number of trackers to most. 

  • Wikipedia: 0 
  • Reddit: 2
  • Amazon: 2
  • Tumblr: 3
  • LinkedIn (a company that has ruffled the feathers of the FTC in the past): 4
  • Wall Street Journal: 4
  • The New Yorker: 9
  • Buzzfeed: 12
  • The Economist: 15
  • New York Post: 16
  • Forbes: 17
  • The Huffington Post: 18
  • TMZ: 26

As you can imagine, Wikipedia and TMZ (0 vs. 26) are poles apart in how each exploits available tracking services.
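For what it's worth, the ranking above can be reproduced in a few lines of Python (the counts are just my one-off Ghostery observations and will vary by page and over time):

```python
# Tracker counts per site, as observed once via Ghostery's dropdown.
tracker_counts = {
    "Wikipedia": 0, "Reddit": 2, "Amazon": 2, "Tumblr": 3,
    "LinkedIn": 4, "Wall Street Journal": 4, "The New Yorker": 9,
    "Buzzfeed": 12, "The Economist": 15, "New York Post": 16,
    "Forbes": 17, "The Huffington Post": 18, "TMZ": 26,
}

# Sort ascending by count and compute the spread between the extremes.
ranked = sorted(tracker_counts.items(), key=lambda kv: kv[1])
spread = ranked[-1][1] - ranked[0][1]
print(f"Fewest: {ranked[0]}, most: {ranked[-1]}, spread: {spread}")
```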

3. uBlock Origin

uBlock Origin is a must-have browser extension, IMO. It’s easy on memory and CPU footprint, and yet can load and enforce thousands more filters than other popular blockers out there (e.g., AdBlock). I can attest to its usefulness; whenever I do turn it off, mostly out of curiosity, I immediately get the feeling that I have left a quiet mountain town and am now smack in the middle of Times Square.

Since installation, uBlock has blocked about 640,000 ads for me.

I hope you can see that the so-called “Privacy Bargain” often alluded to by folks in this space doesn’t seem like much of a bargain. 

The 5 Things Every Privacy Lawyer Needs to Know about the FTC: An Interview with Chris Hoofnagle

Privacy law expert Daniel Solove interviews Professor Chris Hoofnagle about his new book, Federal Trade Commission Privacy Law and Policy, which provides readers a wealth of information and scholarly discussion about the FTC in general, its rich and sordid history, and its many activities and efforts in the privacy space. Endorsed by the likes of Solove as a "must read," the book seems like a worthy addition to my (ever-expanding) to-read queue on Kindle. 

Below is an excerpt of the interview I found particularly interesting: 

"SOLOVE: What is the FTC’s biggest failure?

HOOFNAGLE: The FTC has not found a way to police data brokers. Data brokers both create new privacy problems and intensify existing ones by offering mechanisms to secretly identify consumers and to link their otherwise pseudonymous behavior. The market provides little incentive for data brokers to recognize individuals’ privacy interests.

The FTC has failed to police data brokers because internally, it struggles to articulate how data brokers’ systemic undermining of privacy rights creates marketplace harms. My book offers several approaches to dealing with this problem, drawing upon how the FTC overcame similar challenges in false advertising cases."

Stray thoughts & more on data brokers-->

For further information on the social implications of data brokers providing services for the government: 

On the commercial end: 

  • The biggest issue with data brokers' use of aggregate non-personal data, to my mind, is that certain PII can often be inferred from it: these brokers have such a wide breadth of information at their disposal that they can draw fairly accurate conclusions about that data. 

  • Data brokers are a cash cow in today's commercial climate: Companies pay top dollar to third-party data brokers, who in turn provide valuable information that allows those companies to target their advertisements and marketing campaigns with much more precision.

  • In fact, this is largely why you will see that many companies' privacy policies try to explain away the disclosure and sharing of customers' non-personal information in sometimes circuitous and sugarcoated terms.

SXSW's Privacy Policy, for example, states the following: 

"G. Disclosure of Aggregate Information
SXSW may provide to third parties non-personal information about you that does not allow you to be identified or contacted and that is combined with the non-personal information of other users ("Aggregate Information"). For example, we might inform third parties regarding the number of users of our Site and the activities they conduct while on our Site. We might also inform a company that performs services or that provides products and/or services to SXSW (that may or may not be a SXSW business partner or an advertiser on our site) that "50% of our users live in the USA" or that "85% of our users have purchased products and/or services which can be downloaded from SXSW's Site." Depending on the circumstances, we may or may not charge third parties for this Aggregate Information. We may not limit the third parties' use of the Aggregate Information" (emphasis added).

Of course, there's also Google's Privacy Policy, which states, "[w]e may share non-personally identifiable information publicly and with our partners – like publishers, advertisers or connected sites" (emphasis added). The term "partners" could mean a lot of things, especially for a company whose revenues exceed the GDP of several nations--Iceland, the Bahamas, Guatemala, just to name a few. 

Facebook is relatively more forthcoming about its use of non-personal information. Its Privacy Policy states, "[w]e work with third party companies who help us provide and improve our Services or who use advertising or related products, which makes it possible to operate our companies and provide free services to people around the world." 

Lastly, here's what LinkedIn's Privacy Policy says: "We may transfer your information and process it outside your country of residence, wherever LinkedIn, its affiliates and service providers operate." The catch-all term "service providers" should give one pause: What does consenting to the sharing of customer information with "service providers" actually imply in a practical sense? 

This post has doled out more questions than answers, but there you have it!

Government Can't Force Apple to Bypass iPhone Lock Screen, New York Judge Rules

Apple cannot be compelled to bypass the lock screen on an iPhone seized by the government in a New York drug case, according to an order issued today by Magistrate Judge James Orenstein. It’s a significant rejection of the government's interpretation of the All Writs Act, which prosecutors have advanced in multiple jurisdictions across the country—most prominently, in connection with a locked iPhone 5c that once belonged to one of the San Bernardino shooters. Today's order comes just three days before amicus briefs will be submitted in the California case, and weeks before oral arguments are set to take place. 

[Read more on Electronic Frontier Foundation]

ISPs want “flexible” privacy rules that let them “innovate” with customer data

A battle over privacy at the FCC, sparked by last year's net neutrality rules, is quickly gaining speed. In the coming months, the FCC is expected to craft regulations on how broadband providers handle sensitive consumer data. Barring a court ruling, the commissioners could vote on new privacy rules as early as March of this year. As such, industry and privacy advocates on both sides of the issue have stepped up to the plate to make their case. 

One might ask why there is a push to create new regulations exclusively for broadband service. Here's why: 

"The commission treated Internet service providers like traditional phone service to apply new rules requiring all Web traffic to be handled in the same way. That left the FCC in the difficult spot of applying privacy regulations for phone companies to broadband providers. Those rules protected information on whom a customer called and when, for example.

"But applying those regulations directly to new technology would have been a tall order for the agency."

Read the rest of the article on: The Hill

Consumer advocates hope the FCC will draft stricter rules for how companies such as Comcast and Time Warner Cable will treat customers' information. But large broadband industry groups are wary of the FCC crafting new rules and contend that any framework should match standards already in place.

"[Last week] [b]roadband industry lobby groups urged the FCC not to impose privacy rules that dictate 'specific methods' of protecting customer data, since that would prevent 'rapid innovation.' 

"ISPs should have 'flexibility' in how they protect customers’ privacy and security, said the letter from the American Cable Association, Competitive Carriers Association, Consumer Technology Association, CTIA, the Internet Commerce Coalition, the National Cable & Telecommunications Association, and USTelecom.

"The debate stems from the FCC's decision to reclassify fixed and mobile broadband providers as common carriers under Title II of the Communications Act. The FCC has said it intends to enforce Section 222 of Title II, which requires telecommunications carriers to protect the confidentiality of customers' proprietary information. But since the commission's existing privacy rules apply to telephone service rather than broadband, the FCC has to draw up new rules for Internet service. The phone rules protect personal information such as the numbers customers call and when they call them."

Read the rest of the article on: Ars Technica

Privacy and consumer groups hope that the FCC, through a new set of regulations, will use its authority in material ways that the FTC, which previously had legal authority over ISPs, couldn't. They argue that the FTC's privacy standard--that is, whether a certain business practice is "unfair or deceptive" (see Sec. 5 of the FTC Act, codified at 15 U.S.C. § 45)--is not a strong enough standard to police broadband providers. Instead, they believe the FCC possesses the ability to give its rules more teeth, at least more than what the FTC's common-law approach can offer.

The implications of this, of course, are huge. John Simpson, director of the privacy project at Consumer Watchdog, explains that the FCC, "under reclassification [of Internet providers], can explicitly make rules that say thou-shalt-this, thou-shalt-not-that. And that’s an entirely different situation than what the FTC does." Indeed, what he says holds true: the FTC is not allowed to tell businesses what kind of privacy practices they must have. Rather, it primarily operates in a reactive manner, i.e., in response to complaints lodged against a company or to an already-existing issue with a company's posted privacy policy. 

With the rules stuck in regulatory limbo, industry groups, privacy advocates, lobbyists, and the like will have no choice but to duke it out until some form of compromise is ultimately reached. 

More on the Apple-FBI Squabble

Privacy law expert and law professor Marc Rotenberg of the Electronic Privacy Information Center says that if Apple did unlock the San Bernardino gunman's phone, the implication is that future phones would have to have a built-in backdoor.

Quite an informative listen, given that today the Department of Justice filed a motion to compel Apple to cooperate with a government investigation and provide it with technical assistance. In the motion, the DOJ argues that Apple "retains the technical ability to comply" with the federal judge's order and that its "current refusal to comply with the Court's Order, despite the technical feasibility of doing so, instead appears to be based on its concern for its business model and public brand marketing strategy."

Personally, I'm not convinced, and as I pointed out in my previous post, the end sought by the government here (though undoubtedly important) does not justify the means.

Wikimedia Foundation removes The Diary of Anne Frank due to copyright law requirements

Yesterday, the Wikimedia Foundation had to remove the Dutch-language text of the Diary of Anne Frank from Wikisource due to the take-down provisions housed in the DMCA (specifically Sec. 512 of the Copyright Act). Remarkably, the Anne Frank text is still copyrighted in the United States, and will be until 2042. 

This is one example, unfortunately among many, that showcases the overly restrictive and bipolar nature of US copyright law. Here's why: 

  • The Copyright and Patent Clause in the US Constitution grants Congress the power to "promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
    • The term "limited Times" refers to the notion that an owner enjoys exclusive rights over his or her work for a limited period of time before the work enters the public domain, where anyone is free to use it.
  • And here's what has happened in the furtherance of the Clause, legislatively: 
    • In 1790, when Congress enacted the first copyright law, the initial term was set at 14 years, renewable once for another 14. The term of copyright was changed only once in the first 100 years of American nationhood.
    • The term length continued to creep upward throughout the 1800s and early- to mid-1900s. In 1831, the maximum term was raised from 28 years to 42 by increasing the initial term of copyright from 14 years to 28. 
    • Then, in 1909, the term was increased once again: Congress extended the renewal term from 14 years to 28 years, setting the maximum term at 56 years.

[BUT then came the 1970s]

  • Copyright legislation that has been passed since the 1970s has dramatically shifted towards advancing the interests of copyright holders by statutorily extending copyright term lengths with much more frequency. 
  • Such a policy shift is primarily reflected in legislation enacted during that time--ranging from harsher penalties for infringement to longer copyright terms.
  • In the last 40 years, Congress has extended the terms of existing copyrights ELEVEN TIMES, and TWICE for the term of FUTURE copyrights.
  • We saw the culmination of this shift in Congress' mindset in the enactment of the Sonny Bono Copyright Term Extension Act. The duration of copyright protection was extended to life plus 70 years for non-corporate works. For corporate works, the duration was set at 95 years from publication or 120 years from creation, whichever is shorter.
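The term arithmetic in the bullets above can be sketched in a few lines of Python. As it happens, 95 years from publication also explains the 2042 date for the Anne Frank text mentioned earlier (first published in 1947), though that date technically flows from the rules for pre-1978 published works rather than the corporate-works provision:

```python
# Maximum US copyright terms (in years) at the statutory milestones
# discussed above.
max_terms = {
    1790: 14 + 14,  # 14-year initial term, renewable once for 14 -> 28
    1831: 28 + 14,  # initial term raised to 28 -> 42
    1909: 28 + 28,  # renewal term raised to 28 -> 56
}

def corporate_term_end(published: int, created: int) -> int:
    """Sonny Bono Act, corporate works: 95 years from publication or
    120 years from creation, whichever is shorter."""
    return min(published + 95, created + 120)

# 95 years from a 1947 publication runs out in 2042.
print(corporate_term_end(published=1947, created=1944))  # -> 2042
```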

At this juncture, I hope you can see how contemporary copyright legislation runs afoul of the phrase "limited Times" in the Copyright and Patent Clause of the Constitution--a phrase the Framers meant to circumscribe the time period during which a creator enjoys a monopoly over his or her own work. 

As Lawrence Lessig, founder of Creative Commons and highly esteemed copyright scholar, critically points out in his seminal book on copyright reform, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity:

"[E]ach time copyrights are about to expire, there is a massive amount of lobbying to get the copyright term extended. Thus a congressional perpetual motion machine: So long as legislation can be bought (albeit indirectly), there will be all the incentive in the world to buy further extensions of copyright."

In sum, the way copyright has been and continues to be legislated in Congress (I think) quite clearly countermands the Framers' original intent.

Is the DMCA an Effective Way to Take Down Infringing Content?

In late January of this year, the U.S. Copyright Office launched a comment submission process, seeking to survey how the “safe harbor provisions” in Section 512 of the Digital Millennium Copyright Act (DMCA) have been received by the community as well as to get public input on proposed exemptions to the DMCA's prohibition against circumvention of technological measures that control access to copyrighted works. This process takes place every three years, and comment submission is open until March 21, 2016. That all said, D.C. is a city of Ents, so I suspect material overhaul of the DMCA is a far-off outcome.

Section 512 provides, under certain circumstances, a “safe harbor” for “service providers”—that is, web hosting communities and web publishers, including those (like YouTube, for instance) that allow third-party users to upload potentially infringing content. It shields ISPs from liability for copyright infringement that would otherwise befall them, so long as they do not have “actual knowledge” of infringing material or activity. To elaborate, the DMCA was a much-needed amendment to the Copyright Act—it was the embodiment of a congressional effort to bring copyright law up to date; Congress recognized that enforcement of copyright in its then-current form had a real potential to chill technological advancement and innovation. Thus, the DMCA acknowledged through its provisions that, policy-wise, practices like web caching are beneficial for society despite their implications for copyright, since they allow for the efficient operation of the Web. The Act also recognized that ISPs are integral to both the growth and development of the Internet and the exchange of ideas, and as such, should not bear the brunt of the illegal actions of third-party users.

Over the years, Section 512 has drawn mixed reviews. While in all likelihood copyright owners and service providers will continue to rely on the “safe harbor” and “take-down” provisions as a popular way to address and remove infringing content online, those provisions in practice will prove increasingly brittle in the face of an exponential proliferation of Big Data, file-sharing, IoT-driven products, contextual technologies and the like, which together have already turned mass copyright compliance, along with its normative and doctrinal underpinnings, on its head. I believe this is just the tip of the iceberg. Yesterday, I had the privilege of attending a talk given by Michael Stewart, CEO of Lucid, at the Robert S. Strauss Center for International Security and Law, and I was struck by what he stated as a simple, poignant truth about today’s humanity in the context of tech: “We’re in a data explosion [whose] evolutionary curve cannot be measured [and] we are starting to lose ourselves in the midst of that.”

Indeed, in a Federal Register notice about the comment submissions, the Copyright Office opined: 

"Today, copyright owners send takedown notices requesting service providers to remove and disable access to hundreds of millions of instances of alleged infringement each year. The number of removal requests sent to service providers has increased dramatically since the enactment of section 512.

"While Congress clearly understood that it would be essential to address online infringement as the internet continued to grow, it was likely difficult to anticipate the online world as we now know it . . ."

Consequently, website operators deal with an unwieldy number of take-down notices while copyright owners find the process unpredictable and frustrating. 

We are on the heels of a technological revolution that has been expedited by the accrual of knowledge, innovation, and open-source philosophies. At this juncture, I wonder where it all ends up leading us. Let’s unspool a bit: Unless and until something changes, I think the structural integrity and fortitude of the DMCA’s “safe harbor provisions” will eventually collapse under the weight of emerging technologies. Interestingly, as legal theorists Samir Chopra and Laurence F. White discuss in their work, A Legal Theory for Autonomous Artificial Agents:

“[S]uch protections may not be enough to keep ISPs from becoming part of the law-enforcement superstructure. The DMCA requires ‘standard technical measures,’ supposedly ‘protective of copyright,’ be implemented in order to qualify for safe harbors. This implies safe harbor protections will decline hand in hand with advances in these measures, for service providers will be required to continuously upgrade such technologies . . . [I]mplementing such technology thoroughly will, ironically enough, attribute to ISPs actual knowledge of infringing practices, thus preventing their accessing the safe harbor required if they fail to make ‘expeditious’ action to remove access to offending material.”

So, while the DMCA states that service providers are obliged to accommodate “standard technical measures” used to identify and protect copyrighted works, it is unclear what will transpire when technology develops to the point where deep content inspection becomes an accessible and commonplace tool for ISPs to deploy. After all, once the infringing nature of a website becomes apparent, even from a brief and casual viewing, linking to that kind of site is no longer appropriate under DMCA standards. Yet there is an increasing desire on the part of ISPs to indulge in deep content inspection for digital rights management. Thus, with a somewhat snaking gait, those ISPs may eventually end up risking their DMCA safe-harbor qualification precisely because of their use of increasingly sophisticated technologies.

Perhaps, then, the Copyright Office is cognizant of such a change in tide—a technological, social, and legal one—and, to its credit, launching a comment submission process may be a reflection of that cognizance as well as an attempt at a first step towards shaping new, meaningful policy and law that better align with modern tech (. . . which, by analogy, would end up being much like what the DMCA was to the Copyright Act).  

India, Egypt say no thanks to free Internet from Facebook

Most people in developing countries have little to no Internet access. About a year ago, Facebook CEO Mark Zuckerberg introduced Free Basics to India as a way to provide web access to the developing world. However, many Indians have viewed it more as an aggressive PR ploy than anything else. Although purported to be a free internet service for the poor, Free Basics is actually a watered-down version of Facebook with some other services tacked on (e.g., weather reports and job listings). This, of course, has drawn the ire of Indian telecom regulators and many Internet activists.

"[T]he initiative has hit a major snag in India, where in recent months Free Basics has been embroiled in controversy — with critics saying that the app, which provides limited access to the Web, does a disservice to the poor and violates the principles of 'net neutrality,' which holds that equal access to the Internet should be unfettered to all.

Activist groups such as Save the Internet, professors from leading universities and tech titans such as Nandan Nilekani, the co-founder of Infosys, have spoken out against it. Another well-known Indian entrepreneur dubbed it “poor Internet for poor people.”

Certainly, web access should be held as a human right--it is vital to global education and advancement and, more generally, to the spread of knowledge. But Facebook's interest in all of this, to me, carries a more sinister undertone. To illustrate, the company's campaigns have vigorously conflated Free Basics with digital equality and free internet services; here are some examples (sourced from Reddit user rdiaboli):

On top of that, Facebook has also sent text messages/SMSs to people in regional languages: 

ફેસબુક દ્વારા ફ્રી બેસિક્સ સેવા બધા માટે બુનિયાદી ઈન્ટરનેટ સેવાઓ નિશુલ્ક ઉપલબ્ધ કરાવે છે. પણ આ બધી સેવાઓ પર પ્રતિબંધ લાગવા ની આશંકા છે. કૃપા કરી 1800 209 0921 પર આજેજ એક મિસ્ડ કોલ આપી અમારી મદદ કરો.

Rough translation: Free Basics by Facebook provides basic internet services for free, but these services may get banned. Please help us by making a missed call today to this number.

While we can all agree that digital equality is quite desirable, I think Facebook is really doing this to corner the ad market and control services and information in third-world countries. In other words, Free Basics--though guised as free Internet access--proposes, in actuality, a controlled and curated environment run by Western corporations rubbing elbows, all of which runs afoul of net neutrality principles. (Interestingly, Free Basics bears an uncanny resemblance to the "free" ad-supported platform from which AOL developed decades ago in the US.)

As Reddit user thegreatmaximillianj puts it:

"Net neutrality is actually much more important for a poor country than a rich one. 

In rich countries net neutrality is about things like Comcast slowing down Netflix. Net neutrality is a luxury argument, dealing with high data consumption issues.

In India, net neutrality is about how some newspapers are available with no data charge, and others aren't.

Now we are dealing with something fundamental: A person wants to read an article about the government and the Free Basics interface show them articles from preferred free news sources, but the second they go looking for another news source which may offer a different opinion/information the Free Basics interface says 'you have to pay for this. This isn't a preferred news source.'

That's a lot of power over the poor. And it would all be in the hands of Facebook, who has already shown a propensity to use the Facebook interface in India to achieve its own goals.

Free Basics means that a large percentage of the earth's population will be stuck in an internet box of Facebook's curation, their understanding of the world will be shaped by Facebook's preferences and business relationships.

That's why net neutrality is far more important in the poor world than it is in the rich."

I couldn't have said it better. Abandoning net neutrality may yield many more negative externalities in poorer countries than in richer ones, ultimately doing far more harm than good. Plus, I must ask: since when has Facebook been the arbiter of which sites are essential for India's poor and which are not? At the end of the day, Facebook will always be a profit-seeking company--NOT an ISP or a non-profit.

The whole affair reeks of digital colonization, and a quick history lesson will tell us that India has been there and done that.

How copyright is killing your favorite memes

Yet another example of how copyright exclusivity lags behind well-established "copynorms" and threatens to chill online free speech and creative expression. 

 "Socially Awkward Penguin" is a known starlet of Internet meme culture. But it also happens to be the intellectual property of National Geographic. In the past year, Getty Images (the company's licensing agency), has pursued multiple infringement cases involving the meme against a host of blogs and other posters, most of whom were pushed into agreeing to non-disclosure. However, when Getty attempted to collect licensing fees from a German blog called Geek Sisters, its parent site, getDigital, publicly posted Getty's letters online. And now, that has provoked a conversation about what copyright protection means in an era characterized by an abundance of online creative expression, remixing, and file-sharing.

'The Awkward Penguin is not just a random image we stole from Getty’s database, but one of the most well-known Internet memes,' the company protested in its blog post. Said Bastian Krug, the online marketing manager at getDigital: 'We have no idea why they chose us.'

National Geographic/Getty's moves seem opportunistic, if you ask me. (Maybe Murdoch is to blame.) Indeed, others feel the same way. Krug and his colleagues suspected the company was targeting small blogs that aren't necessarily equipped to push back against a powerful industry player like Getty. In the end, getDigital settled; litigation is expensive, after all. 

Interestingly, there has never been a court case quite like this, but perhaps I can weigh in on what might happen should a small-time blog opt to litigate against a larger company, which is doubtless armed with deeper coffers for legal fees.

If Geek Sisters had decided not to settle, it would most likely have argued fair use. On the statutory end, the line between what is considered a "derivative work" and what is viewed as "transformative" under Fair Use is rather opaque; I think even Rowling's Sorting Hat would need extra time to decide where to house the meme. If the meme is considered a "derivative work," companies like Getty have every right to seek damages and fees. But if it's "transformative," bloggers and Internet users can go on their merry way and legally continue to use, alter, and share the image.

On the one hand, we've got copyright exclusivity, gilded by not only the Copyright Act, but also the Constitution (see Art. I, Sec. 8, Cl. 8):

  • Sec 106 of the U.S. Copyright Act ("Exclusive rights in copyrighted works") defines the six exclusive rights copyright owners enjoy over their work, and two of those are:
    • the right "to prepare derivative works based upon the copyrighted work" and
    • the right "to reproduce the copyrighted work in copies or phonorecords." 

On the other hand, Fair Use provisions statutorily circumscribe the scope of that exclusivity:

  • Sec 107 outlines limitations on exclusive rights. It states the following: "Notwithstanding the provisions of sections 106 [see above] and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include—
    1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; 
    2. the nature of the copyrighted work; 
    3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; 
    4. the effect of the use upon the potential market for or value of the copyrighted work" (emphasis added).

As you can see, the criteria for what constitutes a "fair use" are intentionally broad, thereby requiring fact-heavy investigation. With respect to the "Socially Awkward Penguin" meme, a lot would depend on the purpose and character of the alleged infringer's use of the image in question, particularly whether it was used in a commercial capacity.

Personally speaking, I think of memes as a means for online social commentary, satire, or just plain humor. Especially on sites like Reddit and Facebook, a popular meme can function as an "inside joke" of sorts that's shared within a large community.

In any case, Geek Sisters wasn't using "Socially Awkward Penguin" for commercial purposes, but since the blog settled, analysis of its fair use of the meme never took place.

"While getDigital is a for-profit retail company, similar to Virginia’s Think Geek, Geek Sisters is a non-commercial blog — in fact, it’s pretty damn obscure. (The penguin post got no comments, and has never been shared on Twitter or Facebook.) . . . In the six years that Getty and National Geographic have allowed the meme to flourish, it has far transcended Mobley’s original photo: It’s a remix, a discourse, a pastiche assembled — like so much of popular Internet culture! — from the aggregated efforts of millions of people."

This case is a clear example of the rigid line copyright law has drawn between exclusivity and access.

But the dichotomy we see here doesn't stop at memes. Popular music genres like hip hop and EDM, for instance, also come within the fold. At least the meme case has a strong fair use argument, since the blog's use of the meme was non-commercial. But isn't there a potential argument for fair use to make room for benign commercial uses of works (e.g., Girl Talk's collage-style tracks)? (Notice how Section 107 describes acceptable uses of copyrighted works as non-commercial or for non-profit purposes.) Sampling, collaging, and remixing are stylistic choices borne out of technological advancement and are key not only to Internet culture in general but also to hip hop and EDM artists, for example. Ergo, I think the issue should be framed as a question of policy rather than one of black-letter law. We should ask ourselves: is it sound to construe collage-style uses of copyrighted material as infringing? Should we liken them to mass online piracy?

Only time will tell how major content industries and lawmakers will choose to respond to future cases like this one. Figuring out a way for the law to balance 1) maintaining incentives for artists to continue creating cultural works with 2) facilitating public access to and use of those works will be a formidable challenge, but a necessary one for bringing copyright law up to speed with today's technology.

Want more? Check out my thoughts on copyright exclusivity and its rightful place in the Information Age by... clicking here!

German court says Amazon e-mail 'share' function unlawful

A German court has declared unlawful Amazon's "share" feature, which encourages Amazon customers to share links to Amazon products with their contacts. The "share" feature invites customers to share a product via Facebook, Twitter, Pinterest, and, most controversially, e-mail.

"The court said on Monday that sharing by e-mail without approval of the recipient was illegal. It is 'unsolicited advertising and unreasonable harassment,' the regional court in Hamm said, confirming the ruling of a lower court in Arnsberg."

It's no surprise that enterprising marketing practices and the law are butting heads more frequently. This ruling is reminiscent of a 2012 decision in which the German Federal Court of Justice ruled that Facebook's "Friend Finder" feature did not provide adequate privacy protection for users. That feature essentially let Facebook users employ tools provided by the social network to promote content and services to non-users, who would then have to sign up.

But regulatory crackdowns haven't only happened across the pond. Here in the States, LinkedIn's aggressive marketing ploys (i.e., its "Add Connections" service) have landed it square in the middle of a class action lawsuit. And rightly so.

Frankly, I think these sorts of decisions are setting the right tone. I'm curious to see where this all will head.

The Legal Rift Between Copyright Doctrine And "Copynorms" With Respect To The Music Industry: A Policy Analysis

Check out a policy and legal analysis I put together last year that evaluates the following:
1. current copyright doctrine,
2. how it is out of step with its constitutional origins and justifications (i.e., the provisions set forth in Article I, Sec. 8 of the U.S. Constitution: promoting the "Progress of Science and useful Arts" and securing for "limited Times" the monopoly a copyright owner enjoys over his or her work before it enters the public domain), as well as
3. current "copynorms," and whether the Creative Commons licensing framework is a viable solution towards bridging that gap.

There is no wealth like knowledge,
and no poverty like ignorance.

An Overview of Privacy Law

Privacy Law is a relatively nascent and murky interdisciplinary field. Considered in light of a ballooning world of contextual tech (mobile, social media, Big Data and "little data," sensors, geo-location services, etc.), this field of law, and the legal and policy decisions made in connection with it, offers glimpses of a future (perhaps dystopian to some) in which privacy considerations will, in terms of substantive law, categorically reign above most else. Even more interestingly, other areas of law are necessarily implicated. Comparative Law, for instance, is subsumed under considerations surrounding privacy. To illustrate, the US and EU regulatory regimes differ greatly with respect to how each conceptually views privacy, which can be attributed in part to cultural and anthropological differences between the two. EU law (e.g., German Basic Law) places great value on citizens' Right to Personality and Right to be Forgotten, to name a few, whereas the US embraces an "everything is fair game"/safe-harbor mentality that is markedly less rigorous in regulatory terms. And we've seen what such a lack of uniformity produces: several US-based multinational companies (e.g., Google, Facebook) inevitably butting heads with EU regulators.

Here is a great overview of the different types of Privacy Law that I recently came across. The attachment is sourced from a larger work, Privacy Law Fundamentals, by Daniel J. Solove and Paul M. Schwartz.

The Human Face of Big Data (2014)

B/B+

A decent (though at times repetitive) documentary for those of you who are curious about the implications--both good and bad--of Big Data, that is, the accrual of, as they say, "digital exhaust." Big Data produces tension between 1) the boons reaped from the spread and collection of information and 2) the attendant loss of privacy. One must ask, then: how should society, through law and policy, reconcile these two very important needs?

Skip to 11:05 for an interesting illustration of the type of insight Big Data can offer: an MIT professor decided to track his infant son's language acquisition through data analysis with striking visual granularity. (For more, see his TED talk, "The Birth of a Word.") Deb Roy conducts research at the MIT Media Lab on language, games, and social dynamics at the intersection of A.I. and cognitive psychology. He is also Chief Media Scientist of Twitter.

Skip to 38:00 and watch until 46:00 to get a general sense of what Big Data and A.I. could mean for privacy and democracy in the near future.