Reaction: Why Safe Harbors Will Fail

Copyright law, at least in the United States, tends to be very strict. You can copy some portion of a work under “fair use” rules, but, for most works, you must ask permission before sharing content created by someone else. But what about content providers? If a user uploads a “song cover” to a provider’s platform, for instance (essentially a remake of a popular song, not intended to create commercial value for the individual user), should the provider be required to take the content down as a copyright violation? Content providers argue they should not be required to remove such content. For instance, in a recent article published by the EFF:

Platform safe harbors have been in the crosshairs of copyright industry lobbyists throughout 2017. All year EFF has observed them advancing their plans around the world to weaken or eliminate the legal protections that have enabled the operation of platforms as diverse as YouTube, the Internet Archive, Reddit, Medium, and many thousands more. Copyright safe harbor rules empower these platforms by ensuring that they are free to host user-uploaded content, without manually vetting it (or, worse, automatically filtering it) for possible copyright infringements. Without that legal protection, it would be impossible for such platforms to operate as they do today. —Jeremy Malcolm @ Deeplinks

Content providers, then, reason that it would be impossible for them to operate if they had to police their users’ postings for copyright violations. The line of argument seems reasonable, at first, until you encounter something like this:

Facebook said on Wednesday it would introduce tougher rules on who can make money from advertising on its network, responding to criticism that it makes it too easy for providers of fake news and sensational headlines to cash in. —Fortune

Some of the biggest U.S. tech firms have signed up to a code of conduct formulated by the European Commission, in which they agree to help fight against the spread of hate speech in Europe. —Fortune

Google has pledged to continue developing advanced programs using machine learning to combat the rise of extremist content, after it found that it was both faster and more accurate than humans in scrubbing illicit content from YouTube. The company is using machine learning along with human reviewers as part of a multi-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review. —The Guardian

The problem: how can large content providers promise to filter posts based on their content, while also saying they cannot filter posts based on their content? It seems like providers want to have it “both ways,” to allow copyrighted material to flow freely on their platforms, while restricting material they deem offensive.

There is a financial reason for this inconsistency, of course. In one case, governments are threatening to fine large providers if they do not do something about “hate speech.” In the other, the only one who complains is the copyright owner, and until the owner does, the provider can build a lot of user engagement, which drives platform growth and, hence, higher profitability.

The problem is that this double standard cannot stand forever. Someone, somewhere, is going to notice. And when they do, the providers will lose their “we can’t filter all this stuff” excuse, and the safe harbors will fail.