Reaction: Why Safe Harbors Will Fail
Copyright law, at least in the United States, tends to be very strict. You can copy some portion of a work under “fair use” rules, but, for most works, you must ask permission before sharing content created by someone else. But what about content providers? If a user uploads a “song cover,” for instance, essentially a remake of a popular song not intended to create commercial value for the individual user, should the provider be required to take the content down as a copyright violation? Content providers argue they should not be required to remove such content. For instance, in a recent article published by the EFF:
Content providers, then, reason that it would be impossible for them to operate if they had to police their users’ postings for copyright violations. The line of argument seems reasonable, at first, until you encounter something like this:
The problem: how can large content providers promise to filter posts based on their content while also claiming they cannot filter posts based on their content? Providers seem to want it “both ways”: allowing copyrighted material to flow freely across their platforms while restricting material they deem offensive.
There is a financial reason for this inconsistency, of course. In the one case, governments are threatening to fine large providers if they do not do something about “hate speech.” In the other case, the only party who complains is the copyright owner, and until the owner does, the provider can build a lot of user engagement, which drives platform growth, and hence higher profitability.
The problem is that this double standard cannot stand forever. Someone, somewhere, is going to notice. And when they do, the providers will lose their “we can’t filter all this stuff” excuse, and safe harbor will fail.