What is it about the European Union and bad tech laws with boring names? Brussels managed to transform four harmless letters into a byword for irritating compliance-induced spam and pop-ups, as well as a consolidation of power for the internet’s biggest players. Now that the GDPR dust has settled, along comes Article 13 of the Directive on Copyright in the Digital Single Market, which was approved by the European Parliament’s Committee on Legal Affairs yesterday.
Article 13 requires websites to take “appropriate and proportionate” measures to make sure copyrighted material doesn’t appear on their pages. It would also require sites to “provide rightsholders with adequate information on the functioning and the deployment of measures”. Then there is the jargon-laden instruction for Member States to “facilitate… cooperation between the information society service providers and rightsholders through stakeholder dialogues to define best practices”.
Those appropriate and proportionate measures mean “content recognition technologies” along the lines of Content ID, the copyright filter that Google uses to stop YouTube users from uploading copyrighted videos. As open internet campaigner and writer Cory Doctorow has explained, everyone hates the filter: “Big rightsholders say that it still lets crucial materials slip through the cracks. Indie rightsholders say that it lets big corporations falsely claim copyright over their works and take them down. Google hates Content ID because they spent $60,000,000 developing a system that makes everyone miserable, and YouTubers and their viewers hate it because it overblocks so much legit content.”
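Content recognition systems of this kind generally work by fingerprinting reference works supplied by rightsholders and blocking any upload whose fingerprint is too similar to one of them. The sketch below is a toy illustration of that matching logic, not Google’s actual Content ID: it fingerprints text using hashed word shingles and a Jaccard-similarity threshold (real systems use audio and video perceptual hashes, but the matching step is analogous), and the `CopyrightFilter` class, its threshold and the shingle size are all invented for illustration.

```python
# Toy illustration of a Content ID-style filter (hypothetical, simplified):
# fingerprint uploads, compare against a registry of reference works,
# block anything above a similarity threshold.

def fingerprint(text, k=4):
    """Hash every k-word window of the text into a set of shingle hashes."""
    words = text.lower().split()
    return {hash(tuple(words[i:i + k])) for i in range(len(words) - k + 1)}

def similarity(a, b):
    """Jaccard similarity between two fingerprints, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

class CopyrightFilter:
    def __init__(self, threshold=0.3):
        self.registry = {}          # work_id -> fingerprint of reference work
        self.threshold = threshold  # looser threshold -> more overblocking

    def register(self, work_id, content):
        """Rightsholder submits a reference copy of their work."""
        self.registry[work_id] = fingerprint(content)

    def check(self, upload):
        """Block the upload if it resembles any registered work."""
        fp = fingerprint(upload)
        for work_id, ref in self.registry.items():
            if similarity(fp, ref) >= self.threshold:
                return ("blocked", work_id)
        return ("allowed", None)
```

Everything turns on the threshold: set it tight and re-encoded or lightly edited copies slip through; set it loose and quoted or incidental material gets blocked, because the matcher has no notion of context or permitted use, only of resemblance.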
The EU seems to have looked at this way of doing things and decided it should be extended – by law – not just to all online videos, but to everything on the internet.
It is hard to overstate the threat this piece of legislation poses to online culture as we know it. In an open letter to European Parliament President Antonio Tajani, a group of internet pioneers that includes Tim Berners-Lee, Vinton Cerf and Jimmy Wales spell out the danger: “Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance of its users.”
Article 13 essentially amounts to an outsourcing of copyright enforcement to internet companies and imposes a requirement to check everything posted online for copyright infringement. That will have grave consequences for both free expression and competition.
That the legislation is bad news for free expression is inevitable for two reasons. The first is the inadequacy of the technology. As Doctorow explains, YouTube’s filter just isn’t very good at distinguishing infringing from non-infringing material. The same would be true of whatever systems firms are forced to implement by Brussels. And so plenty of material that in no way falls foul of copyright law will be caught by the filters. Given that the internet platforms now responsible for policing copyright have little reason to be anything other than risk-averse when it comes to preventing infringement, overblocking seems unavoidable under Article 13.
But the bigger problem is that identifying copyrighted material shouldn’t be enough to automatically block its use. It’s not hard to think of harmless pictures that could be caught by a filter because of a logo on a t-shirt or a poster on the wall. There is also the question of what is known as fair dealing in UK copyright law and fair use elsewhere: the exceptions that allow people to use copyrighted material for research, criticism, review, parody and a number of other purposes. A filter that automatically blocks copyrighted material would make no allowance for these important cases.