Want the platforms to police bad speech and fake news? The copyright wars want a word with you.

There are lots of calls for the platforms to police the bad speech on their platforms -- disinformation and fake news, hate speech and harassment, extremist content, and so on -- and while that would represent a major shift in how Big Tech relates to the materials generated and shared by its users, it's not without precedent.

For more than 20 years, online platforms have had a legal duty to police copyright infringement by their users, responding to unproven accusations of infringement by (usually) removing materials, generally without a moment's thought.

Everybody hates this: users, copyright holders, the platforms themselves. Now, maybe it doesn't have to be this way, but if we're going to ask the platforms to expand their policing duties without turning into (more of) a shitshow, it's worth considering the lessons we've learned from decades of copyright enforcement.

EFF's Legal Director Corynne McSherry offers five lessons to keep in mind:

1. (Lots of) mistakes will be made: copyright takedowns result in the removal of tons of legitimate content.

2. Robots won't help: automated filtering tools like Content ID have been a disaster, and policing copyright with algorithms is a lot easier than policing "bad speech" -- if filters fail at the easier job, expect them to do worse at the harder one.

3. These systems need to be transparent and have due process. A system that allows instant, automated censorship but only slow, manual review gives a huge advantage to people who want to abuse it.

4. Punish abuse. The ability to censor other people's speech is no joke. If you're careless or malicious in your takedown requests, you should face consequences: maybe a fine, maybe being barred from using the takedown system.

5. Voluntary moderation quickly becomes mandatory. Every voluntary effort to stem copyright infringement has been followed by calls to make those efforts mandatory (and expand them).

There's no question that the platforms have a problem with bad behavior and bad speech -- the question is what we should do about it. If we're going to put the platforms in charge of deciding what is and isn't acceptable speech, let's at least try to learn from the failures of the recent past.

Notably missing from most of these discussions is a sense of context. Fact is, there’s another arena where intermediaries have been policing online speech for decades: copyright. Since at least 1998, online intermediaries in the US and abroad have taken down or filtered out billions of websites and links, often based on nothing more than mere allegations of infringement. Part of this is due to Section 512 of the Digital Millennium Copyright Act (DMCA), which protects service providers from monetary liability based on the allegedly infringing activities of third parties if they “expeditiously” remove content that a rightsholder has identified as infringing. But the DMCA’s hair-trigger process did not satisfy many rightsholders, so large platforms, particularly Google, also adopted filtering mechanisms and other automated processes to take down content automatically, or prevent it from being uploaded in the first place.

Platform Censorship: Lessons From the Copyright Wars [Corynne McSherry/EFF Deeplinks]
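
To make the dynamic in that excerpt concrete, here's a minimal toy sketch in Python of the incentive structure it describes. The names (Platform, publish, takedown_notice) are entirely hypothetical; this models no real platform's system, only the logic of acting on an unverified allegation and then filtering matching uploads automatically.

    # Toy sketch only: hypothetical names, no resemblance to any real platform's
    # takedown or filtering system. It models the incentives, not an implementation.
    from dataclasses import dataclass, field


    @dataclass
    class Upload:
        uploader: str
        content: str
        removed: bool = False


    @dataclass
    class Platform:
        uploads: list = field(default_factory=list)
        # "Fingerprints" that rightsholders have claimed, used by the upload filter.
        blocklist: set = field(default_factory=set)

        def publish(self, uploader: str, content: str) -> Upload:
            # Automated pre-filter: anything matching a claimed fingerprint is blocked
            # at upload time, with no check for fair use, licensing, or mistaken claims.
            item = Upload(uploader, content, removed=content in self.blocklist)
            self.uploads.append(item)
            return item

        def takedown_notice(self, claimed_content: str) -> None:
            # Section 512-style incentive: "expeditious" removal preserves the safe
            # harbor, so the platform acts on the allegation without verifying it.
            self.blocklist.add(claimed_content)
            for item in self.uploads:
                if item.content == claimed_content:
                    item.removed = True


    if __name__ == "__main__":
        p = Platform()
        post = p.publish("alice", "my own home video")
        p.takedown_notice("my own home video")  # unproven claim, still acted on
        print(post.removed)    # True: legitimate content comes down
        repost = p.publish("alice", "my own home video")
        print(repost.removed)  # True: the filter blocks it before it's even up

Running it shows two of the failure modes from the list above: a legitimate post is removed on an unproven claim, and the automated filter then blocks re-uploads of the same material with no human judgment involved.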

