Intermediary liability safe harbor in Asia and the AI-based future, at the Google Legal Summit, 10/11/2017

Oct 13, 2017 | Free Speech, Open Blog

The Internet is a great equalizer and a great liberator.  The Internet gives powerless individuals the same power of information and the same power of speech that big companies and governments have.  In order to protect this civilizational significance of the Internet, we need an intermediary liability safe harbor.  Rare as it is, NGOs for this reason advocated for a rule protecting giant corporations from liability, and we even tried to elevate it to the level of human rights, as you saw in the Manila Principles on Intermediary Liability.  The logic is that we should not require intermediaries to prescreen user contents.  We call such a requirement a “general monitoring” obligation, and there is a golden rule against general monitoring obligations.  It is okay if some intermediaries choose to engage in general monitoring, but states should not require them to do so.  General monitoring means that whatever remains on the net remains there under the tacit approval of intermediaries.  The power of the Internet lies in the fact that users can speak freely without prior censorship by intermediaries.  That users can talk to the whole world without persuading and obtaining approval from television stations or newspaper companies is the value of the Internet.  Now, the corollary of that rule is that intermediaries should not be held liable for contents unknown to them, lest they be incentivized to prescreen user contents in fear of liability.  Hence there should be a liability shelter for unknown contents.

That is the international standard, and it has appeared in various forms.  Europe, through Article 14 of the E-Commerce Directive, shielded platforms from liability for infringing contents unknown to them.  The US took a compromise position in the copyright area, where the liability shield is given in exchange for following the notice-and-takedown procedure for all contents alleged to be infringing (importantly, coupled with a reinstatement procedure for contents counter-noticed by their authors to be lawful), while taking an even more sweeping position on non-copyright-related contents by shielding intermediaries from liability even for known contents.

That standard has taken root in various parts of the world including Asia, again in different forms.  India has had a comprehensive intermediary liability safe harbor similar to the US DMCA notice-and-takedown law, and made it even stronger in 2015 through the Shreya Singhal decision.  The decision narrowed the scope of notices of illegal contents that trigger the intermediaries’ duty to take down if they want to claim the liability exemption.  It narrowed them to court orders only: post-Shreya Singhal, intermediaries can claim the liability exemption even after they receive notices of unlawful contents from private parties or even the police.  In a sense, Shreya Singhal strengthened it to the level of CDA 230.  (Beware: many people mistakenly believe that Shreya Singhal required all takedown orders to be court-order-based.  That is not true.  Section 69A of the IT Act, upheld in the same decision, clearly allows administrative agencies to issue blocking orders.  The famous part of the decision was on Section 79, which governs what intermediaries must do IN ORDER TO CLAIM SHELTER from liability for user-created contents.)

Japan has a comprehensive Provider Liability Law similar to the E-Commerce Directive.  Korea?  In terrible confusion.  As you all know, notice-and-takedown obligations are conditional in the sense that they kick in only when intermediaries want to claim the liability exemption.  In Korea, these obligations were misinterpreted as absolute obligations whereby intermediaries MUST take down all contents that anyone alleges to be unlawful, regardless of whether they are actually unlawful.  Therefore, many lawful contents are taken down without any assurance that counter-notices will reinstate them, hurting freedom of speech.
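To make the conditional-versus-absolute distinction concrete, here is a minimal, purely illustrative sketch in Python. The function names and return values are my own hypothetical labels, not the wording of any statute: under the conditional reading, taking down noticed contents is merely the price of the liability shield, while under the absolute (mis)reading every notice compels a takedown whether or not the contents are lawful.

```python
# Purely illustrative sketch; hypothetical names, not any statute's actual wording.

def conditional_reading(received_notice: bool, wants_exemption: bool, took_down: bool) -> str:
    """Notice-and-takedown as a *condition* for the liability shield (the international standard)."""
    if not received_notice:
        return "shielded: contents unknown to the intermediary"
    if wants_exemption:
        return "shielded" if took_down else "exemption forfeited (but takedown was the intermediary's choice)"
    return "no duty to take down; ordinary liability rules apply"

def absolute_misreading(received_notice: bool, contents_lawful: bool) -> str:
    """The Korean misinterpretation: every notice compels a takedown."""
    if received_notice:
        return "must take down" + (" (even though the contents are lawful)" if contents_lawful else "")
    return "no duty yet"

if __name__ == "__main__":
    print(conditional_reading(received_notice=True, wants_exemption=False, took_down=False))
    print(absolute_misreading(received_notice=True, contents_lawful=True))
```

The only point of the sketch is that, under the conditional reading, lawful contents need never come down at all; the takedown is a trade for the shield, not a command.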

On top of that, Korea has three separate provisions requiring intermediaries to install “technical measures” to keep out of their systems (1) child pornography, (2) obscenity, and (3) piracy.  These provisions are up for constitutional review and have become the laughingstock of scholars because no such technical measures exist.  The only way to keep these contents out is for human eyes to monitor all contents coming onto the services.  This sets up a “general monitoring” obligation.  We are working to strike down these laws using the international standard I mentioned earlier.

Now, this leads me to a big question about the future of the intermediary liability safe harbor.  What if AI is developed to a level enabling “technical measures” to keep out child pornography, obscenity, and piracy?  What if software is written that can identify images of children and distinguish sexual activity from, say, sports activity?  Will we still say that such general monitoring by machine kills the power of the Internet by leaving on the net only the contents approved by the machine, or will we say something else because it is not human eyes reviewing the contents?

There is a parallel discussion in privacy.  Google used to scan email contents to attach related banner ads.  These days Google seems to scan email contents to offer suggested responses to choose from when replying.  All of this is considered okay because it does not involve human eyes.  When you take a shower and I walk into your shower room, it is clearly a privacy violation.  However, what if it is a chair that appears in the shower room?  What if it is a dog?  A privacy violation seems to depend not just on what information has been revealed but also on to whom.  Can we say the same thing about prior censorship?  If it is software, as opposed to human eyes, that engages in general monitoring, will we feel equally strongly that the remaining contents of the Internet are censored?

Especially from the regulators’ point of view: assuming that there is software available that intermediaries can attach to their servers to keep child pornography, obscenity, and piracy from public access, how long can the regulators hold out against the temptation to require intermediaries to engage in machine-based general monitoring?  Why not, if it can reduce child pornography and therefore the child abuse attendant upon its production?  Is the feeling of “being censored” more important than children’s mental health?  What will you do if the regulators come after you and hold you responsible for not attaching the AI-based software that points out child pornography?  How about AI-based software that identifies ISIS or white supremacist recruitment postings, or other extremist content prone to violence?  Will many states not want to require intermediaries to install such software?  What should be our response?
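Purely as a thought experiment, and not as a description of any real detection product, the sketch below shows roughly what machine-based general monitoring would mean if wired into an upload pipeline: every post passes through a classifier before publication, so whatever remains visible has, by construction, been approved by the machine. The classifier, categories, and threshold are all hypothetical.

```python
# Thought-experiment sketch only: the classifier, categories, and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    content: bytes

def classify(content: bytes) -> dict[str, float]:
    """Stand-in for a hypothetical ML model scoring prohibited categories (0.0 to 1.0)."""
    return {"child_abuse_imagery": 0.0, "obscenity": 0.0, "piracy": 0.0}

BLOCK_THRESHOLD = 0.9  # an assumed policy knob, not taken from any statute

def screen_before_publication(upload: Upload) -> bool:
    """Machine-based 'general monitoring': every upload is screened pre-publication.

    Returns True if the upload may be published. The structural point holds regardless
    of accuracy: whatever gets published has been cleared by the machine.
    """
    scores = classify(upload.content)
    return all(score < BLOCK_THRESHOLD for score in scores.values())

if __name__ == "__main__":
    post = Upload(user_id="u123", content=b"example bytes")
    print("published" if screen_before_publication(post) else "blocked")
```

Whether such a gate counts as prior censorship when no human eye ever looks at the blocked contents is exactly the question this talk leaves open.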
