On September 30, 2023, in Kuala Lumpur, Open Net collaborated with the World Justice Project on a consultation with the Malaysian government titled “The Asia Pacific Expert Meeting on Disinformation Regulation and the Free Flow of Information”, where Open Net made a panel presentation in the session “Assessing Diverse Approaches to the Regulation of Disinformation (Session 2)” and moderated the session “Technology and Disinformation: How can technology be part of the solution? (Session 3)”.
In Session 2, K.S. Park shared the South Korean experience, where penalties and takedown orders were the methods of choice, but also described how the “false news” crime was struck down as unconstitutional (2010), a regulatory proposal to open criminal defamation investigations even before complaints are filed was pushed back (2015), and a legislative proposal to impose punitive damages for news reporting was also pushed back with the intervention of Irene Khan, UN Special Rapporteur on Freedom of Expression (2021). The situation has since worsened. In 2023, several media houses were searched and seized for criminal defamation of the current president Yoon, and a “One Strike Out” policy was announced by the internet censorship body, the Korea Communications Standards Commission, whereby a media house’s online outlet can be shut down for publishing even one piece of “fake news”. According to him, the Korean experience shows exactly why criminalization and content takedowns cannot be human rights-abiding methods of responding to disinformation, given the risk of abuse by governments. Evidence of such risk is ample: South Korea’s 1974 Emergency Decree No. 1 on “fake news” and Malaysia’s 2021 Emergency Ordinance on “fake news” were both tools to suppress criticism of the incumbent government and its extra-constitutional status. On that basis, he criticized the recent UNESCO Guide on Platform Responsibility for encouraging administrative censorship of online content.

Most disinformation researchers count “Russian disinformation” and “Chinese disinformation” among the top topics, which means that governments themselves are the main sources of disinformation. Penalties and takedowns are therefore not the correct response. As he put it, “disinformation is a technology problem, not a legal problem.” We need fully harm-based technological solutions in which platforms play an active role, whereby (1) disinformation is identified not by text but by context, such as its method of propagation; and (2) even content protected by freedom of speech is voluntarily deprioritized and demonetized by platforms if it is harmful, e.g., Twitter’s Trust & Safety Council’s Vulnerable Persons Community Guideline. For that to happen, there must be much more vigorous information exchange between civil society and platforms at the local level to understand the mechanics of harm.
In Session 3, moderated by K.S. Park, the panelists discussed further the role that platforms must play in pushing back, especially against majority-originated hate speech and government-sponsored disinformation. Platform transparency was identified as one important tool that makes disinformation responses robust. K.S. Park again emphasized the need for “contextual moderation”, and therefore for vigorous exchange between platforms and civil society to enrich that context, and noted that such voluntary moderation can be free from the strictures of international human rights law on freedom of speech.
Of about 100 attendees, roughly 40% were women, and about 10 were government stakeholders, including Irene Khan, UN Special Rapporteur, and officers of the Human Rights Commission of Malaysia.