Featured – Coconet
A Platform for Digital Rights Movement Building in the Asia-Pacific – https://coconet.social

Statement: Repeal Indonesia law that imposes harsh intermediary liabilities, risks curtailing expression
11 June 2021 – https://coconet.social/2021/statement-indonesia-mr5/

Image by Michael Gaida from Pixabay. Used under a Pixabay License.

 

On May 28, 2021, members of the Coconet community were among 25 organisations that signed a statement calling on the Indonesian Ministry of Communication and Information Technology (Kominfo) to repeal Ministerial Regulation 5 (MR5), which can lead to “prepublication censorship” in its current state.

The law requires private electronic systems operators (ESOs), which include social media platforms like Facebook, Twitter, and TikTok, to monitor and remove “prohibited content”, as flagged by the Indonesian government. ESOs must also be registered in Indonesia. Failure to acquire a license from the ministry by December 2021 will lead to the platform being blocked in the country.

“This requirement for companies to proactively monitor or filter content is both inconsistent with the right to privacy and likely to amount to prepublication censorship”, the statement reads. The law, which came into effect on December 2, 2020, with little consultation, is also not clear about what constitutes prohibited content.

Read the full statement below:

 

May 28, 2021

Dear H.E. Johnny G. Plate,
Minister of Communication and Information Technology
Ministry of Communication and Information Technology, Indonesia

We, the undersigned, urge you to repeal Ministerial Regulation 5/2020 (MR5), which is deeply problematic: it grants government authorities overly broad powers to regulate online content, access user data, and penalise companies that fail to comply.

MR5 governs all private “electronic systems operators” that are accessible in Indonesia, broadly defined to include social media and other content-sharing platforms, digital marketplaces, search engines, financial services, data processing services, and communications services providing messaging, video calls, or games. This new regulation will affect national and regional digital services and platforms, as well as multinational companies like Google, Facebook, Twitter, and TikTok.

These companies are required to “ensure” that their platform does not contain or facilitate the distribution of “prohibited content”, which implies that they have an obligation to monitor content. Failure to do so can lead to blocking of the entire platform. This requirement for companies to proactively monitor or filter content is both inconsistent with the right to privacy and likely to amount to prepublication censorship.

The regulation’s definition of prohibited content is extremely broad, including not only content in violation of Indonesia’s already overly broad laws restricting speech, but also any material “causing public unrest or public disorder” or information on how to provide access to, or actually providing access to, prohibited material. The latter includes Virtual Private Networks (VPNs), which allow a user to access blocked content and are routinely used by businesses and individuals to ensure privacy for lawful activities.

For “urgent” requests, MR5 requires the company to take down content within four hours. For all other prohibited content, they must do so within 24 hours of being notified by the Ministry. If they fail to do so, regulators can block the service or, in the case of service providers that facilitate user-generated content, impose substantial fines.

MR5 obliges every “Private Electronic System Operator” (Private ESO) to register and obtain an ID certificate issued by the Ministry before people in Indonesia start accessing its services or content.

Registration was originally required by May 24, 2021. However, based on a press conference held by Samuel Pangerapan, Director General of APTIKA (Directorate General of Applications and Informatics) at the Ministry, the deadline was later postponed by six months, until the Single Sign-On (SSO) system is ready to be implemented.

Under MR5, Kominfo will sanction non-registrants by blocking their services. Private ESOs that decide to register must provide information granting access to their “system” and data to ensure effectiveness in the “monitoring and law enforcement process”. If a registered Private ESO disobeys the MR5 requirements, for example by failing to provide “direct access” to its systems (Article 7 (c)), it can be punished in various ways, ranging from a first warning, to temporary blocking, to full blocking and a final revocation of its registration.

Based on our analysis, MR5 not only fails to comply with legal standards, theories, and principles, but also fails to uphold freedom of expression and other human rights.

The substance of MR5 regulates digital rights, including restrictions on them. With respect to the right to privacy, it is clear that MR5 exceeds the limits set out in Law 12/2011, which confines such regulations to the framework of “administering certain functions in the government”. MR5 therefore has the potential to violate freedom of expression and other human rights.

The provisions in MR5 are potentially contrary to Article 12 of the Universal Declaration of Human Rights (UDHR) and Article 17 of the International Covenant on Civil and Political Rights (ICCPR), especially the provisions enabling authorities to obtain personal data from Private ESOs. These concerns are compounded by the absence of independent supervision in obtaining access to personal data, and the fact that in practice, personal data is often misused, especially by law enforcement officials.

The three-part test for permissible restrictions (legality, legitimate aim, and necessity and proportionality) is not strictly built into MR5’s legal mechanism, so in practice this arrangement opens up space for violations of human rights, particularly the right to privacy.

In MR5, the term “Access Termination”, interpreted as meaning both blocking access to the internet and takedown of an account or a social media post, is used 65 times. This has the potential to limit rights and freedoms, and is very likely to interfere with the interests of Private ESOs. Further, the standard of limitation for the termination of access to the internet is not clearly stipulated within MR5, leaving the powers to terminate access open to abuse and disproportionate application. The failure to include an adequate complaints mechanism further compounds concerns that termination of access will be utilised by authorities arbitrarily and excessively.

The term “prohibited” in Article 9 paragraphs (3) and (4) has an extremely wide scope, and its interpretation opens up space for dispute, especially where State institutions or law enforcement officials have a conflict of interest. For example: what is meant by “public disturbance”, what standard or measure applies, who has the authority to determine it, and what happens if the public does not consider the material to be “disturbing the society”?

With regard to Chapter IV, Article 14, regarding requests for termination of access, it is necessary to consider the restriction standards stipulated in Article 19 paragraph (3) of the ICCPR, including considerations of the Human Rights Committee’s General Comment No. 34.

MR5 requires Private ESOs, including social media platforms and other online service providers, to submit to domestic jurisdiction over both content and the use of content in daily practice. This legal framework weakens the protections offered by social media platforms, applications, and other online service providers, in particular by forcing them to accept domestic jurisdiction over user data, content, policies, and practices. Such a legal framework becomes a repressive instrument that contradicts, and may violate, human rights.

We call on you to immediately repeal MR5.

Regards,

Access Now (International)

Amnesty International Indonesia (Indonesia)

Alliance of Independent Journalists (Indonesia)

ARTICLE 19

Digital Reach (Thailand)

Electronic Frontier Foundation (International)

EngageMedia (Australia)

ELSAM (Indonesia)

Free Expression Myanmar (Myanmar)

Foundation for Media Alternatives (Philippines)

Greenpeace Indonesia (Indonesia)

Human Rights Watch (International)

Indonesia Corruption Watch (Indonesia)

Indonesia Legal Aid Foundation (Indonesia)

Institute for Criminal Justice Reform (Indonesia)

Komite Perlindungan Jurnalis dan Kebebasan Berekspresi (Indonesia)

LBH Jakarta (Indonesia)

LBH Pers Jakarta (Indonesia)

Manushya Foundation (Thailand)

Open Net Association (South Korea)

Oxen Privacy Tech Foundation (OPTF) (Australia)

Perkumpulan Lintas Feminis Jakarta (Indonesia)

Southeast Asia Freedom of Expression Network (SAFEnet) (Indonesia)

TAPOL (United Kingdom)

Unit Kajian Gender dan Seksualitas LPPSP FISIP UI (Indonesia)

Documenting During Internet Shutdowns: A WITNESS Guide in English and Indonesian
19 October 2020 – https://coconet.social/2020/guide-internet-shutdown-witness/

Documenting human rights violations is as important as ever during an internet shutdown. WITNESS Asia writes this guide in English and Indonesian on how activists can safely capture and preserve their videos during an internet shutdown, and even share them offline.

In January 2020, WITNESS published on its blog an English guide to documenting during internet shutdowns. Almost a year later, all five parts of the guide were made available in Bahasa Indonesia.

Coconet.social is republishing the first article of the series in both English and Bahasa Indonesia. Arul Prakkash, WITNESS Senior Manager of Programs for Asia and the Pacific and a member of the Coconet community, contributed to the original guide and its translation into Bahasa Indonesia.

Click here to read the article in Indonesian.

Documenting During Internet Shutdowns

In June 2019, as human rights abuses and a humanitarian crisis were continuing in Myanmar, the country’s Ministry of Transport and Communication directed telecom companies to shut down their mobile internet service in parts of Rakhine State and neighbouring Chin State. Citing “disturbances of the peace” and “illegal activities,” the Myanmar government claims to have enacted the shutdown “for the benefit of the people.” In reality, the blackout cut over a million people off from access to essential information and communication and disrupted humanitarian efforts. As Matthew Smith from Fortify Rights has stated, “This shutdown is happening in a context of ongoing genocide against Rohingya and war crimes against Rakhine, and even if it were intended to target militants, it’s egregiously disproportionate.”

The shutdown was partially lifted in five of the townships in September 2019, but is ongoing. During the same month, in neighbouring Bangladesh where many Rohingya have fled, authorities ordered mobile phone operators to block 3G and 4G services in Rohingya refugee camps and to stop selling SIM cards to Rohingya. As we enter 2020, four townships in Rakhine continue to be cut off from the world, and Bangladesh continues to limit service in the refugee camps.

Documenting During Internet Shutdowns

“Internet shutdowns and human rights violations go hand in hand.”

Berhan Taye, AccessNow

Globally, internet shutdowns are on the rise. According to AccessNow’s #KeepItOn campaign, there were 128 intentional shutdowns between January and July 2019, compared to 196 in all of 2018, up sharply from 106 in 2017 and 75 in 2016. Around the world, governments, with the cooperation of telecom companies, are increasingly turning to internet shutdowns as a strategy to repress communities, prevent mobilization, and stop information about human rights violations from being documented and shared.

Shutdowns can take various forms, including platform-specific blockages that target popular apps and sites, mobile data shutdowns, bandwidth throttling, or total internet blackouts. All of these types of shutdowns are intended to disrupt the ability to communicate information and expose violations in real-time. They often occur during protests, elections, and periods of political instability, and are often accompanied by heightened state repression, military offensives, and violence. While governments may try to justify shutdowns in the name of “public safety” or other reasons, shutdowns clearly take place at moments when repressive states fear losing tenuous control over their people, information, or political narrative. Shutdowns violate human rights, severely disrupt people’s lives and livelihoods, and also have a global economic impact.

Types of internet shutdowns. Photo from WITNESS, used with permission.
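To make the distinction between these forms more concrete, here is a minimal, hypothetical sketch (our illustration, not part of the WITNESS guide) that probes a handful of hostnames over TCP to tell a platform-specific block apart from a total blackout. The hostname lists and thresholds are assumptions for illustration only, and a crude probe like this cannot detect bandwidth throttling or explain why a connection failed.

```python
import socket

# Illustrative probe targets: popular platforms plus "control" hosts that are
# rarely blocked. These hostnames are assumptions, not a recommended list.
PLATFORMS = ["twitter.com", "facebook.com", "www.youtube.com"]
CONTROLS = ["example.com", "www.wikipedia.org"]

def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def classify() -> str:
    platforms_up = sum(reachable(h) for h in PLATFORMS)
    controls_up = sum(reachable(h) for h in CONTROLS)
    if platforms_up == 0 and controls_up == 0:
        return "nothing reachable: possible total blackout (or this device is simply offline)"
    if platforms_up == 0:
        return "control hosts reachable but platforms are not: possible platform-specific blocking"
    return "platforms reachable: no blocking detected by this crude probe"

if __name__ == "__main__":
    print(classify())
```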

Documenting human rights violations is as important as ever during an internet shutdown. Even if information cannot be shared in the moment, documentation can be a way to preserve voices that authorities are trying to silence and to secure evidence of abuses that can be used to demand accountability later on. Of course, the repressive context and the technological impediments of an internet shutdown make documenting violations—and maintaining that documentation securely—much more challenging and risky. How can activists capture and preserve their videos during a shutdown, share them offline, and do all of this in safer ways?

This series

Through our work with activists who have experienced internet shutdowns, we have learned some useful tips and approaches to capturing and preserving video documentation during internet shutdowns that we are sharing in this series. We wrote them with Android devices in mind, but the tips can be applied to iPhones as well. Some of the strategies require planning (and often, internet access), so it’s a good idea to review them and implement any steps before you are in a situation where you do not have internet and you need to document. Save a copy of any of the tutorials so you can refer to them or share them during a shutdown. And finally, start practising the techniques and methods in your everyday work so that they become second-nature before you’re in a crisis.

Photo from original guide, used with permission.

One final note: While these tips can help you continue documenting in the face of a shutdown, we want to emphasize that the ultimate solution must be to restore internet access and successfully defend people’s right to record, and freedom of expression, information, and assembly. Fortunately, there is a global movement led by organizations like NetBlocks, AccessNow, and many others who are actively monitoring and sharing information about shutdowns. Advocates globally are engaging in strategic litigation against shutdowns. We stand in solidarity with their work to uphold human rights.

Cara Pendokumentasian Selama Pemadaman Internet

Pada bulan Juni 2019, saat pelanggaran HAM dan krisis kemanusiaan terus berlangsung di Myanmar, Menteri Perhubungan dan Komunikasi negara tersebut memerintahkan perusahaan telekomunikasi untuk memadamkan layanan internet seluler di wilayah Rakhine dan tetangganya Chin. Pemerintah Myanmar mengklaim melakukan pemadaman (shutdown) “untuk kepentingan umum”, menyebutnya sebagai “gangguan pada perdamaian” dan “aktivitas ilegal”. Pada kenyataannya, pemadaman internet terhadap sejuta orang itu memotong akses ke informasi dan komunikasi mendasar  serta mengganggu upaya kemanusiaan. Seperti pernyataan yang disampaikan Matthew Smith dari Fortify Rights, “Shutdown ini terjadi dalam konteks berlangsungnya genosida atas etnis Rohingya dan kejahatan perang terhadap Rakhine, dan bahkan jika ini ditujukan untuk menarget militan, tindakan ini jelas-jelas tidak sesuai proporsi.”

Pemadaman ini dipulihkan sebagian di 5 kota kecil pada September 2019, tapi masih terus berlangsung. Di bulan yang sama, di negeri tetangga Bangladesh di mana banyak suku Rohingya mengungsi, pemangku kekuasaan memerintahkan operator ponsel untuk memblokir layanan 3G dan 4G di kamp pengungsian Rohingya dan berhenti menjual kartu SIM kepada suku Rohingya. Memasuki tahun 2020, 4 kota kecil di Rakhine terus mengalami pemotongan akses dari dunia, dan Bangladesh terus membatasi layanan servis di kamp-kamp pengungsian.

Pendokumentasian Selama Pemadaman Internet

“Pemadaman internet dan pelanggaran hak asasi manusia berjalan beriringan.”

Berhan Taye, AccessNow

Secara global, pemadaman internet terus meningkat. Berdasarkan kampanye #KeepItOn AccessNow, ada 128 pemadaman yang disengaja selama bulan Januari-Juli 2019, dibandingkan dengan total 196 pada 2018, dan meningkat tajam dari tahun 2017 sebanyak 106 pemadaman, dan 75 pada tahun 2016. Di seluruh dunia, pemerintah bersama perusahaan telekomunikasi, melakukan pemadaman internet sebagai strategi untuk menekan masyarakat, mencegah mobilisasi, serta menghentikan penyebaran dan pendokumentasian informasi terkait pelanggaran hak asasi manusia.

Pemadaman internet bisa dilakukan dalam berbagai bentuk, termasuk pemblokiran terhadap platform spesifik yang menargetkan aplikasi dan situs populer, pemadaman data seluler, pembatasan bandwidth, atau pemadaman total internet. Semua jenis shutdown ini bertujuan untuk mengganggu penyampaian informasi dan pengungkapan berbagai pelanggaran secara real-time. Hal ini sering terjadi selama unjuk rasa, pemilihan umum, dan periode ketidakstabilan politik, serta seringkali disertai dengan meningkatnya penindasan oleh negara, serangan militer dan kekerasan. Walaupun pemerintah mencoba untuk membenarkan shutdown atas nama keamanan publik atau alasan lainnya, shutdown jelas dilakukan pada saat negara takut kehilangan kendali atas masyarakat, informasi, atau narasi politik. Shutdown melanggar hak asasi manusia, sangat mengganggu kehidupan dan mata pencaharian, serta berdampak pada ekonomi global.

Types of internet shutdowns. Photo from WITNESS, used with permission.

Mendokumentasikan pelanggaran HAM sama pentingnya selama pemadaman internet. Bahkan jika informasi tidak dapat disebarkan pada saat itu, dokumentasi dapat menjadi cara untuk menjaga suara-suara yang berusaha dibungkam pihak berwenang, serta untuk mengamankan bukti pelanggaran yang dapat digunakan untuk menuntut pertanggungjawaban di kemudian hari. Proses pendokumentasian pelanggaran dan upaya menjaga dokumentasi ini tentu saja menjadi lebih menantang dan berisiko karena represi dan hambatan teknologi selama internet shutdown. Bagaimana para aktivis bisa mengambil dan menyimpan video mereka selama shutdown, membagikannya secara offline dan melakukannya dengan lebih aman?

Dalam Seri Ini

Melalui kerja sama dengan para aktivis yang telah mengalami pemadaman internet, kami mempelajari beberapa tips dan pendekatan yang berguna untuk mengambil dan menyimpan dokumentasi video selama internet shutdown yang akan dibagikan melalui seri ini. Kami menulis tips ini untuk gawai Android, tetapi tips tersebut juga bisa diterapkan untuk iPhone. Beberapa strategi membutuhkan perencanaan terlebih dulu (dan seringkali, akses internet). Jadi sebaiknya baca, coba, dan terapkan dulu sebelum berada dalam situasi di mana sulit mendapatkan akses internet padahal harus melakukan pendokumentasian. Simpan salinan dari setiap tutorial sehingga bisa dirujuk dan dibagikan selama shutdown. Terakhir, mulailah mempraktikkan teknik dan metode berikut dalam kegiatan sehari-hari, sehingga menjadi kebiasaan sebelum berada dalam krisis.

Photo from original guide, used with permission.

Catatan akhir: Meskipun tips tersebut dapat membantu pendokumentasian selama pemadaman internet, kami menekankan bahwa solusi akhir adalah harus memulihkan akses internet dan berhasil membela hak masyarakat untuk merekam, serta kebebasan berekspresi, informasi dan berkumpul. Untungnya, ada gerakan global yang dipimpin oleh organisasi seperti NetBlocks, AccessNow, dan lainnya yang secara aktif memantau dan berbagi informasi terkait shutdown. Para advokat secara global juga terlibat dalam litigasi strategis terhadap shutdown. Kami berdiri dalam solidaritas dengan kerja-kerja mereka untuk menegakkan hak asasi manusia.

ABOUT THE AUTHORS

WITNESS Asia is the Asia-Pacific branch of WITNESS, an international organisation that supports people using video in their fight for human rights. Access more guides on the WITNESS blog.

บทความที่เกี่ยวกับ AI ที่มีผลต่อประเด็นสิทธิมนุษยชนในภูมิภาคเอเชียตะวันออกเฉียงใต้ แปลเป็นภาษาไทยแล้ว
18 August 2020 – https://coconet.social/2020/artificial-intelligence-thai-translation-th/

ด้วยประโยชน์ของการแปลงานในประเด็นที่เกี่ยวข้องกับเทคโนโลยีเช่นนี้ย่อมช่วยเผยแพร่ความรู้แก่ผู้ที่สนใจในภูมิภาคของเรา ซึ่งมีความหลากหลายทั้งทางภาษาและวัฒนธรรม จึงได้เลือกแปลบทความทั้งสามชิ้นของ Jun-E Tan ที่เกี่ยวกับสถานการณ์ของปัญญาประดิษฐ์ (Artificial Intelligence: AI) ในภูมิภาคเอเชียตะวันออกเฉียงใต้เป็นภาษาไทย โดยธีรดา ณ จัตุรัส ผู้ที่ยินดีอาสาสมัครมาช่วยแปลบทความในซีรี่ย์ที่เกี่ยวกับ AI ซึ่งปัจจุบันธีรดาทำงานเป็นที่ปรึกษาทางด้านการวางแผนนโยบายทางด้านการศึกษาของ UNESCO Paris หลังจากจบการศึกษาทางด้านการสื่อสารดิจิทัล ด้วยปริญญาปรัชญา (MPhil in Digital Communications) จากมหาวิทยาลัย University of Westminster ในลอนดอน

Read this article in English / อ่านบทความนี้ใน ภาษาอังกฤษ

บ่อยครั้งการแปลภาษาอังกฤษจากแหล่งความรู้ต่างๆ มักไม่ง่ายนักและไม่สามารถแปลได้แบบตรงๆ โดยเฉพาะเมื่อศัพท์เทคนิคเหล่านั้นเกี่ยวกับทางด้านเทคโนโลยีดิจิทัล ความหมายของศัพท์นั้นๆ จึงแปลแบบตรงตัวตามที่ปรากฏในภาษาอื่นๆ ไม่ได้ อย่างไรก็ดี ได้ปรากฏว่ามีความต้องการงานแปลงานทางเทคนิคเช่นนี้เพื่อเป็นประโยชน์ในการเผยแพร่ความรู้แก่ผู้ที่สนใจในภูมิภาคของเราซึ่งมีความหลากหลายทั้งภาษาและวัฒนธรรม

จากที่ได้มีการแปลบทความ “วิธีรักษาความปลอดภัยในโลกออนไลน์ 101” ในหลายๆ ภาษา รวมทั้งการแปลเป็นภาษาจีนมาแล้วในอดีต ซึ่งจริงๆแล้วการแปลบทความนี้เป็นผลงานของมูลนิธิ Open Culture Foundation (OCF) ในประเทศไต้หวันซึ่งเป็นส่วนหนี่งของชุมชน Coconet ของเรา

หลังจากนั้นเราได้เลือกแปลบทความทั้งสามชิ้นที่เกี่ยวกับสถานการณ์ของ AI ในภูมิภาคเอเชียตะวันออกเฉียงใต้ที่เขียนโดย Jun-E Tan และแปลเป็นภาษาไทย โดย ธีรดา ณ จัตุรัส ผู้ที่ยินดีอาสาสมัครมาช่วยแปลบทความในซีรี่ย์ที่เกี่ยวกับ AI ซึ่งปัจจุบันธีรดาทำงานเป็นที่ปรึกษาทางด้านการวางแผนนโยบายทางด้านการศึกษาของ UNESCO Paris หลังจากจบการศึกษาทางด้านการสื่อสารดิจิทัล ด้วยปริญญาปรัชญา (MPhil in Digital Communications) จากมหาวิทยาลัย University of Westminster ในลอนดอน

ข้อความด้านล่างนี้เป็นความเห็นจากธีรดา เกี่ยวกับประสบการณ์ในการแปลจากไทยเป็นอังกฤษของบทความ AI ชุดนี้

“จากประสบการณ์ในการทำวิจัยทางด้านเศรษฐศาสตร์การเมืองในการสื่อสาร ทำให้ได้เห็นถึงความคล้ายคลึงหลายประการที่ปรากฏในบทความ AI และในงานวิทยานิพนธ์ของตนเอง ด้วยเหตุนี้ตนเองจึงมีความคุ้นเคยกับคำศัพท์เทคนิคเหล่านี้อยู่แล้วจึงทำให้แปลบทความได้เร็วขึ้น แถมยังช่วยให้ตนเองได้เรียนรู้ศัพท์เทคนิคใหม่ๆอีกด้วย ทั้งนี้บทความ AI ชุดนี้ยังได้สะท้อนถึงผลการวิจัยของงานวิทยานิพนธ์ของตนเองในหลายๆ ด้านที่เกี่ยวกับการเพิ่มการควบคุมอินเตอร์เน็ตที่มาจากความร่วมมือของรัฐเผด็จการและบริษัททางเทคโนโลยีต่างๆ ซึ่งตนเห็นว่ามีความจำเป็นในการให้ความสำคัญเรื่องความปลอดภัยทางไซเบอร์และความเป็นส่วนตัวบนโลกออนไลน์ผ่านทางหลักสูตรของโรงเรียน สถาบันการศึกษา และการสร้างทางเลือกอื่นๆ ในการสื่อสารที่ไม่ใช่แค่จากแพลตฟอร์มโซเชียลมีเดียกระแสหลัก เพื่อการรักษาความเป็นนิรนามในการปกปิดอัตลักษณ์ของคนใช้อินเตอร์เน็ตเอง พร้อมทั้งยังช่วยปกป้องข้อมูลส่วนตัวของผู้ใช้เอง”

“ซึ่งการที่ได้อาสาแปลบทความครั้งนี้ช่วยให้ตนเองได้ทำความเข้าใจประเด็นใหม่ๆเกี่ยวกับ AI ที่มีผลต่อการพัฒนาทางสังคม และยังทำให้ตนเองได้รับรู้เกี่ยวกับการพัฒนาใหม่ๆที่เกี่ยวกับการประยุกต์ใช้ AI ในภูมิภาคเอเชียตะวันออกเฉียงใต้ ซึ่งมีผลกระทบต่อสิทธิทางพลเมือง สิทธิทางการเมือง และสิทธิทางสังคมและวัฒนธรรมต่างๆของผู้คน โดยเฉพาะผู้ที่เห็นต่างจากรัฐ ผู้หญิง และเยาวชน นั่นทำให้ได้ไอเดียใหม่ๆที่ช่วยพัฒนาทั้งทางวิชาการและการทำงานของตนเองในอนาคต”

ธีรดา ยังได้อธิบายเพิ่มเติมถึงความจำเป็นในการเพิ่มแหล่งความรู้ที่เกี่ยวกับ AI และ Machine Learning ในภาษาอื่นๆ ที่นอกเหนือไปจากภาษาอังกฤษเพื่อการเผยแพร่ความรู้ในภูมิภาคนี้ “ไม่ว่าจะเป็นผู้ที่คิดต่างทางการเมือง ผู้ลี้ภัย บุคคลที่มีความต้องการช่วยเหลืออย่างเป็นพิเศษ เด็กผู้หญิง ผู้หญิง และชนกลุ่มน้อยต่างๆ ที่ขาดความชำนาญทางภาษาอังกฤษควรที่จะได้เข้าถึงข้อมูลที่เกี่ยวกับความสำคัญในการรักษาความเป็นส่วนตัวบนโลกออนไลน์ และสิทธิทางดิจิทัล (digital rights) ในภาษาของตน ซึ่งการรู้สิทธิต่างๆ เหล่านี้ก็จะช่วยให้พวกเราสามารถลุกขึ้นปกป้องสิทธิของตนเองจากการละเมิดสิทธิโดยรัฐ และบริษัทเทคโนโลยีทั้งหลาย”

ขอบคุณธีรดาที่อาสาช่วยแปลงานเป็นภาษาไทย และสำหรับผู้ที่สนใจเป็นอาสาสมัครแปลบทความเป็นภาษาต่างๆ สามารถติดต่อทีมงานเรา เพื่อสอบถามข้อมูลเพิ่มเติมได้

Fighting the COVID-19 ‘Infodemic’ in the Asia-Pacific
1 April 2020 – https://coconet.social/2020/covid-infodemic-asia-pacific/

Photo by 🇨🇭 Claudio Schwarz | @purzlbaum on Unsplash. Used under an Unsplash license.

While governments and health workers worldwide are focused on combatting the COVID-19 pandemic, they are also busy fighting another related pandemic that cuts across all sectors of society: a massive “infodemic” equally as wide-reaching and harmful.

The World Health Organisation (WHO) describes this infodemic as “an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it”.

Verified and timely information is more important than ever – but is also more challenging to come by. The global frontliners in this fight against mis- and disinformation on the coronavirus include:

Similar efforts are taking place in the Asia-Pacific, where region- and country-specific groups are relying on constant, collective fact-checking to combat the infodemic. We want to highlight some of them in this post.

In the Philippines, for example, journalists are sharing their best practices on how to accurately report on the pandemic. Internews also funded a 3-part video series on how Philippine fact-checking organisation VERA Files is combatting the COVID-19 infodemic. You can watch the three short videos below or on Engagemedia.org.

  • The Coronavirus infodemic flooded our screens as the epidemic amassed victims, spreading fear and misunderstanding among people all over the world.
  • What sort of disinformation contributed to the COVID-19 infodemic? VERA Files Fact Check debunks inaccurate claims about bats and a false report about an alleged positive coronavirus case in Cebu in this video.
  • Do face masks work? VERA Files Fact Check explains how to protect yourself from COVID-19 in this video. This is the last of VERA's three-part video series.

A similar Internews project is present in India, where partners are continuously conducting fact checks on rumours related to COVID-19.

Image by Gerd Altmann from Pixabay. Used under a Pixabay license.

In Malaysia, there is ample misinformation being shared online – such as one viral video claiming that coronavirus would make people behave like zombies. Malaysian media organisation The Star regularly debunks such false information on the pandemic.

In Indonesia, CekFakta is also at the forefront of debunking false information on the virus, including myths that drinking garlic boiled in water can cure you. The collective fact-checking and verification project is a collaboration between the Indonesian Cyber Media Association, the Indonesian Anti-Slander Society, and the Alliance of Independent Journalists.

In Taiwan, the Taiwan Fact-Check Center has a dedicated project for COVID-related mis- and disinformation.

In Myanmar, the Ministry of Health and Sports (MOHS) is providing the latest information on COVID-19 on its website to combat countless fake news stories and hoaxes spreading in Myanmar. The MOHS is also raising public awareness through videos on how the medical staff and the general public can stay safe.

The BBC is also teaching citizens in Myanmar how to fight the infodemic through Thangyat or traditional folk music. It is also supporting similar efforts in Indonesia, India, Cambodia, and Nepal.

As this infodemic – arguably the first true social media infodemic of our time – continues with no clear end in sight, more and more initiatives will surely start and grow. It is up to us to stay informed and do our part to sustain these initiatives, or we will ultimately lose the broader fight against disinformation.

About the Author

Sara Pacia is the Communications and Engagement Coordinator of EngageMedia. A journalist by training and multimedia storyteller at heart, she is passionate about utilising and appropriating today’s digital technologies for the empowerment of the public and the improvement of media and data literacy.

How Health Security and Individual Privacy Can Go Hand in Hand
30 March 2020 – https://coconet.social/2020/covid-health-security-privacy-thailand/

In Thailand, a government website updates citizens on the number of cases in the country.

It has been two months since the World Health Organization (WHO) declared the COVID-19 pandemic a global health emergency. Many governments have resorted to digital measures to supplement response efforts. Privacy International has been tracking such digital surveillance, which has so far included the following measures:

  • Real-time tracking of people who may be infected with COVID-19 through their phones’ location data, a measure in use in many countries, from Asia to Europe
  • Collection of data from internet-connected devices (the Internet of Things, or IoT), CCTV cameras, and drones
  • Use of AI and facial recognition technology to quickly analyse and identify infected individuals
  • Mandatory check-in applications on smartphones, including requiring people in quarantine to regularly report in with selfies, as in Poland

More importantly, COVID-19 has raised some serious questions: Can privacy and public health security go hand in hand? Is it enough to use safeguards such as transparency and the use of intrusive technology only when absolutely necessary?

We’ll look at how some countries are answering these questions and see if Thailand can do the same.

What digital solutions have other countries adopted?

Technology has become a crucial part of the COVID-19 response. We are also seeing a collaboration between the public and private sectors to develop digital solutions to the public health emergency.

In South Korea, Taiwan, and Singapore, efficient government responses to COVID-19 all depended on technology, alongside other well-planned measures, to deliver swift assistance to their citizens. On top of providing adequate COVID-19 testing kits, these measures succeeded because citizens cooperated. But to reach that point, a government must first cultivate trust with its citizenry.

We will balance the value of protecting individual human rights and privacy and the value of upholding public interest in preventing mass infections

Jung Eun-kyeong, director of the Korea Center for Disease Control and Prevention

Here, transparency in communication is vital.

South Korea, in particular, had to learn that lesson the hard way. In January 2020, the government began posting the detailed location histories, including even personal information, of each person who tested positive for COVID-19. But internet users quickly exploited the disclosed patient data, publicly identifying and hounding these patients. The social stigma attached to the virus prompted the government to acknowledge that this measure, even if well-meaning, was an invasion of privacy that could also discourage citizens from getting tested.

To remedy this, health officials announced this month that they would refine their data-sharing guidelines to minimize patient risk. “We will balance the value of protecting individual human rights and privacy and the value of upholding public interest in preventing mass infections,” said Jung Eun-kyeong, director of the Korea Center for Disease Control and Prevention.

Privacy and health security: Can’t we have both?

Regardless of whether you trust your government with your data, there still remains the challenge of protecting the public interest while protecting individual privacy during a public health crisis. Whether it is being collected or published, personal health information is sensitive data. But in a crisis, ensuring public health is also essential.

But this in no way means that the government or the press have to reveal detailed personal data that is irrelevant to stopping the virus’ spread. Governments should uphold the principle of data minimisation, identify clear time frames for data collection, and ensure that data is collected only to control the spread of the coronavirus. Insensitive collection and publishing of personal data might lead to stigma against those with COVID-19, and even against patients under investigation.
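To make data minimisation concrete, here is a minimal, hypothetical sketch (our illustration, not drawn from any of the systems discussed in this article) of what a privacy-conscious contact-tracing backend could do with an incoming report: drop fields that are irrelevant to outbreak control, replace the direct identifier with a salted pseudonym, and attach an explicit deletion date. The field names and the 30-day retention window are assumptions for illustration only.

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical raw record a tracking app might collect.
raw_report = {
    "national_id": "1234567890123",        # direct identifier: not needed downstream
    "full_name": "Example Person",          # not needed for outbreak control
    "google_account": "user@example.com",   # irrelevant third-party account
    "phone_model": "Example Phone",         # irrelevant
    "district": "Bang Rak",                 # coarse location is enough for mapping
    "symptom_onset": "2020-03-28",
    "test_result": "positive",
}

RETENTION = timedelta(days=30)  # assumed retention window, to be set by policy

def minimise(report: dict, salt: bytes) -> dict:
    """Keep only what is needed to control the outbreak; pseudonymise the rest."""
    # A salted one-way hash lets health workers link follow-up reports to the
    # same case without storing the national ID itself.
    pseudonym = hashlib.sha256(salt + report["national_id"].encode()).hexdigest()[:16]
    return {
        "case_ref": pseudonym,
        "district": report["district"],            # coarse area, not GPS traces
        "symptom_onset": report["symptom_onset"],
        "test_result": report["test_result"],
        "delete_after": (datetime.now(timezone.utc) + RETENTION).isoformat(),
    }

print(minimise(raw_report, salt=b"rotate-this-salt-regularly"))
```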

One good recommendation to follow comes from the Electronic Frontier Foundation. In Thailand, WHO has come out with guidelines to prevent and deal with social stigmatization.

The bottom line is this: If the government cannot create trust and safety for the people, some citizens might conceal their infection to avoid discrimination and stigmatisation. This could lead to people avoiding screening and testing, which will then lead to more, not fewer, infections.

Under certain circumstances, rights to privacy may be compromised for the public good. But protecting individual privacy, too, is essential to public interest.

Public health surveillance can legitimately be carried out for the planning, implementation, and evaluation of public health practice. In 2017, the WHO published its guidelines on ethical issues in public health surveillance. Notably, the guidelines say little about digital monitoring and data collection, or about the use of individuals’ digital data for purposes other than public health.

Security or privacy: What do the Thais have?

One problem with privacy in the digital era is that, in many situations, we are not aware of being tracked in the first place. We do not know how much of our data has been, and is being, collected.

This is currently the case in Thailand. After the government passed a decree on March 26, 2020 declaring a state of emergency to combat COVID-19, applications and platforms are now being used to track coronavirus carriers and suspected patients under investigation.

The Thai government announced that it will be using three main platforms to track the spread of the infection:

  • AOT Airports is an application that has been used before, but is now being modified to screen and monitor all travellers from at-risk countries who arrive in Thailand, to track whether they are following the 14-day self-quarantine measure.
  • covid19.ddc.moph.go.th is a government website that regularly reports important information about the pandemic, such as the number of infected individuals.
  • @sabaideebot on LINE Official is a government chatbot for people who have taken a COVID-19 test. Once users have received their test result and filled in their health status, they will be connected to the platform.
The AOT Airport app, while handy for travellers, can also be used to track their whereabouts.

With these three platforms, there is a risk that too much personal data is being collected. AOT Airports, for example, asks for information that may be irrelevant to tracking COVID-19 cases, such as all third-party accounts from Google to Line. This data is also at risk of being shared with third parties outside the government, such as private tourism and trading agencies.

We must therefore pay very close attention: What personal data is being collected? Is it relevant to public health interest? Who has access to the data? Where do we draw the line, especially between what’s public and what’s private?

What will the world after coronavirus look like?

Under certain circumstances, rights to privacy may be compromised for the public good. But protecting individual privacy, too, is essential to public interest.

Once this pandemic ends, we will need to review the technology used by the Thai government under the emergency decree. We also need to monitor how other countries will review the technology they used during this crisis. We need only look to the United States as an example: it continues to use highly secretive mass surveillance systems introduced after the 9/11 terrorist attacks.

Yuval Noah Harari, author of the book “Sapiens: A Brief History of Humankind”, asks us to think about the world after the COVID-19 crisis. If we are not careful, the technology used to monitor citizen health will give legitimacy to a terrifying new surveillance system in the long run.

Even though we agree on the necessity of emergency measures to respond to the ongoing health crisis, such measures must be truly necessary and strictly temporary. They must not outlive the health crisis.

It is important to note that enforcement of severe measures is not the only way to make citizens follow the government’s instructions. Instead, creating trust and communicating transparently are the true keys to crisis management.

In the time of COVID-19, we have to change the question. Instead of asking whether health security and privacy can go hand in hand, we must ask: How can we prioritise both without sacrificing one for the other?

Otherwise, privacy and other human rights – both online and offline – will suffer in the long run.

“ความมั่นคง(ทางสุขภาพ)” และ “ความเป็นส่วนตัว” ไปด้วยกันได้ ?

covid19.ddc.moph.go.th

เป็นเวลาครบสองเดือนที่ทางองค์การอนามัยโลก (WHO) ประกาศให้การแพร่ระบาดของไวรัสโควิด-19เป็นภาวะฉุกเฉินระดับโลก วิกฤติครั้งนี้สะท้อนให้เห็นหลายแง่มุมของการรับมือของแต่ละรัฐบาลและการเลือกนำเทคโนโลยีดิจิทัลมาใช้ โดยองค์กร Privacy International ได้เก็บรวบรวมข้อมูลการใช้เทคโนโลยีติดตามสอดส่องของรัฐบาลทั่วโลกในช่วงการระบาดของไวรัสโควิด-19 ตัวอย่างเช่น

    • การติดตั้งระบบติดตามกลุ่มบุคคลที่อาจติดเชื้อโควิด-19 ผ่านข้อมูลตำแหน่งที่อยู่จากโทรศัพท์แบบเรียลไทม์ที่หลายประเทศกำลังใช้อยู่ ตั้งแต่กลุ่มประเทศในเอเชียไปจนถึงหลายประเทศในยุโรป
    • การเก็บข้อมูลการใช้งานอุปกรณ์ที่เชื่อมต่ออินเตอร์เน็ต หรือ Internet of Things (IoT), CCTV, โดรน
    • การใช้ AI และเทคโนโลยีการจดจำใบหน้าเพื่อวิเคราะห์และระบุกลุ่มผู้ติดเชื้อได้อย่างรวดเร็ว
    • การติดตั้งแอปพลิเคชันรายงานตัวบนสมาร์ทโฟน รวมไปถึงการบังคับให้ถ่ายรูปเซลฟี่รายงานตัวเป็นประจำในช่วงกักตัว เช่น ประเทศโปแลนด์

ข้อถกเถียงสำคัญของการใช้เทคโนโลยีสอดส่องเพื่อควบคุมการแพร่ระบาดของไวรัสในโลกยุคดิจิทัล นำมาสู่คำถามที่ตอบยาก ถ้าต้องเลือกระหว่าง “ความเป็นส่วนตัว” หรือ “ความมั่นคงทางสุขภาพ” แต่เป็นไปได้แค่ไหนถ้าทั้งคู่จะอยู่ร่วมกันในภาวะวิกฤตินี้ หลักการและเหตุผลใดเพียงพอที่จะทำให้เรารักษาทั้งประโยชน์สาธารณสุขส่วนรวม และสิทธิความเป็นส่วนตัว บทเรียนจากประเทศไหนพอจะทำให้ไทยเรียนรู้ได้บ้างในวิกฤติฉุกเฉินครั้งนี้

ทางออกวิกฤติที่สำเร็จของประเทศฝั่งเอเชีย

บทบาทของเทคโนโลยีกลายเป็นหนึ่งใน “ทางออกสำคัญ” เพื่อรับมือโรคระบาดครั้งนี้ของหลายรัฐบาลทั่วโลก และเรายังเห็นความร่วมมือระหว่าง “ภาครัฐ” และ “ภาคเอกชน” ที่ช่วยกันพัฒนาเครื่องมือและแพลตฟอร์มทางดิจิทัล “เฉพาะกิจ” เพื่อรับมือวิกฤติเฉพาะหน้าอย่างเร่งด่วน

เกาหลีใต้ ไต้หวัน และสิงคโปร์ กลายเป็นบทเรียนความสำเร็จที่ควบคุมการแพร่ระบาดไวรัสโควิด-19 ได้ดีและมีประสิทธิภาพกว่าหลายประเทศในขณะนี้ ต่างมีปัจจัยเทคโนโลยีอยู่เบื้องหลัง แต่หลายประเทศที่รับมือวิกฤติได้ดีนั้นยังมีการวางแผนรับมืออย่างรอบด้านและการบริหารจัดการช่วยเหลือประชาชนที่รวดเร็วและเด็ดขาด ทั้งความพร้อมของจำนวนชุดเครื่องตรวจหาเชื้อโควิด-19 การสื่อสารต่อสาธารณะด้วยข้อมูลอย่าง “โปร่งใส” และชี้แจงมาตรการอย่าง “ชัดเจน” ถือเป็นปัจจัยหลักช่วยให้ประชาชน “มีความไว้ใจ” และปฏิบัติตามมาตรการของรัฐบาล

บทเรียนของ “เกาหลีใต้” สะท้อนให้เห็นความสำคัญของทั้งการปกป้อง “ความเป็นส่วนตัว” และ “มาตรการควบคุมการแพร่ระบาดของไวรัส”

เราจะรักษาสมดุลระหว่างการรักษาสิทธิความเป็นส่วนตัวและการรักษาประโยชน์ส่วนรวมในการป้องกันการแพร่ระบาดของไวรัส

Jung Eun-kyeong, ผู้อำนวยการศูนย์ควบคุมและปกป้องโรคติดต่อของเกาหลีใต้กล่าว

ในเดือนมกราคมที่ผ่านมา หลังจากมีการใช้เทคโนโลยีติดตามกลุ่มผู้ติดเชื้อโควิด-19 และหน่วยงานทางการของเกาหลีใต้ได้เริ่มเปิดเผยข้อมูลของผู้ติดเชื้อไวรัสอย่างละเอียด ทั้งประวัติการเดินทาง เวลาออกจากที่ทำงาน ใส่หน้ากากป้องกันตอนขึ้นรถไฟหรือไม่ เปลี่ยนสถานีรถไฟที่ไหน ชื่อคาราโอเกะที่ไปและชื่อคลินิคที่ไปตรวจเชื้อไวรัส และเพียงไม่นานที่รัฐบาลออกข้อมูลรายละเอียดของผู้ติดเชื้อ สังคมออนไลน์เกาหลีใต้ก็ช่วยกันทำงานอย่างเร่งด่วน เพื่อระบุตัวตนและชื่อของผู้นั้น นำไปสู่การไล่ล่าหาตัวพวกเขาจากข้อมูลที่ถูกเปิดเผยในอินเตอร์เน็ต ด้วยความกังวลเรื่องความเป็นส่วนตัว นี้จึงทำให้ประชาชนบางส่วนไม่อยากเข้ารับการตรวจเชื้อไวรัส

เหตุการณ์ครั้งนี้ทำให้รัฐบาลเกาหลีใต้ประกาศแนวทางปฏิบัติในการแชร์ข้อมูลส่วนบุคคลและความโปร่งใสของการเก็บข้อมูลดิจิทัล เพื่อลดความเสี่ยงต่อผู้ติดเชื้อไวรัสและกลุ่มผู้เฝ้าระวัง “เราจะรักษาสมดุลระหว่างการรักษาสิทธิความเป็นส่วนตัวและการรักษาประโยชน์ส่วนรวมในการป้องกันการแพร่ระบาดของไวรัส” Jung Eun-kyeong ผู้อำนวยการศูนย์ควบคุมและปกป้องโรคติดต่อของเกาหลีใต้กล่าว เพราะการประกาศใช้มาตรการของรัฐบาลจะเกิดผลสำเร็จเมื่อประชาชนส่วนใหญ่ให้ความร่วมมือด้วย

ทั้ง “ความเป็นส่วนตัว” กับ “ความมั่นคงทางสุขภาพ” จะอยู่ร่วมกันได้ไหม

อีกหนึ่งความท้าทายในสถานการณ์เช่นนี้ ที่ต้องรักษาประโยชน์สาธารณสุขส่วนรวม ขณะเดียวกันต้องปกป้องความเป็นส่วนตัวของประชาชน เพราะข้อมูลของประชาชนเกี่ยวกับเรื่องสาธารณสุขเป็นข้อมูลที่มีความอ่อนไหวสูง ทั้งในการเก็บรักษาและการเปิดเผยต่อสาธารณะ แต่ในภาวะวิกฤติเช่นนี้ ทางเลือกเพื่อรักษาชีวิตและสุขภาพของประชาชนย่อมเป็นสิ่งสำคัญ การติดตามและบันทึกประวัติของบุคคลผู้ติดเชื้อจึงมีเหตุผลสมควรต่อมาตรการควบคุมโรคของหน่วยงานสาธารณสุข

แต่นั่นก็ไม่ได้หมายความว่า เจ้าหน้าที่รัฐหรือสื่อมวลชนจำเป็นต้องเปิดเผยข้อมูลส่วนบุคคลของผู้ติดเชื้อโควิด-19 อย่างละเอียดต่อสาธารณะหรือข้อมูลที่ไม่เกี่ยวกับการควบคุมไวรัส และไม่ควรเก็บข้อมูลแบบหว่านแห แต่รัฐควรยึดหลักการจัดเก็บเฉพาะข้อมูลที่จำเป็น (Data Minimization) กำหนดกรอบระยะเวลาชัดเจน มีความโปร่งใสในขั้นตอนและวัตถุประสงค์ที่จัดเก็บต้องเป็นไปเพื่อควบคุมแพร่ระบาดของไวรัสโควิดเท่านั้น เพราะความสะเพร่าในการเก็บข้อมูลส่วนบุคคลและการเปิดเผยต่อสาธารณะอาจนำไปสู่ปัญหาการตีตราทางสังคม (social stigma) ของผู้ติดเชื้อโควิดและกลุ่มเฝ้าระวัง

โดยทาง WHO ออกแนวทางปฏิบัติเพื่อป้องกันและแก้ปัญหาการตีตราทางสังคม และ The Electronic Frontier Foundation มีคำแนะนำสำหรับผู้กำหนดนโยบายในเรื่องการเก็บข้อมูลและติดตามทางดิจิทัลไว้ใน Protecting Civil Liberties During a Public Health Crisis

เพราะหากรัฐบาลไม่สามารถสร้างความเชื่อใจและความรู้สึกปลอดภัยให้ประชาชนได้ อาจทำให้ประชาชนเลือกปกปิดอาการเจ็บป่วยเพื่อหลีกเลี่ยงการเลือกปฏิบัติ หรือถูกรังเกียจ หรือถูกตีตราทางสังคม สภาพเช่นนี้จะบีบคั้นให้ประชาชนหลีกเลี่ยงการคัดกรอง การตรวจ และการกักตัว นั่นอาจทำให้การแพร่ระบาดของไวรัสมากขึ้น ไม่ใช่น้อยลง

แม้ว่าบางสถานการณ์ สิทธิความเป็นส่วนตัวจะถูกจำกัดเพื่อผลประโยชน์สาธารณะได้ แต่ขณะเดียวกันการรักษาความเป็นส่วนตัวของบุคคลก็ยังมีความสำคัญต่อปกป้องผลประโยชน์สาธารณะเช่นกัน

การเฝ้าระวังทางสาธารณสุขและโรคระบาดเป็นเรื่องกระทำได้ เพื่อช่วยในการวางแผนควบคุมทางสาธารณสุข โดยเมื่อปี 2017 ทางองค์การ WHO ได้ออก Guidelines on Ethical Issues in Public Health Surveillance เพื่อเป็นกรอบจริยธรรมในการปฏิบัติงาน แต่ในรายงานฉบับนี้ยังไม่ได้กล่าวมากนักถึงการเก็บข้อมูลทางดิจิทัลและความกังวลเรื่องการใช้ข้อมูลดิจิทัลของประชาชนเพื่อวัตถุประสงค์อื่นนอกเหนือจากเฝ้าระวังทางสาธารณสุข

คนไทยมีอะไร: ความมั่นคง(ทางสุขภาพ) และ ความเป็นส่วนตัว

หลายครั้งปัญหาความเป็นส่วนตัวในยุคดิจิทัลคือ เรามักไม่รู้ตัวว่า กำลังถูกติดตามสอดส่อง และอุปกรณ์ดิจิทัลที่เราใช้อยู่ได้เก็บข้อมูลอะไรเกี่ยวกับเราบ้าง นี่มักนำไปสู่การเก็บข้อมูลส่วนตัวเกินจำเป็น และผู้ใช้งานเองก็ไม่ทันระวัง

ย้อนกลับมาดูกรณีประเทศไทย หลังจากรัฐบาลประกาศใช้ พ.ร.ก.ฉุกเฉินฯ เมื่อวันที่ 26 มีนาคม 2563 และเริ่มมีมาตรการบังคับใช้ออกมาเรื่อยๆ หนึ่งในนั้นคือการประกาศใช้แอปพลิเคชันติดตามตัวสำหรับกลุ่มผู้ติดเชื้อ กลุ่มเสี่ยงที่ต้องเฝ้าระวัง และกลุ่มผู้เข้าข่ายต้องสงสัยในการติดเชื้อ ผ่านการใช้ 3 แพลตฟอร์มหลักเพื่อติดตามการแพร่ระบาดของโควิด-19

โดยแอปพลิเคชันแรก คือ AOT Airports เป็นแอปฯที่มีการใช้งานมาก่อนแล้วของการท่าอากาศยาน แต่มีนำมาปรับใช้เพื่อติดตามคนที่เดินทางมาจากประเทศกลุ่มเสี่ยงว่าอยู่ในที่พักอาศัยและกักตัว 14 วันตามข้อตกลงหรือไม่ โดยผู้ใช้ต้องกรอกข้อมูลส่วนตัว และแอปฯนี้สามารถติดตามประวัติการเดินทางของผู้ใช้ได้

ต่อมาเป็นแพลตฟอร์มที่ชื่อว่า covid19.ddc.moph.go.th เพื่อรายงานสถานการณ์ผู้ติดเชื้อและข้อมูลสำคัญที่ออกโดยทางการ มีการทำแผนผังการพบผู้ติดเชื้อและตรวจสอบพื้นที่เตือนระวังในประเทศไทย มีข้อมูลทั้งภาษาไทยและภาษาอังกฤษ

สุดท้ายคือการพัฒนาแชตบอต (Chatbot) ที่ชื่อว่า “สบายดีบอต” @sabaideebot ใน LINE Official สำหรับกลุ่มเสี่ยงและเข้าข่ายติดเชื้อเมื่อได้รับผลตรวจและเก็บบันทึกอาการสุขภาพ โดยเชื่อมต่อกับแพลตฟอร์มรายงานสถานการณ์

@sabaideebot LINE Official Chatbot

ทั้งสามแพลตฟอร์มจำเป็นต้องจัดเก็บข้อมูลส่วนตัวของผู้ใช้มหาศาลในช่วงเวลานี้ และต้องระมัดระวังในการดูแลความเสี่ยงเรื่องความเป็นส่วนตัว เช่น เมื่อดูรายละเอียดของแอปฯ AOT Airports อาจเสี่ยงที่จะเก็บข้อมูลส่วนบุคคลของผู้ใช้งานมากเกินความจำเป็น ข้อมูลบางประเภทที่ให้กรอกไม่เกี่ยวกับการติดตามโรคระบาด และมีความกังวลว่าข้อมูลส่วนตัวของผู้ใช้งานจะถูกนำไปแชร์กับบุคคลที่สาม เพราะผู้ใช้งานต้องให้ยอมรับเงื่อนไขใน “ข้อกำหนดการใช้งานและนโยบายความเป็นส่วนตัว” ของแอปฯ

หลังจากนี้เรายังคงต้องคอยติดตามว่า รัฐบาลจะเพิ่มการมาตรการบังคับใช้แอปพลิเคชันหรือเทคโนโลยีติดตามตัวภายใต้ พ.ร.ก.ฉุกเฉินฯ อย่างไร และช่วยกันตรวจสอบว่าเราถูกเก็บข้อมูลส่วนตัวใดบ้างที่ไม่เกี่ยวข้องกับผลประโยชน์ทางสาธารณสุขของประชาชน ใครบ้างที่เข้าถึงข้อมูลเหล่านั้น รวมไปถึงขอบเขตอำนาจและความสัมพันธ์ของรัฐกับภาคเอกชนหลังจากนี้

โลกหลังวิกฤติโควิด-19 จะเป็นเช่นไร

แม้ว่าบางสถานการณ์ สิทธิความเป็นส่วนตัวจะถูกจำกัดเพื่อผลประโยชน์สาธารณะได้ แต่ขณะเดียวกันการรักษาความเป็นส่วนตัวของบุคคลก็ยังมีความสำคัญต่อปกป้องผลประโยชน์สาธารณะเช่นกัน

ถ้าเมื่อเราผ่านพ้นวิกฤติโรคระบาดไปแล้ว เรายังจำเป็นต้องกลับมาทบทวนมาตรการทางเทคโนโลยีสอดส่องและการเก็บข้อมูลส่วนบุคคลที่เคยใช้ภายใต้สถานการณ์ฉุกเฉิน ทั้งของประเทศไทยและประเทศอื่นๆ เพราะบทเรียนจากเหตุการณ์ก่อการร้าย 9/11 ในสหรัฐฯได้ยกระดับมาตรการการสอดแนมโดยรัฐไปอย่างถาวร

ยูวาล โนอา ฮารารี ผู้เขียนหนังสือ “Sapiens: A Brief History of Humankind” ตีพิมพ์บทความที่ชื่อว่า “The world after coronavirus” ชวนให้คิดต่อว่าโลกหลังผ่านพ้นวิกฤติโควิด-19 ไปแล้วจะเป็นอย่างไร แนวทางการใช้เทคโนโลยีเพื่อสอดส่องสุขภาพของประชาชนในช่วงวิกฤติโควิด-19 หากไม่ระวังให้ดีอาจทำให้รัฐสร้างมาตรฐานความชอบธรรมต่อระบอบสอดส่องอย่างเบ็ดเสร็จของรัฐในระยะยาว

แม้จะเห็นด้วยถึงความจำเป็นของการใช้มาตรการเร่งด่วนเพื่อตอบสนองวิกฤติสุขภาพที่กำลังเกิดขึ้น แต่มาตรการฉุกเฉินต้องใช้อย่าง “มีขอบเขต” และอยู่แค่ “ชั่วคราว” ต้องไม่มีอยู่หลังผ่านพ้นวิกฤติโรคระบาดแล้ว

การบังคับใช้มาตรการที่มีบทลงโทษที่รุนแรงไม่ได้เป็นทางเดียวที่ทำให้ประชาชนปฏิบัติตามแนวทางของรัฐ แต่การสร้างความเชื่อมั่นในมาตรการที่โปร่งใส และสื่อสารข้อเท็จจริงกับประชาชนอย่างชัดเจนต่างหากที่ถือเป็นกุญแจสำคัญในการจัดการภาวะวิกฤติเช่นนี้

ในสถานการณ์วิกฤติทางสาธารณสุขที่ยังไม่ผ่านพ้นไป เราอาจต้องเปลี่ยนจากคำถามที่ว่า ถ้าต้องเลือกระหว่าง “ความมั่นคงทางสุขภาพ” หรือ “ความเป็นส่วนตัว” มาเป็นคำถามว่า เราจะทำอย่างไรให้ทั้งสองความสำคัญอยู่ร่วมกันได้

เพราะไม่เช่นนั้น ความเป็นส่วนตัว สิทธิ เสรีภาพของเรา ทั้งออนไลน์ และออฟไลน์ จะไม่คืนกลับมา แม้สถานการณ์จะคืนสู่ภาวะปกติแล้วก็ตาม

About the Author

Darika Bamrungchok is a Digital Rights Manager (Mekong) at EngageMedia, based in Bangkok. She leads a digital rights and digital safety program in Thailand, and is interested in technology and human rights under modern authoritarian regimes.

ดาริกา บำรุงโชค ปัจจุบันทำงานในตำแหน่งผู้จัดการโครงการสิทธิดิจิทัลขององค์กร EngageMedia ประจำประเทศไทย เธอดูแลโครงการเกี่ยวกับสิทธิดิจิทัลและความปลอดภัยทางดิจิทัลในประเทศไทยและกลุ่มประเทศลุ่มแม่น้ำโขง มีความสนใจเป็นพิเศษในประเด็นเกี่ยวกับเทคโนโลยีกับสิทธิมนุษยชน

Artificial Intelligence and Human Rights in Southeast Asia: An Overview
25 March 2020 – https://coconet.social/2020/ai-hr-sea-overview/

EngageMedia worked with Dr. Jun-E Tan, an independent researcher and digital rights expert, to produce a blog post, a three-part series on Artificial Intelligence (AI) and human rights in Southeast Asia, and a video wrapping up the discourse for the whole engagement.

The context of how artificial intelligence (AI) affects our rights as digital natives is worth unpacking, especially during political and public health crises, where online communication is a lifeline for many, and citizens are possibly being subjected to government surveillance and manipulation.

This is especially important when the crisis is of life-and-death importance, like the ongoing Covid-19 pandemic.

With this, EngageMedia worked with Dr. Jun-E Tan, an independent researcher and digital rights expert based in Kuala Lumpur, to unpack how AI plays out for the good — through improving public services and quality of life — and how it can be used by bad actors: to attack political, economic, and cultural rights of citizens, sometimes without them even knowing.

The collaboration resulted in several outputs: a blog post about how AI is tackled during Coconet II: Southeast Asia Digital Rights Camp, a three-part series on AI and human rights in Southeast Asia, and a video wrapping up the discourse for the whole engagement.

Image by Computerizer from Pixabay

AI and Human Rights Video

Produced by EngageMedia, the video at the top provides an overview of issues around AI and human rights in the Southeast Asian context, summarising the issues raised by this series on AI and Human Rights.

Featuring interviews with Dr. Jun-E Tan and Red Tani of EngageMedia, it was shown during the Myanmar Digital Rights Forum on February 28 and 29, 2020, an event attended by more than 350 participants from government, business, and civil society. You can read our blog about the forum here.

The video highlights how AI issues relate to the context of Southeast Asia, particularly recent political movements against authoritarian regimes, as well as other social issues that are susceptible to online hijacking, through manipulation of online narratives and surveillance of dissenters.

AI and Human Rights in Coconet II

Prior to the production of the AI and Human Rights video, discussions about AI and its human rights implications actually started at Coconet II.

After the weeklong camp, Dr. Tan wrote about the learnings from the event, from how she began the camp with the assessment that AI is a subject of concern for digital rights activists, one they want to understand at a much deeper level, to how Coconet II brought focus to AI and human rights.

“The sessions were very helpful for me, as a participant and a session organiser, to formulate and articulate the problems associated with machine learning from a digital rights perspective. They were also useful to form an initial community concerned about AI, continued through the AI channel in the Coconet Mattermost platform, which is one of its biggest channels with 48 members so far,” she said.

She also concluded that the conversations on AI and digital rights needed to extend beyond the digital rights camp, as the topic “will only increase in importance with time, as more people get connected digitally and more governments adopt these technologies.”

The importance of AI will only increase with time, as more people get connected digitally and more governments adopt these technologies.

- Dr. Jun-E Tan

This also served as the prelude for the three-part article series on AI and human rights in Southeast Asia, which she briefly mentioned in the blog as well.

You can read the full blog here.

Image by PIRO4D from Pixabay

3-part series about AI and Human Rights in Southeast Asia

Next up in the collaboration is the three-part article series on AI and its implications to civil, social, economic, cultural, and political rights in Southeast Asia.

This is a series of articles on the human rights implications of AI in the context of Southeast Asia

- Dr. Jun-E Tan

“This is a series of articles on the human rights implications of AI in the context of our region, targeted at raising awareness and engagement of civil society actors who work with marginalised communities, on rights advocacy, and on developmental issues, such as public health, poverty, and environmental causes,” Dr. Tan explained.

The series started its release towards the end of 2019, with an overview of the basic concepts and terms related to AI, as well as an introduction to the human rights context in AI and the Southeast Asia landscape.

It unpacked topics like digital authoritarianism through AI, underrepresentation in AI datasets, socioeconomic impacts of AI, and participation in AI governance through careful curation of recent related studies and publications.

You can read the first part of the series, published on the Coconet.social website, here. The article was picked up for syndication by a Philippine news website and shared across the networks of Coconet members.

The second part of the series then zoomed in on the impact of AI in the economic, social, and cultural rights (ESCR) of citizens from Southeast Asia.

It first presented the possible benefits of AI in the development sector. “AI, when used strategically and appropriately, can provide immense developmental benefits. Economic growth is a much-touted benefit, but possibilities of AI to improve lives extend much further,” wrote Dr. Tan. This includes benefits in education, healthcare, traffic, and food security.

The article then elaborates on the possible abuse of AI to interfere with economic, social, and cultural rights, especially how undue bias in AI data and systems can worsen and even optimise inequality.
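As a toy illustration of the kind of dataset bias the article describes (our sketch, not an example from the series), the snippet below builds a small synthetic dataset in which a scoring proxy systematically under-rates a minority group; a single decision threshold tuned for overall accuracy then ends up rejecting most qualified members of that group while making no such errors for the majority. All numbers are invented for illustration.

```python
# Deterministic toy data: (group, score, qualified). The score is a proxy that
# systematically under-rates group B, the kind of historical bias a dataset can encode.
def make_data():
    data = []
    for i in range(45):                                # group A, qualified
        data.append(("A", 0.60 + (i % 10) * 0.04, 1))
    for i in range(45):                                # group A, unqualified
        data.append(("A", 0.10 + (i % 10) * 0.04, 0))
    for i in range(5):                                 # group B, qualified but under-scored
        data.append(("B", 0.35 + i * 0.04, 1))
    for i in range(5):                                 # group B, unqualified
        data.append(("B", 0.05 + i * 0.04, 0))
    return data

def best_threshold(data):
    """Pick the single cut-off that maximises overall accuracy."""
    candidates = sorted({score for _, score, _ in data})
    return max(candidates,
               key=lambda t: sum((score >= t) == bool(label) for _, score, label in data))

def false_negative_rate(data, group, threshold):
    """Share of qualified people in `group` that the threshold wrongly rejects."""
    qualified = [score for g, score, label in data if g == group and label == 1]
    return sum(score < threshold for score in qualified) / len(qualified)

data = make_data()
t = best_threshold(data)
print(f"threshold chosen for best overall accuracy: {t:.2f}")
for g in ("A", "B"):
    print(f"group {g}: false negative rate = {false_negative_rate(data, g, t):.0%}")
```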

Read the second article in full via the Coconet social website here, and feel free to check out the republished article via Daily Guardian Philippines as well.

Jun-E Tan's article in the Daily Guardian Philippines.

And lastly, the third article of the series focused on AI as a weapon against civil and political rights, which takes “a closer look on what can happen when AI is weaponised and used against civil and political rights (CPR) such as the right to life and self-determination, as well as individual freedoms of expression, religion, association, assembly, and so on.”

It tackled the use of AI for government surveillance, microtargeting to change voter behaviour, and the use of AI-generated content to fuel disinformation campaigns.

The series then closed with a note on civil society’s role in the AI and human rights issues presented: “AI can be, and has been, weaponised to achieve ends that are incompatible with civil and political rights. At the very least, the civil society within the region should invest energy and resources into following technological trends and new applications of AI so that it will not be taken by surprise by innovations from malicious actors. As is the nature of machine learning and AI, it is expected that the efficacy of the technologies will only get better.”

“Civil society and human rights defenders will need to participate in the discussions of AI governance and push for tech companies to be more accountable towards the possible weaponisation of the technologies that they have created, in order to safeguard human rights globally,” Dr. Tan wrote to conclude the series.

Civil society and human rights defenders will need to participate in the discussions of AI governance and push for tech companies to be more accountable

- Dr. Jun-E Tan

Daily Guardian Philippines syndicated the last part of the series, publishing it in its Mar. 6, 2020 print edition and republishing it online as well. You may also check the version published on the Coconet social website here.

The engagement opened up the possibility of more mainstream discussion of a seemingly technical issue by presenting its implications

Overall, the collaboration on AI and its human rights implications in Southeast Asia helped bridge the knowledge gap on the issue, not only among the digital rights activists who needed it for their advocacy; distribution on mainstream news sites and social platforms also raised awareness among the wider civil society.

The engagement opened up the possibility of more mainstream discussion of a seemingly technical issue by presenting its implications, especially for those who should have guaranteed protections under their laws.

Although the issue is relevant to everyone, it poses a particular challenge to civil society: to acknowledge AI and its human rights implications as valid and actionable issues, to educate themselves and others about them, and to do informed advocacy work in response to the current challenges and threats.

More AI resources in development

Through collaboration with the Coconet community, Dr. Tan has compiled two helpful resources: a mapping of AI issues across the region, and a list of relevant resources for those who want to learn more about such issues. We will update this post once these pages are ready.

About the Author

Vino Lucero is a Project and Communications Officer at EngageMedia. He is a journalist based in Manila.

The post Artificial Intelligence and Human Rights in Southeast Asia: An Overview appeared first on Coconet.

]]>
https://coconet.social/2020/ai-hr-sea-overview/feed/ 0
In Myanmar, Digital Rights Are Integral to Policy and Advocacy https://coconet.social/2020/myanmar-digital-rights-forum-2020/ https://coconet.social/2020/myanmar-digital-rights-forum-2020/#comments Thu, 05 Mar 2020 06:00:09 +0000 https://coconet.social/?p=882 Members of the Coconet community took part in the fourth Myanmar Digital Rights Forum (MDRF), which focused on the importance of digital rights in the face of disinformation, internet shutdowns, and emerging technologies.

The post In Myanmar, Digital Rights Are Integral to Policy and Advocacy appeared first on Coconet.

]]>
Ambassador of Sweden to Thailand, Myanmar, and Lao PDR Staffan Herrstrom shares in his keynote speech that he is excited to listen and learn more about digital rights in the region at the fourth Myanmar Digital Rights Forum in Yangon.

Members of the Coconet community took part in the fourth Myanmar Digital Rights Forum (MDRF), which focused on the importance of digital rights in the face of disinformation, internet shutdowns, and emerging technologies.

Held on Feb. 28 and 29, 2020, at the Rose Garden Hotel in Yangon, the MDRF was organised by Phandeeyar, Myanmar ICT for Development Organisation (a partner during Coconet II), Myanmar Centre for Responsible Business (MCRB), and Free Expression Myanmar. This year’s conference hosted over 350 attendees and speakers from government, businesses, and civil society, making it the largest digital rights forum in Southeast Asia.

We cannot afford to assume that digital rights will evolve at the same rate that the internet has.

- Jes Kaliebe Petersen, CEO of Phandeeyar

Two important events in Myanmar framed many of the discussions over the two-day forum: the upcoming elections in late 2020 and the internet shutdown in Rakhine and Chin states, which is now in its eighth month and has led to charges against nine students who organised protests against it.

Sessions that directly tackled these issues attracted the most attendees. Facebook representatives shared what the company was doing to curb disinformation in the region.

Digital rights activist and Coconet I participant Daw Ei Myat Noe Khin, in her keynote speech, reiterated calls to end the government-imposed internet shutdown.

On the first day of MDRF, the Coconet community also joined a worldwide social media campaign calling for the lifting of the internet shutdown.

Coconet I participant Htaike Htaike Aung from the Myanmar ICT for Development Organisation is among the "coconutz" who were at the digital rights forum.

Other important issues surrounding digital rights that were discussed during the event were:

  1. Myanmar’s digital culture
  2. Threats to freedom of expression online
  3. Claiming ownership over your own data
  4. National security vs right to information
  5. Surveillance and the smart city
  6. Creating a digitally accessible Myanmar
  7. Artificial intelligence (AI)
  8. Deepfakes
  9. Women’s rights online
  10. Data protection and cybersecurity
  11. Digital content restrictions in Myanmar
  12. Bridging the legal gap in digital rights

Red Tani of EngageMedia facilitates discussions on the benefits and consequences of using AI to further advocacies such as mental health and women's rights.

Members of the Coconet community who attended either or both Coconet camps also served as speakers and panellists at the conference, sharing personal experiences on topics related to digital rights. Wu Min Hsuan shared examples from Taiwan of the digital risks during elections. Gaya Khandhadai of the Association for Progressive Communications shared how she was targeted online based on her gender and religion. Witness.org, also a Coconet partner, talked about deepfakes and how these affect Southeast Asia.

EngageMedia’s Darika Bamrungchok and Red Tani were also among the forum’s speakers and panellists. On Day One, Red facilitated an open session titled “Artificial Intelligence and Digital Rights in Southeast Asia”. The session began with a short video summarising the research of Dr. Jun-E Tan on AI and its uses, implications, and consequences in the region. It ended with attendees breaking out into smaller groups to identify how AI can both empower and detract from digital rights and other advocacies. There was also a consensus among participants that, whether or not AI is good for the present and the future, we first need to understand what exactly AI is.

Day Two had Darika as a panellist in the session titled “Staying Safe: What does Myanmar need to do to put data protection and cybersecurity at the core of the digital revolution?”. Here she talked about Thailand’s Personal Data Protection Act and how its implementation can relate to and affect the Myanmar context. She was joined by other panellists from Microsoft, Privacy International, and MCRB, as well as the Ambassador of the Kingdom of the Netherlands to Myanmar.

Regulation is not always the solution. When it comes to disinformation, criminalising speech won’t address the issue. We need a rights-respecting way forward.

- Daw Ei Myat Noe Khin, digital rights activist and Coconet I participant

Darika Bamrungchok of EngageMedia likens Thailand's Personal Data Protection Bill to the General Data Protection Regulation (GDPR) in the European Union.

Find out more about what transpired at MDRF by following the hashtags #digitalrightsMM and #MDRF2020.

About the Author

Sara Pacia is the Communications and Engagement Coordinator of EngageMedia. A journalist by training and multimedia storyteller at heart, she is passionate about utilising and appropriating today’s digital technologies for the empowerment of the public and the improvement of media and data literacy.

The post In Myanmar, Digital Rights Are Integral to Policy and Advocacy appeared first on Coconet.

]]>
https://coconet.social/2020/myanmar-digital-rights-forum-2020/feed/ 1
AI as a weapon against civil and political rights https://coconet.social/2020/ai-weapon-civil-political-rights/ https://coconet.social/2020/ai-weapon-civil-political-rights/#respond Wed, 04 Mar 2020 08:28:50 +0000 https://coconet.social/?p=877 We are going to take a closer look on what can happen when AI is weaponised and used against civil and political rights (CPR) such as the right to life and self-determination, as well as individual freedoms of expression, religion, association, assembly, and so on.

The post AI as a weapon against civil and political rights appeared first on Coconet.

]]>

Read this Article in Thai / อ่านบทความนี้ใน ภาษาไทย

Translated into Thai by Teerada Na Jatturas
(To read the Thai version, click the flag icon in the upper right-hand corner of your screen.)

This is the third of a series of articles on the human rights implications of artificial intelligence (AI) in the context of Southeast Asia. In the previous article, we discussed the implications of AI for economic, social, and cultural rights, driving home the point that AI does yield developmental benefits if it is implemented properly; the human rights concerns there centred more on AI safety and unintended consequences.

In this article, we are going to take a closer look at what can happen when AI is weaponised and used against civil and political rights (CPR) such as the right to life and self-determination, as well as individual freedoms of expression, religion, association, assembly, and so on.

Within the space of this article, it is impossible to cover the entire extent to which AI can be used against CPR, so we will only address three imminent threats: mass surveillance by governments, microtargeting that can undermine elections, and AI-generated disinformation. For those interested in digging deeper, a report by a collection of academic institutions on the malicious use of AI endangering digital, physical, and political security makes for a riveting read.

AI can be weaponised and used against civil and political rights

AI used for Government Surveillance

Privacy is a fundamental human right, and the erosion of privacy impacts other civil freedoms such as free expression, assembly, and association. In Southeast Asia, where most countries tilt towards the authoritarian side of the democratic spectrum, governments have shown that they will go to great lengths to quell political dissent, such as wielding draconian laws or using extralegal measures to intimidate dissenters.

With machine learning that can sift through mountains of data collected inexpensively and make inferences that were previously invisible, surveillance of the masses becomes cheaper and more effective, making it easier for the powerful to stay in power. 

The table below is adapted from the AI Global Surveillance Index (AIGS 2019), extracting the seven Southeast Asian countries covered (with no data on Brunei, Vietnam, Cambodia, and Timor Leste). It shows that most countries within the region use two or more types of surveillance technologies in the form of smart/safe city implementations, facial recognition, and smart policing; and all of these countries use technologies imported from China, and to a lesser extent from the US as well.

Table adapted from the AI Global Surveillance Index (AIGS 2019)

To provide a wider context, at least 75 out of 176 countries covered in the AIGS 2019 are actively using AI for surveillance purposes, including many liberal democracies. The index does not differentiate between legitimate and unlawful use of AI surveillance. Given the context of Southeast Asia, civil society in the region might want to err on the side of caution.

A CSIS report points out that Huawei’s “Safe City” solutions are popular with non-liberal countries, and sounds the concern that China may be “exporting authoritarianism”. China itself has used facial recognition (developed by Chinese AI companies Yitu, Megvii, SenseTime, and CloudWalk) to profile and track the Muslim Uighur community—it is also known that close to a million Uighurs have been placed in totalitarian “re-education camps”, illustrating the chilling possibilities of human rights violations connected to mass surveillance.

13 Asian countries have a social media surveillance programme

Other forms of government surveillance include social media surveillance and using AI to collect and process personal data and metadata from social media platforms. The Freedom on the Net (FOTN) Report (2019) states that 13 out of the 15 Asian countries that it covers have a social media surveillance programme in use or under development, but does not specify which. The odds are high for these eight Southeast Asian countries covered in the report: Philippines, Malaysia, Singapore, Indonesia, Cambodia, Myanmar, Thailand, and Vietnam.

In particular, examples of Vietnam and the Philippines were highlighted in the FOTN report. In 2018, Vietnam “announced a new national surveillance unit equipped with technology to analyse, evaluate, and categorise millions of social media posts”; and in the same year, Philippine officials were trained by the US Army on developing a new social media unit, which reportedly would be used to counter disinformation by terrorist organisations.

Digital surveillance by the government can also cast a wider net outside social media platforms. In 2018, the Malaysia Internet Crime Against Children (Micac) unit of the Malaysian police demonstrated to a local daily its capability to locate pornography users in real time, and revealed that it had built a “data library” of these individuals, a gross invasion of privacy, as pointed out in a statement by several ASEAN CSOs.

Image by Bark 003 via Cool SILH. Public Domain.

Microtargeting to change voter behaviour

In Southeast Asia, civil society is more vigilant about government surveillance than corporate surveillance. However, the effects of corporate surveillance, especially by big tech companies like Facebook and Google, may be equally sinister or even more far-reaching when their AI technologies, combined with an unimaginable amount of data, are up for hire to predict and change user behaviour for their advertisers. This is particularly problematic when the advertisers are digital campaigners for political groups looking to change public opinion or voter behaviour, affecting the electoral rights of individuals.

Five years ago, researchers had already found that, based on Facebook likes, machines could know you better than anyone else (300 likes were all it took to know you better than your spouse did, and only 10 likes to beat a co-worker). Since then, there have been scandals such as that of Cambridge Analytica, which illegally obtained the data of tens of millions of Facebook users and used it to create psychographic profiles, so that it could micro-target voters with different messaging to swing the votes of the 2016 US presidential election. Similar tactics allegedly influenced the outcome of the Brexit referendum.

With 2 billion users, Facebook is able to train its machine learning systems to predict user behaviours

Cambridge Analytica has since closed down, but the scandal brought the business model of microtargeted political advertising under public scrutiny. Political commentators have pointed out that Facebook does pretty much the same thing, only in a bigger and more ambitious way. With the data of 2 billion users at its disposal, Facebook is able to train its machine learning systems to predict things like when an individual user is about to switch brand loyalty, and to rent this user intelligence to whoever pays. It does not distinguish between regular advertising and political advertising. It is worth mentioning that Twitter has banned political advertising, and Google has disabled microtargeting for political ads.
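
To make the kind of prediction described above more concrete, here is a minimal, purely illustrative sketch in Python, using scikit-learn and entirely synthetic data rather than any real platform's dataset or API. Real systems work with billions of data points and far richer signals, but the principle is the same: the likes a user leaves behind are enough to train a model that scores them on traits they never disclosed, and the highest-scoring segment can then be targeted with tailored messaging.

# Illustrative sketch only: synthetic data, not a real platform's data or API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 5000, 300                      # each column: "liked page X or not"
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Pretend a handful of pages are weakly correlated with some behavioural trait
# (say, receptiveness to a certain kind of political message).
signal_pages = rng.choice(n_pages, size=20, replace=False)
score = likes[:, signal_pages].sum(axis=1) + rng.normal(0, 2, n_users)
trait = (score > np.median(score)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen users:", model.score(X_test, y_test))

# Once such a model exists, every user can be scored and the highest-scoring
# segment targeted with tailored messaging; this is microtargeting in miniature.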

86% of Southeast Asia’s Internet users use Facebook. Within the region, concerns have already arisen regarding microtargeting to influence elections. A report that tracks digital disinformation in the 2019 Philippine midterm election points out that Facebook Boosts (Facebook’s advertising mechanism) are essential for local campaigns because of their ability to reach specific geographical locations. Besides advertising the Facebook pages of official candidates, Facebook Boosts were also used to promote negative content about political opponents. In Indonesia, ahead of the 2019 general elections, experts warned of voter behavioural targeting and voter microtargeting strategies which might exploit personal data of Indonesian voters to change election outcomes.

Image via SVG Silh. Public Domain.

AI-generated content fuels disinformation campaigns

In the digital era, rumour-mongering becomes much more effective because of the networked nature of our communication. As a result, disinformation or fake news has become a worldwide problem, and in Southeast Asia, the gravest example of possible consequences is the ethnic cleansing of Rohingyas in Myanmar, reportedly fueled by the spread of disinformation and hate speech on social media.

The disinformation economy has flourished within the region. In Indonesia, fake news factories are used to churn out content to attack political opponents and to support their clients. PR companies in the Philippines cultivate online communities and surreptitiously insert disinformation and political messaging with the help of micro and nano influencers. As nefarious actors establish structures to create and profit from disinformation, AI will make content generation much easier and more sophisticated for them.

One of the scariest possibilities of AI-generated content is the so-called “deepfake”, also known as “synthetic media”: manipulated videos or sound files that look and sound highly realistic. Deepfakes are already a reality, and it is only a matter of time (a matter of months, even) before they are indistinguishable from real footage and cheap enough to be produced by any novice. At the moment, they are mainly being used to produce fake pornography of celebrities, but there is a dizzying array of possibilities for how they could be used to create disinformation. In the region, there is at least one case in Malaysia of a politician claiming his sex video to be a politically motivated deepfake.

The video above gives a good introduction to what deepfakes are, some examples, and why we should be concerned. WITNESS has a good resource pool of articles and videos on deepfakes for those interested in digging further.

Another example of what AI is able to do in terms of generating realistic content can be found in the interactive component of this New York Times article: with a click of a button, one can generate commentary on any topic, with any political slant. As can be imagined, the cost of maintaining a cyber army for astroturfing or trolling drops drastically if machines are used to generate messages that look like they have been written by humans. A separate report warns of AI-generated “horrifyingly plausible fake news”: a system called GROVER can generate a fake news article from only a headline, and can even be customised to mimic the styles of major news outlets such as The Washington Post or The New York Times.
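
To illustrate how low the barrier to machine-generated text has become, here is a short, hypothetical sketch using the small open-source GPT-2 model through the Hugging Face transformers library; GROVER itself is not used here, so the model choice and parameters are assumptions for illustration only. Even this dated model produces passable filler text from nothing more than a headline, and state-of-the-art systems are far more fluent.

# Illustration only: uses the open-source GPT-2 model, not GROVER, and assumes
# the 'transformers' and 'torch' packages are installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headline = "Government announces sweeping new internet regulations"
outputs = generator(
    headline,
    max_length=120,          # cap the length of the generated continuation
    num_return_sequences=3,  # produce several variants of the same "story"
    do_sample=True,
)

for i, out in enumerate(outputs, 1):
    print(f"--- variant {i} ---")
    print(out["generated_text"])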

The post-truth era has many faces. Lastly, you can have a bit of fun and check out ThisPersonDoesNotExist.com or WhichFaceIsReal.com to see the level of realism in computer-generated photos of human faces, which makes it easy to generate photos for fake social media profiles.

In Conclusion

As has been demonstrated by this article, AI can be, and has been, weaponised to achieve ends that are incompatible with civil and political rights. At the very least, the civil society within the region should invest energy and resources into following technological trends and new applications of AI so that it will not be taken by surprise by innovations from malicious actors. As is the nature of machine learning and AI, it is expected that the efficacy of the technologies will only get better. Civil society and human rights defenders will need to participate in the discussions of AI governance and push for tech companies to be more accountable towards the possible weaponisation of the technologies that they have created, in order to safeguard human rights globally.

About the Author

Dr. Jun-E Tan is an independent researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E’s newest academic paper, “Digital Rights in Southeast Asia: Conceptual Framework and Movement Building” was published in December 2019 by SHAPE-SEA in an open access book titled “Exploring the Nexus Between Technologies and Human Rights: Opportunities and Challenges in Southeast Asia”. She blogs sporadically here.

To read more about this series on artificial intelligence in Southeast Asia, you can check out the first part here and the second part here.

The post AI as a weapon against civil and political rights appeared first on Coconet.

]]>
https://coconet.social/2020/ai-weapon-civil-political-rights/feed/ 0
Use of Memes In Activism https://coconet.social/2020/memes-activism/ https://coconet.social/2020/memes-activism/#respond Mon, 02 Mar 2020 09:21:17 +0000 https://coconet.social/?p=851 By utilising humour and the shareability of memes, it becomes clear that memes can be used for the diffusion of information across the Internet because of their speed and capillarity on social media.

The post Use of Memes In Activism appeared first on Coconet.

]]>

Memes have become a fundamental aspect of Internet culture and the native language of social media. With their ability to easily convey a message and to be easily shared, they have been adopted by many movements, especially in the Asia-Pacific. Meme creation is driven by remixing and recontextualizing popular images to create interesting ways of distributing messages. Memes within the context of movements tend to be inherently political. Using political humour on the Internet can contribute to the creation and consolidation of a network of shared meanings which then reframes content from mainstream culture. Humour is a means for politics to be explored and understood.

Memes incorporate elements of mainstream popular culture

Memes incorporate elements of mainstream popular culture, which encourages the ordinary citizen to participate in movements. The Internet provides a space for image and language play that is absurd and full of juxtaposition and insider jokes. By utilizing humour and the shareability of memes, it becomes clear that memes can be used for the diffusion of information across the Internet because of their speed and capillarity on social media.

Memes are a cultural product based on social relations, memories, historic, geographic and economic references as well as specific conjectural aspects. Memes that Internet users post, share, and like best are what they find interesting and humorous, reflect their impression of a topic and affect or sensitize them to a topic. Memes can allow non-elite Internet users to yield influence and make their voices heard. Through this, memes can become a tool for grassroots action in human rights movements as memes can be utilized to spread messages quickly.

To demonstrate this, eight political memes from Southeast Asia will be reflected on.

Duterte meme

The meme above utilizes the popular Drake meme format known as Drakeposting. This format typically has two images of the rapper Drake, one representing that he doesn't approve of something and the other representing that he does. The Drakeposting meme is utilized here to criticize Philippine President Rodrigo Roa Duterte.

meme2

This meme has remixed the Drakeposting format by removing the second image of Drake. This was done to convey a particular message: the absence of the second image is meant to imply that anyone who was critical of Indonesia's brutal military dictator General Suharto, who was in power from 1967 until 1998, would disappear in some manner.

Cat Drake meme

This is an example of the Cat Drake meme format, which has become a popular remixing of the original Drake meme. The meme criticizes Indonesians over the ongoing conflict in West Papua, where the people of West Papua are fighting for their independence from Indonesia. It conveys that though Indonesians are against colonization, such as the Dutch colonization of Indonesia, they reject the idea of the Indonesian colonization of West Papua.

(English: "Respect your elders, even when they're your competitor")
(English: "Respect your elders, even when they're your competitor")

This meme takes on a popular format of overlaying humorous text on a topical cultural image to make a political statement. The image shows former vice presidential candidate Sandiaga Uno with the current vice president, Ma’ruf Amin, before a national debate: Sandiaga pays respect to his elder, as culturally dictated, by kissing his hand, even though Ma’ruf is his competitor.

English: Will your shoes fit for the night shift? - Bong Go

This meme also uses a culturally relevant image overlaid with text to make a political point. The image, however, has been photoshopped to add different shoes and a communist bandana, utilizing the ‘ugly aesthetic’ often used in memes. The meme comments on a statement by Bong Go, a Filipino politician, that students go up into the mountains at night to work for the communist party.

meme6

This meme is less focused on criticizing Duterte and more focused on making fun of him. Using humour this way is a common aspect of political meme culture. This meme was posted in a meme Facebook group soon after a report was released that Duterte had fallen from his motorcycle. Though some might see this meme and think it supports Duterte, it in fact does nothing of the sort. A reading of, or direct knowledge of, Psalm 109:8 – “May his days be few; may another take his place of leadership” – is required to fully understand the humour within the multiple layers of context used in this meme.

Prayut: I am the senate

This is a remixed version of the popular I Am The Senate meme, repurposed to criticize Thailand's Prime Minister Prayut Chan-o-cha. Prayut is not only the Prime Minister but also serves as Thailand's Defence Minister and head of the Royal Thai Police. By repurposing the meme, its creator is able to criticize Prayut's overarching control of the government, which goes against the will of the people.

(English: us citizens)

This meme comments on the Thai election and a secret parliament vote that resulted in the former Prime Minister Abhisit Vejjajiva quitting parliament after his party – the Democrat Party – voted to join the junta-aligned coalition backing Prayuth Chan-o-cha’s bid to become Prime Minister. The Distracted Boyfriend meme was remixed with photoshopped images to represent the feelings of the citizens. Here, the distracted boyfriend has the Democrat party logo photoshopped on him. He is looking back at another girl with the photoshopped face of Prayuth Chan-o-cha while his upset girlfriend has text over her that reads ‘us citizens’. Through this, a message of feeling ignored by the Democrat party is conveyed.

About the Author

Carmen Ferri is the Asia policy and countering hate speech intern at the Association For Progressive Communications (APC) and is currently based in the Philippines. Carmen holds an MA in New Media and Digital Culture from the University of Amsterdam and an undergraduate in Liberal Arts: Cultural Studies and Sociology from Maastricht University. Her areas of interest include subculture construction in online spaces, online community building, issue mapping, and online political movements.

The post Use of Memes In Activism appeared first on Coconet.

]]>
https://coconet.social/2020/memes-activism/feed/ 0
Can’t live with it, can’t live without it? AI impacts on economic, social, and cultural rights https://coconet.social/2020/ai-impacts-economic-social-cultural-rights/ https://coconet.social/2020/ai-impacts-economic-social-cultural-rights/#comments Wed, 12 Feb 2020 10:20:51 +0000 https://coconet.social/?p=815 This is the second in a series of articles on the human rights implications of artificial intelligence (AI) in the context of Southeast Asia, targeted at raising awareness and engagement of civil society on the topic.

The post Can’t live with it, can’t live without it? AI impacts on economic, social, and cultural rights appeared first on Coconet.

]]>

Read this Article in Thai / อ่านบทความนี้ใน ภาษาไทย

Translated into Thai by Teerada Na Jatturas
(To read the Thai version, click the flag icon in the upper right-hand corner of your screen.)

This is the second in a series of articles on the human rights implications of artificial intelligence (AI) in the context of Southeast Asia, targeted at raising awareness and engagement of civil society on the topic.

In the previous article, we looked at the definitions of AI and machine learning, and discussed some considerations of their applications in the Southeast Asian context. In this article and the next, we will continue the discussion on potential human rights impacts, from the angles of 1) economic, social, and cultural rights (ESCR), and 2) civil and political rights (CPR). To provide adequate space to unpack the ideas, this article will focus on the first group of rights.

What are economic, social, and cultural rights (ESCR)?

Drawing from the International Covenant of Economic, Social and Cultural Rights (ICESCR), these rights include the rights to health, education, social security, proper labour conditions, quality of life, and participation in cultural life and creative activities. These rights are often considered as positive rights, which require action to fulfil (such as providing opportunities for decent work), as opposed to civil and political rights which require inaction (such as not restricting freedom of expression).

It is important to note that the ESCR implications of AI are not a binary “good” or “bad”. Even in the same application, outcomes may differ for different people: some may be impacted positively and some negatively. For example, relying on AI to decide on credit trustworthiness based on a large pool of data points may benefit the poor with a thin credit profile, as they buy fewer big items and cannot prove that they are trustworthy based on their credit history. However, using data points broader than credit history may discriminate against others based on unrelated data points, sometimes in arbitrary ways. One example given was of a certain AI system awarding lower scores if an applicant typed in all caps, which is apparently correlated with a higher risk of default.
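
The all-caps example can be made concrete with a small, entirely hypothetical sketch in Python using scikit-learn and synthetic data (no real credit-scoring system is modelled here). If a spurious signal such as "typed the application in all caps" happens to correlate with default in the training data, the model will learn it, and two applicants who are identical in every financial respect can receive different scores for reasons unrelated to their actual creditworthiness.

# Hypothetical sketch with synthetic data; not any real credit-scoring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

income_k = rng.normal(50, 15, n)      # income in thousands (genuine signal)
debt_ratio = rng.uniform(0, 1, n)     # genuine signal
all_caps = rng.integers(0, 2, n)      # spurious, behaviourally arbitrary flag

# Simulate a world where default depends on income and debt, but the all-caps
# flag is also (coincidentally) correlated with default in the training sample.
logit = -0.04 * income_k + 2.0 * debt_ratio + 0.8 * all_caps
default = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income_k, debt_ratio, all_caps])
model = LogisticRegression(max_iter=1000).fit(X, default)

# Two applicants identical in every financial respect, differing only in caps:
applicant_normal = [[55, 0.3, 0]]
applicant_caps   = [[55, 0.3, 1]]
print("P(default) if typed normally   :", model.predict_proba(applicant_normal)[0, 1])
print("P(default) if typed in ALL CAPS:", model.predict_proba(applicant_caps)[0, 1])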

To structure our discussion, we can look at the implications of AI on ESCR from two angles: 1) the cost of not implementing AI for development, and 2) the cost of implementing it badly.

"fikiran" is licensed under CC0 1.0

Developmental benefits of AI

AI, when used strategically and appropriately, can provide immense developmental benefits. Economic growth is a much-touted benefit, but possibilities of AI to improve lives extend much further. Here are some examples of what the technologies can already achieve in Southeast Asia:

  • Healthcare: In Singapore, a local startup Kronikare worked with AI Singapore to develop a system to capture, analyse, and diagnose chronic wound conditions. This system was then scaled up and is currently deployed in some hospitals and nursing homes in Singapore.
  • Traffic: Malaysia City Brain, a collaboration between Alibaba, Malaysia Digital Economy Corporation, and the city council of Kuala Lumpur, aims to reduce traffic in the congested city. City Brain in Hangzhou has seen traffic speed up by 15% in some locations.
  • Education: Ruangguru, an online education platform in Indonesia, connects students and teachers for online tutoring and provides other services such as video content on a wide range of subject areas. It uses AI to personalise education for its 15 million students, 80% of whom are outside of urban areas.
  • Food security: In Vietnam, startups are using AI and IoT sensors to increase agricultural productivity and save on water and fertiliser use. Sero, a Vietnamese startup, claims an accuracy rate of 70-90% for identifying 20 types of crop diseases, thus lowering the rates of crop failures.

However, across the eleven countries of Southeast Asia, the implementation of (and capacity to implement) AI is uneven. This can be illustrated using the AI Government Readiness Index by Oxford Insights and the International Development Research Centre, a ranking of governments according to their readiness to use AI for administration and delivery of services. Singapore tops the world ranking, and six Southeast Asian countries are within the top 100: Singapore (1), Malaysia (22), Philippines (50), Thailand (56), Indonesia (57), and Vietnam (70).

Country (World Ranking)        Score
Singapore (1)                  ~9.186
Malaysia (22)                  ~7.108
Philippines (50)               ~5.704
Thailand (56)                  ~5.458
Indonesia (57)                 ~5.420
Vietnam (70)                   ~5.081
Brunei Darussalam (121)        ~3.143
Cambodia (125)                 ~2.810
Laos (137)                     ~2.314
Myanmar (159)                  ~1.385
Timor Leste (173)              ~0.694

Indeed, countries higher up on the list have (or are building) national strategies that aim to capitalise upon the advantages of the technology and to build enabling environments for supporting homegrown AI. Singapore with its National Artificial Intelligence Strategy aims to be a leader in the field by 2030, strengthening the AI ecosystem and providing funding support of more than S$500 million to drive AI initiatives. Other Southeast Asian countries coming up with overarching AI policies include Malaysia (with a National AI Framework coming up in 2020, and a National Data and AI Policy being proposed to the cabinet) and Indonesia (targeting completion of its AI strategy in 2020).

On the other hand, those on the lower side of the spectrum are still struggling with basic Internet access—only 30.3% of the population of Timor Leste is online, while Myanmar has 33.1% and Laos has 35.4%. With that, we see a divide between those who have access to the technologies and those who do not.

While some governments may lag behind in their readiness for AI, corporations are already gearing up to provide services. In general, there is a great appetite in the region to jump on the “smart” bandwagon, which includes the use of AI in improving products and services. The ASEAN Smart Cities Network (ASCN), mooted in 2018, has 26 cities across Southeast Asia aiming to use technology as an enabler for city development. One of the key goals of the Network is to link these cities with private sector solution providers.

In general, the plans and visions look promising: focal developmental areas of the ASCN are to improve social and cultural cohesion, health and well-being, public safety, environmental protection, built infrastructure, as well as industry and innovation.

Artificial Intelligence & AI & Machine Learning. Image by Mike MacKenzie via www.vpnsrus.com

Potential risks of AI affecting ESCR

The developmental benefits brought about by AI are contingent upon the implementation. This is also where many potential risks lie. Even though well-known cases of AI harms and safety failures have not yet surfaced in our region, where the technology is still nascent, we would do well to observe other localities for known problems.

The “Automating Poverty” series from The Guardian, for instance, gives some chilling examples from India, the UK, the US, and Australia of how automated social security systems assisted by AI can be deeply dehumanising and penalise the marginalised further. The case from India, in particular, shows some devastating consequences of faulty implementation in the context of a developing country. The complete transition from a paper system to a digital one has left the poor vulnerable to technological glitches, ranging from electricity blackouts and unstable Internet to unexplained refusals by the system to disburse social welfare to the deserving. The system covers social protection and medical reimbursements for the poor, and errors have led to starvation-related deaths.

System bias and accessibility

Opaque decision-making with AI on social security can lead to dire consequences and human suffering with little recourse. Southeast Asia is weak in at least two aspects required for better AI-powered decision-making. The first is good training data for machine learning, which the region lacks due to populations not yet connected to the Internet or to poor-quality data. The second is that most of the countries are importers of AI technologies, which means that most of the engineers designing the systems may not understand the local context. As mentioned in an earlier article, these are fundamental problems that have repercussions on human rights.
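
One practical way for civil society or researchers to probe this concern is to look beyond a model's overall accuracy and check how it performs separately for well represented and underrepresented groups. The sketch below is purely illustrative, using synthetic data and scikit-learn (the 95/5 split and group labels are assumptions, not a real dataset): a model trained on data dominated by one group can look accurate on average while performing noticeably worse for a minority group whose patterns differ.

# Illustrative sketch with synthetic data: overall accuracy can hide poor
# performance on an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_major, n_minor = 9_500, 500          # 95% / 5% split, e.g. urban vs rural users

def make_group(n, weights):
    X = rng.normal(0, 1, size=(n, 3))
    y = (X @ weights + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# The relationship between features and outcome differs between the two groups.
X_maj, y_maj = make_group(n_major, np.array([1.0, 1.0, 0.0]))
X_min, y_min = make_group(n_minor, np.array([0.0, -1.0, 1.0]))

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * n_major + [1] * n_minor)   # 0 = majority, 1 = minority

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

print("overall accuracy :", model.score(X_te, y_te))
print("majority accuracy:", model.score(X_te[g_te == 0], y_te[g_te == 0]))
print("minority accuracy:", model.score(X_te[g_te == 1], y_te[g_te == 1]))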

Most of the engineers designing the systems may not understand the local context

When people depend on technology to access their economic, social, and political life, they are subject to the availability and stability of the technology. As pointed out earlier, the AI divide is there between the haves and the have-nots—those who have limited abilities to build their own technology will have to rely on using technology that may not be built in an accessible manner for them. Accessibility may be considered from many angles. In this culturally rich region that speaks many languages, it is important to cater to all, but such localisation exercises are costly and may not be implemented. Accessibility can also be obstructed by physical or mental disabilities, low education level and digital literacy, or even just a lack of basic infrastructure.

These fundamental issues need to be considered seriously before one jumps into AI solutions.

Not all problems can be solved by applying technology

Technology is not a cure-all

Not all problems can be (or should be) solved by applying technology. As pointed out by The Guardian’s report on India, the root of the inefficiencies associated with the previous system was corruption and poor management at a higher level, and not duplicate or fake cards, which was the problem targeted by Aadhaar. When the problem is a deeper, structural one, a technological solution may divert attention from other reforms needed and create further problems.

In Southeast Asia, the fervour for all things AI has led to statements by top leaders promising to apply AI to all sorts of contexts. For example, Indonesia’s President Jokowi announced that his administration would replace some higher-level civil servants with AI, while in Malaysia, the Education Minister announced that machines would provide schoolchildren with career guidance in the future. It is debatable whether these are the most appropriate solutions to problems faced, and any such moves should be preceded by multistakeholder consultations and human rights impact assessments.

Worsening inequality, optimised by AI

Lastly, when AI is discussed in the context of this region, it is usually seen from the angle of economic growth or displaced jobs. These are two sides of the same coin—corporations gain profit when they are able to replace human workers with machines. Even when workers have not been replaced (yet), we see a trend towards informalisation of work with the gig economy (such as Grab, Go-Jek, or other platforms for freelancers), which is largely unregulated in Southeast Asia, leading to concerns about worker exploitation optimised by algorithms.

Governments in Southeast Asia tend to see AI as a vehicle for economic, rather than social development

It has been noted at forums discussing AI in Asia that governments in the region tend to see AI as a vehicle for economic, rather than social development. It is, therefore, a concern that AI will be used to optimise profit-making for the technology owners at the expense of people and the planet—a scenario not so different from what we have now, but at a faster rate.

In conclusion

In terms of AI impacts on economic, social, and cultural rights, the short answer to the question of whether AI is good or harmful in the Southeast Asian context is, “it depends”. Civil society in the region should understand more and debate about potential benefits and harms, anchored in the challenges and particularities of local contexts.

Will we be uplifting the lives of vulnerable millions with the benefits of AI, or exposing them to systems that will further disempower them? What about their data and associated privacy? The last question will be discussed more in the next article when we talk about AI and civil and political rights.

About the Author

Dr. Jun-E Tan is an independent researcher based in Kuala Lumpur. Her research and advocacy interests are broadly anchored in the areas of digital communication, human rights, and sustainable development. Jun-E’s newest academic paper, “Digital Rights in Southeast Asia: Conceptual Framework and Movement Building” was published in December 2019 by SHAPE-SEA in an open access book titled “Exploring the Nexus Between Technologies and Human Rights: Opportunities and Challenges in Southeast Asia”. She blogs sporadically here.

The post Can’t live with it, can’t live without it? AI impacts on economic, social, and cultural rights appeared first on Coconet.

]]>
https://coconet.social/2020/ai-impacts-economic-social-cultural-rights/feed/ 1