Singapore university to establish facility that studies social impact of internet
The National University of Singapore (NUS) is establishing a new facility that aims to tap artificial intelligence (AI) and data science technologies to study the implications of internet use for the local population. Its researchers also will assess regulations enacted to curb misinformation, including those involving privacy and freedom of expression.
Scheduled to begin operations next month, the new Centre for Trusted Internet and Community would tap social and behavioural science research, digital technologies, data analysis, and policy research to determine the impact of internet use on society, said the university in a statement Thursday.
Located on campus grounds, the facility would look across three aspects, namely, technology, human, and policy, to evaluate the research and development (R&D) of technologies to “create an accountable world wide web” and “ascertain the trustworthiness of information”.
The university’s deputy president for R&D, Chen Tsuhan, said: “Currently, most internet research programmes around the world adopt a singular approach, focusing only on technology, the human, or policy aspects. Such an approach may not be optimal, given the inherently multidisciplinary nature of Internet studies.”
The new facility also would involve various stakeholders, including technology companies, the community, and government agencies, in its aim to contribute insights on public security, public discourse, and public education, Chen said.
The centre would be helmed by computer scientist Lee Mong Li, as director, and communications and new media professor Audrey Yue as deputy director. They would be joined by more than 20 academics with expertise in areas such as computer science, psychology, public policy, and law.
Its research teams would examine and develop insights, tools, policies, and best practices around the responsible use of the internet, with the aim of promoting responsible public discourse, protecting individuals against online falsehoods, and establishing the trustworthiness of information sources, said NUS.
In its use of technology, the centre’s researchers would leverage AI and data science technologies to study the implications of the internet, including the detection and mitigation of the spread of false information through online media, and the automatic assessment and verification of the trustworthiness of information sources.
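To make the idea of automated credibility assessment concrete, here is a minimal, purely illustrative sketch of a rule-based claim scorer. It is not the centre's actual system; every feature, weight, and word list below is a hypothetical assumption, and real research tooling would rely on trained models and curated source databases rather than hand-picked rules.

```python
# Illustrative sketch only: a toy credibility scorer for online claims.
# All feature names, weights, and thresholds here are hypothetical.

SENSATIONAL_WORDS = {"shocking", "miracle", "secret", "cure", "exposed"}

def credibility_score(claim_text: str, source_is_known: bool, has_citation: bool) -> float:
    """Return a score in [0, 1]; higher suggests a more trustworthy claim."""
    score = 0.5  # neutral prior
    if source_is_known:
        score += 0.25  # claims from recognised outlets start higher
    if has_citation:
        score += 0.15  # linking to evidence raises the score
    words = {w.strip(".,!?").lower() for w in claim_text.split()}
    if words & SENSATIONAL_WORDS:
        score -= 0.3  # sensational wording as a weak misinformation signal
    return max(0.0, min(1.0, score))

# Example: an unsourced, sensational claim vs a cited report from a known outlet.
low = credibility_score("Shocking miracle cure found!", source_is_known=False, has_citation=False)
high = credibility_score("Ministry of Health reports 10 new cases.", source_is_known=True, has_citation=True)
```

In this toy design, the score only nudges a neutral prior up or down; a production system would instead learn such weights from labelled data and verify sources against external records.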
Its researchers also would conduct studies to better understand the cognitive and socio-psychological factors that motivate producers and consumers of information and that, in turn, could influence how information and misinformation are consumed, used, and spread.
In addition, they would look at regulations that drove the design and use of the internet, as well as emerging institutions and processes of internet governance that could be used to discourage and mitigate misinformation.
With the insights gained from such efforts, the NUS centre hoped to address issues of public security, such as the digital divide, as well as the ways bias and misinformation could be weaponised. It also would look to nurture a culture that values facts over misinformation through public lectures and commentaries.
To kick off its work, the centre’s team of researchers had begun looking at public discourse amidst the coronavirus outbreak in Singapore. They would be tapping AI to analyse the credibility of online claims related to the virus as well as to examine tagging and sharing trends on social media involving the virus outbreak. The aim here was to identify patterns of information dissemination in the city-state.
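The kind of tagging and sharing analysis described above can be sketched in a few lines. The snippet below is only an illustration of the general approach, not the centre's methodology: it tallies hashtag mentions and the number of distinct users sharing each tag as a crude spread signal, and the post format and field names are assumptions.

```python
# Illustrative sketch only: tallying hashtag-sharing trends from a feed of
# posts, of the kind a dissemination-pattern study might start from.
# The post structure ('text' and 'user' keys) is a hypothetical assumption.

from collections import Counter, defaultdict

def hashtag_trends(posts):
    """Count mentions per hashtag and how many distinct users share each tag."""
    mentions = Counter()
    sharers = defaultdict(set)
    for post in posts:
        for token in post["text"].split():
            if token.startswith("#"):
                tag = token.strip(".,!?").lower()
                mentions[tag] += 1              # total mentions of the tag
                sharers[tag].add(post["user"])  # distinct users sharing it
    unique_sharers = {tag: len(users) for tag, users in sharers.items()}
    return mentions, unique_sharers

posts = [
    {"user": "a", "text": "Stay safe! #covid19 #singapore"},
    {"user": "b", "text": "Latest update #covid19"},
    {"user": "a", "text": "More news #covid19"},
]
mentions, spread = hashtag_trends(posts)
# Here '#covid19' is mentioned three times by two distinct users.
```

Separating raw mention counts from the number of distinct sharers matters for dissemination studies: a tag repeated by one account looks very different from one spreading across many accounts.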