THE World Wide Web has grown and evolved in tandem with technology. The role of netizens has changed: they are no longer just consumers of information but also contributors, curators and distributors of knowledge.
In the past, the skills of an information technology (IT) expert may have been required to resolve technical glitches on our personal computers. Today, a simple Google search or a visit to Stack Overflow returns a myriad of results from other netizens who have experienced similar problems and may hold the solutions to ours.
However, social media and online discussion forums have given a voice to everyone without necessarily holding them accountable for what they say. These platforms have also become fertile ground for unscrupulous individuals to spread misinformation and fake news while hiding behind fake profiles.
A study by the Annenberg Public Policy Center of the University of Pennsylvania recently revealed that “people who rely on social media for information were more likely to be misinformed about vaccines than those who rely on traditional media”.
The dissemination of fake news will remain an existential threat, especially in matters involving politics, given their highly partisan nature. There is also an urgent need for some form of moderation and verification of the solutions offered on online forums and discussion boards.
“Platforms like Reddit and Quora are examples where you can ask a question, and other users will provide you with an answer or engage in a discussion. But you’re going to have a lot of answers, and you will ask yourself who you can trust and believe,” says Dr Ian Lim Wern Han from the School of Information Technology, Monash University Malaysia.
Lim is currently working on a project that aims to extract valuable information from these platforms, particularly tacit knowledge that is hard to come by from traditional sources of information.
“There are too many social media platforms, with hundreds of thousands of threads and comments. It is difficult and extremely costly to process them one by one, especially given their unstructured nature. So, I looked at it from a user’s point of view, where I identified trustworthy users. In other words, what I am doing is a measure of trust and reliability of other people online based on my profiling methods.”
Lim’s work is premised on assigning numbers to users of online discussion forums to indicate the reliability of what they post. His idea rewards people who are credible and punishes those who peddle lies. The reward or punishment revolves around visibility and engagement. If users are credible, their posts can be ranked higher on the page, and their votes could be worth more when they vote on other threads. But if a user is not credible, their posts can be placed lower on the page or even hidden from the public, and their votes could be worth less or nothing at all.
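This reward-and-punishment scheme can be sketched as a simple ranking rule in which each user's credibility score scales both the visibility of their posts and the weight of their votes. The sketch below is illustrative only: the score range, the default score, and the hiding threshold are assumptions, not Lim's actual parameters.

```python
def rank_posts(posts, credibility, hide_below=0.2):
    """Order posts by score weighted by author credibility.

    `posts` is a list of (author, base_score) pairs; `credibility`
    maps each author to a score in [0, 1] (unknown authors default
    to 0.5). Authors below `hide_below` are hidden entirely,
    mimicking a shadowban.
    """
    visible = [(author, score) for author, score in posts
               if credibility.get(author, 0.5) >= hide_below]
    return sorted(visible,
                  key=lambda p: p[1] * credibility.get(p[0], 0.5),
                  reverse=True)

def vote_weight(voter, credibility):
    """A vote counts in proportion to the voter's credibility."""
    return credibility.get(voter, 0.5)
```

Under this rule, a low-credibility user's high-scoring post can still sink below a trusted user's modest one, and their votes barely move other threads.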
“On Reddit, users can be ‘shadowbanned’. Their posts will not be shown to the public unless approved manually by moderators. Likewise, their votes wouldn’t count. But the users themselves would not know this because it is still visible to them,” explains Lim.
The accuracy of the approach is enhanced using measures of confidence and volatility on a complex network of interactions. This ensures that the most credible sources of information appear at the very top of a thread on online forums. The same rating can also be used to match the questioner with suitable experts.
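Attaching confidence and volatility to a rating is reminiscent of the Glicko family of rating systems, where each score carries an uncertainty that shrinks as more evidence arrives. The article does not give Lim's formulation, so the following is only a minimal sketch of the general idea, with made-up constants.

```python
from dataclasses import dataclass

@dataclass
class TrustRating:
    mean: float = 0.5        # estimated credibility in [0, 1]
    deviation: float = 0.5   # uncertainty: lower means more confident
    volatility: float = 0.1  # how quickly the rating may move

    def update(self, observed: float) -> None:
        """Nudge the rating toward an observed outcome in [0, 1].

        Uncertain ratings move faster; each observation also
        shrinks the deviation, i.e. raises confidence.
        """
        step = min(1.0, self.deviation + self.volatility)
        self.mean += step * (observed - self.mean)
        self.deviation = max(0.05, self.deviation * 0.9)
```

A brand-new user's rating swings widely with each post, while a long-observed user's rating becomes stable and hard to game.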
For this research, Lim collected more than 700,000 threads spanning a year and a half from Reddit, involving almost two million users. His approach profiles each individual with a rating on the first day; these ratings are then used to predict the user’s contribution on a subsequent day. The figures are updated each day, and the process repeats on the ensuing days.
“This was a continuous process. We test whether or not I am accurate as I learn more about the user. On a real-world dataset spanning one and a half years, involving four very different communities, we found that it is, in fact, possible to predict how a user performs in the future.”
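The day-by-day protocol described above, predict each user's contribution from yesterday's rating, score the prediction, then fold the day's activity back into the rating, can be sketched as a rolling loop. The update rule below (a simple exponential moving average) and the error measure are stand-ins, not Lim's actual model.

```python
def rolling_evaluation(days, init_rating=0.5):
    """Predict-then-update over consecutive days of forum activity.

    `days` is a list of days; each day is a list of
    (user, contribution) pairs with contributions in [0, 1].
    Returns the final ratings and the mean absolute prediction error.
    """
    ratings = {}  # user -> current trust rating
    errors = []
    for day in days:
        # 1. Predict today's contribution from the current rating.
        for user, actual in day:
            predicted = ratings.get(user, init_rating)
            errors.append(abs(predicted - actual))
        # 2. Update each rating with what actually happened.
        for user, actual in day:
            prev = ratings.get(user, init_rating)
            ratings[user] = 0.8 * prev + 0.2 * actual
    return ratings, sum(errors) / len(errors)
```

Because each day's prediction is made before that day's data is seen, the error estimate reflects genuine forecasting rather than fitting to past behaviour.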
Lim looks forward to using the methodology he has created to profile influencers on social media. He cites the ongoing debate in the United States over whether face masks should be compulsory as a case in point, where choosing the right influencers to disseminate public service announcements is critical.