PETER: About a Chatbot named Checkbot

 

CAN ALGORITHMS SAVE US FROM DISINFORMATION?

 

DECEMBER 24, 2020 — HANNAH RICHTER

Illustration: Lucie Ménétrier

 
 
Disinformation is designed to manipulate.

59% of social media users will share an article without even reading it. By reposting an article based solely on the headline, users run the risk of spreading disinformation. Disinformation is intended to do harm; it is designed to manipulate. The stories are often entirely made up or use deliberately manipulated content. But it is not only individuals sharing unread articles who help disseminate disinformation. Algorithms do too. On social media, algorithms sort the posts in a person's news feed by predicted relevance, showing what they think that person will want to see.
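To give a rough idea of what "sorting by relevance" means in practice, here is a minimal, hypothetical sketch in Python. The signals and weights are invented purely for illustration and do not represent Facebook's actual ranking system; the point is only that posts predicted to provoke engagement rise to the top, regardless of whether they are accurate.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    author_affinity: float  # how often the viewer interacts with this author (0..1)

def relevance_score(post: Post) -> float:
    """Toy relevance score combining engagement signals and personal affinity.

    Real feed-ranking systems are far more complex, but the principle is the
    same: content that is expected to generate interaction is shown first.
    """
    engagement = post.likes + 2 * post.comments + 3 * post.shares
    return engagement * (0.5 + post.author_affinity)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort the feed so the highest-scoring ("most relevant") posts appear first.
    return sorted(posts, key=relevance_score, reverse=True)
```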

Peter Jančárik, from the Slovak organisation Seesame, believes that "we live in a world where algorithms really shape our reality, sometimes for the better, but many times for the worse".

Watch Peter introduce himself here:

Cinematography: Jordy Nijenhuis

Jančárik has been fighting disinformation since 2010, and launched his latest project to combat it just a year and a half ago. His organisation built its own algorithm in the form of a chatbot, to engage people in a discussion about what is a relevant piece of information and what is, in his words, "bullshit". What started out as an experiment has slowly proved to be beneficial.

Their chatbot, named Checkbot, tries to mirror offline media literacy workshops by guiding the user to spot ‘fishy’ pieces of information and helping them distinguish what might be true and relevant from what probably is not. Seesame came up with the idea because Facebook was doing such a poor job of fact-checking in Slovakia; they thought that if Facebook couldn’t fix itself, they would try to fix it themselves by creating a chatbot.

 
We live in a world where algorithms really shape our reality, sometimes for the better, but many times for the worse.

When you visit Checkbot on Facebook Messenger and send it a link or a picture that you are unsure about, Checkbot does not tell you whether it is true or false. Instead, it guides you through a process of judging this for yourself. In essence, it is a learning tool, designed to be used only once or twice, so that users practise the critical thinking that allows them to spot disinformation on their own.
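The article does not describe Checkbot's internals, but a guided-question flow of the kind outlined above could look roughly like the sketch below. The questions, scoring, and wording are invented for illustration; this is not Seesame's actual implementation.

```python
# Hypothetical sketch of a guided media-literacy check, loosely modelled on
# the behaviour described above. Checkbot's real questions and logic are not
# public; everything here is illustrative.

QUESTIONS = [
    "Does the headline match the content of the article? (y/n)",
    "Is the author or outlet clearly identified? (y/n)",
    "Do other, independent sources report the same facts? (y/n)",
    "Does the article cite verifiable data or documents? (y/n)",
    "Is the piece free of emotionally charged, all-caps language? (y/n)",
]

def guided_check() -> None:
    """Walk the user through the questions and summarise the red flags."""
    red_flags = 0
    for question in QUESTIONS:
        answer = input(question + " ").strip().lower()
        if answer != "y":
            red_flags += 1
    if red_flags == 0:
        print("No obvious red flags -- but stay critical.")
    elif red_flags <= 2:
        print(f"{red_flags} red flag(s): treat this source with caution.")
    else:
        print(f"{red_flags} red flags: this looks 'fishy'. Verify before sharing.")

if __name__ == "__main__":
    guided_check()
```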

With so many people now using social media to access the news, and a huge proportion relying on Facebook as their primary source, these platforms play a major role in polarisation and the spread of disinformation, and they need to be held responsible. Their algorithms filter what information people do and do not see, and are set in a way that promotes polarising posts. Yet the platforms are doing very little to combat this, or, as Jančárik says, "nothing substantial anyway". So, despite Facebook’s huge responsibility, Jančárik doubts it will ever actually accept it.

How exactly does Checkbot work? Peter explains it here:

Jančárik doesn’t think it’s good to encourage people to turn to algorithms all the time, and wanted to avoid users becoming a “slave to the algorithm”, so he wasn’t too keen to solve the problem of algorithms by creating another one. However, by creating a learning process and raising awareness, Checkbot is doing a crucial job that, according to Jančárik, should really be carried out officially by all social media platforms themselves. 

When asked whether this kind of software is the way forward in uncovering disinformation and stopping its spread online, Jančárik says he does believe it is part of the solution, but that it is not a ‘silver bullet’. You cannot outsource years of media literacy and critical thinking skills to a chatbot, something Seesame is quite realistic about.

 
Checkbot is a small piece in the puzzle.

In times of crisis, like the current pandemic, Jančárik found that media consumption in Slovakia changed, with people seeking information from trusted mainstream media sources. However, as Covid-19 progressed and sickness and death rates remained relatively low in the country, Slovaks gradually began to ask whether the pandemic was real, resulting in an upsurge in healthcare disinformation sites. These sites had fallen silent at the beginning of the pandemic but are now back up and running as usual. The shift in media consumption was, unfortunately, short-lived for the country.

So, can algorithms save us from disinformation? Not really, but Checkbot is at least a small piece of the puzzle.


 
Hannah Richter