by Allan LEONARD @FactCheckNI (9 July 2017)
The fourth global conference of fact-checking projects took place at the Google Campus in Madrid, 5-7 July 2017, with 188 delegates representing 53 countries. Fact-checkers were joined by participants from the academic and technology sectors, which made for richer and more thought-provoking discussions.
Advanced Google techniques
We were welcomed by Millan Berzosa from Google News Lab, who educated us on some advanced techniques using their online tools, such as digging into Google Scholar, Google Books, Google Earth, and Google Translate (translate those televised protest posters for yourself!). For videos, you can analyse YouTube videos frame-by-frame (comma = back; period/full stop = forward), and there are ways to reverse-search a video as well as its thumbnail. All useful additions to a fact-checker’s arsenal of verification tools.
‘Stitch and bitch’: Review of Code of Principles process
Other delegates attended classroom sessions on creating a database of fact-checked stories and on exercising best practices for fact-checking. I joined fellow signatories of the International Fact-Checking Network’s code of principles for a ‘stitch and bitch’ session about the verification process, which proved more constructive than complaining: compiling the requisite information may have been a hassle for us, but making it easier for our audience to see is, of course, for the better.
Using Facebook better
The classroom was turned over to learning good fact-checking impact metrics. Meanwhile, back in the auditorium, Edouard Braud from Facebook showed us how some of their tools could assist our efforts. These included a fake news reporting tool, a false news education campaign, a related articles feature within postings, and their false news detection programme. All are integrated into their Facebook Journalism Project, which Braud encouraged us to access. Yet I was unclear about how our individual fact-checking projects could become official Facebook media partners, in order to use some of the advanced Facebook search features demonstrated.
All of us finished the pre-conference workshop day with a ‘speed-meeting’ session, with every delegate guaranteed to meet three others over 10-minute intervals. It proved a fun and clever way to discover potential collaborations and further signposting. At least mine did!
Mantzarlis began with a concise reflection on the world of fact-checking since the previous global conference, with a pointed remark that ‘post-truth’ is both “preposterous” and “lazy editorialising”. Instead, he argued that fact-checking is about providing the ground rules for democratic dialogue. But Mantzarlis added a collective challenge: we should be measuring our impact as much as we measure our audience.
Pastor got straight to her point: “All politicians and governments lie.” She added that while not every single politician may lie, all institutions of state will be found lying at some point. Pastor reminded me of a Spanish version of the BBC’s Jeremy Paxman, who is known for his scepticism when interviewing political representatives: “Why is this lying bastard lying to me?” This typical attitude found a receptive audience among fellow journalists in the room; there were no politicians present.
Adair, like Mantzarlis, looked back on the expansion of fact-checking, commenting that Global Fact 1, just three years ago, filled an ordinary classroom; we now needed an auditorium and a wide-angle lens for the group photo. “There’s one thing you can be sure of,” Adair told us, “fact-checking will keep growing!” He added that a new challenge is the emergence of governments and propaganda outfits pretending to also be fact-checkers.
Show and tell
The next session was a set of four ‘show and tell’ examples in the industry.
First up was Julien Pain (France Info), who described and showed a short video of himself doing ‘vox-pop’-style interviews with individuals in the streets of Paris, about claims that have appeared on social media. Pain was motivated to bring dialogue into this social media bubble, and he broadcasts his interviews on Facebook itself (as well as an edited piece for tv). “I now get insulted, which means I am reaching my audience!” Pain concluded.
Michelle Lee (Washington Post) explained how they track President Trump’s falsehoods, which started as a first-100-days project but which public demand persuaded them to keep going. The accompanying graphic is derived from a database of claims and analysis, which is open for public interaction.
Alberto Puoti (RAI) presented a video clip of former Italian prime minister Matteo Renzi being fact-checked on air, in person. Puoti summarised Renzi’s reaction into three tactics: (1) ridicule the fact-checker; (2) reformulate the statement; and (3) fact-check the fact-checkers. Puoti offered the following challenges of fact-checking in the tv format: (1) securing the guest’s attendance; (2) working out the plot of the dialogue; (3) maintaining the relationship with the guest; and (4) the format of the presentation. As with previous speaker Ana Pastor, this reflects the traditional confrontational role of journalists; I wondered whether RAI had ever had a guest whose claim was demonstrated to be true.
Rebecca Iannucci (Duke Reporters’ Lab) took a nostalgic trip, explaining how the pop-up bubbles of a 1990s tv music video programme (VH1’s Pop-Up Video) have evolved into live fact-check pop-ups. She demonstrated this with examples from the 2016 US presidential debates, using manual interventions; her aspiration is for voice recognition technology to automatically create relevant pop-ups for any previously recorded video.
The keynote address was by Katherine Maher (Wikimedia Foundation), who suggested a Wikipedia approach to fact-checking, ‘enlisting the public and avoiding trolls’. There is some scepticism among the fact-checking community of such an approach — even in our training we advise double-checking the source links in a Wikipedia article. Yet Maher made a strong defence of Wikipedia as a ‘knowledge ecosystem’ — a place where articles are a consensus of the truth of that topic, not an absolute truth. Referencing pundit Stephen Colbert’s term ‘truthiness’, she said, “Truth changes. Information is volatile.”
Maher argued that their policies of neutrality, verifiability, transparency, and inclusivity are what make Wikipedia work: “It’s a good thing that it works in practice, because it wouldn’t work in theory!” Her explanation reminded me of the self-regulation of popular Northern Ireland political blog site, Slugger O’Toole, where there is a critical mass of readers who will ensure nothing gets too out of hand: “We all have eyes on each other, not in an Orwellian sense, but to be accountable to each other.”
Furthermore, Maher argued that while facts matter, they are not enough on their own; we need conversation in order to achieve a consensus on what is true. It is good information that gets us what we want.
This was an excellent segue into the following presentation by Briony Swire-Thompson (University of Western Australia), who mooted, “Why do people continue to believe in misinformation?” Swire-Thompson presented evidence showing that although individuals reverted over time towards their original incorrect belief about a particular claim, they did not revert completely; they stayed in the corrected-claim sphere.
This confirmed previous speaker Thomas Wood’s (Ohio State University) research, which utterly debunked the so-called ‘backfire effect’ of claim corrections. Meanwhile, Emeric Henry (Sciences Po) made an important point that the salience of the claim topic under scrutiny will affect how well a correction sticks.
All of these findings compelled me to present my as yet unproven hypothesis of a ‘hierarchy of truth’, with a base of data and words (“words alone don’t make a poem”), which beget facts, which beget information (“but this is where it gets slippery, as we get information from our own personal experiences”), which beget opinions (“which don’t need to be based on rationality”), which are all trumped by values and beliefs. As Professor Mari Fitzduff once told me, people rarely change their beliefs, but they may be motivated to change their behaviour. I want to explore this cognitive phenomenon.
The next session demonstrated collaborative fact-checking efforts around the world. Helje Solberg (Faktisk) explained how they assembled a ‘team of rivals’; Pauline Moullot (Libération) discussed her organisation’s participation in the CrossCheck project during the recent French presidential elections (and the benefit of access to audiences across competitors); Juan Esteban Lewin (La Silla Vacía) discussed how they reached and engaged audiences on the closed-platform network WhatsApp.
This question and answer session explored why and when fact-checking organisations should collaborate. For Solberg, the competition is not [other Norwegian media channels] but Facebook and Google. Moullot described how her organisation has collaborated internationally with other organisations, where the usual competition does not apply; she gave the example of Refugee Check (2015). The consensus among the panellists was that whether to collaborate depends upon the goal and whether there is a proposed project of mutual benefit; session moderator Fergus Bell (Dig Deeper Media) likened this to the film The Avengers!
Automated fact-checking tools
Four more panellists gave practical evidence of applying automated fact-checking tools in their work. Mevan Babakar (Full Fact) showed both their live fact-checking tool and their monitoring tool (which reveals who is repeating inaccurate information); Babakar said that these tools are being used to build a body of evidence. Bill Adair (Duke Reporters’ Lab) introduced ‘ClaimBuster’, software that can scan text — such as the Congressional Record, Hansard, or your own — and identify phrases for potential claim research; Adair nicknamed this the ‘robot intern’. Adrien Sénécat (Le Monde) presented the ‘Les Décodeurs’ programme that can categorise types of websites: (1) satire; (2) doubtful; (3) propaganda; and (4) reliable; further explanation is available in their Decodex Verification Guide. Finally, Pablo Martín Fernández (Chequeado) described the challenge of linguistics in the machine learning of language; Joe O’Leary (Full Fact) replied with an interesting suggestion of using cases where automated claim verification failed as a way of improving machine learning.
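To give a flavour of what a ‘robot intern’ does, here is a minimal sketch of claim spotting. This is illustrative only — ClaimBuster itself uses a trained classifier, not hand-written rules — but the idea is the same: scan sentences of a transcript and flag those that look checkable (percentages, quantities, trends, comparisons).

```python
import re

# Hand-written patterns that often signal a checkable factual claim.
# (Purely illustrative; real systems learn these signals from labelled data.)
CHECKWORTHY_PATTERNS = [
    r"\d+(\.\d+)?\s*(%|percent|per cent)",           # percentages
    r"\b(million|billion|thousand)\b",               # large quantities
    r"\b(rose|fell|increased|decreased|doubled)\b",  # trend verbs
    r"\b(more|less|fewer|higher|lower)\s+than\b",    # comparisons
]

def candidate_claims(text):
    """Split text into sentences and keep those matching any pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in CHECKWORTHY_PATTERNS)]

transcript = ("Unemployment fell by 3% last year. "
              "I am delighted to be here today. "
              "We spend more than 2 billion euros on roads.")
for claim in candidate_claims(transcript):
    print(claim)
```

Run on the hypothetical transcript above, the first and third sentences are flagged for a human fact-checker to research; the pleasantry in between is ignored.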
What do you want from IFCN?
I participated in a breakaway session on improving impact assessment, facilitated by Peter Cunliffe-Jones (Africa Check). We did not reach a consensus, which was not much of an improvement on a similar session at Global Fact 3 last year. This may be because, although fact-checking projects have an agreed methodology and sign up to a common code of principles, we do not work to a common objective. Some projects are embedded with mainstream media outlets and serve as ‘the fourth estate’, checking the authority of state institutions; other projects operate where the freedom of the press itself is not respected or even established; and meanwhile there is the debate on how far fact-checkers should go from verification towards presenting a consensus on ‘the truth’. As ever, context matters.
The rapporteurs from all the breakaway sessions presented an impressive set of ‘top ideas’ (or as Bill Adair put it, the longest to-do list for Alexios Mantzarlis!). My favourites were:
- More (complimentary) cross-syndication of fact-check articles
- A portal to share our published resources
- Tools to identify claims; a hub of tools
- Education smartphone app for journalists and students
- Pre-publication screening tool for fact-check articles
- A common curriculum for fact-checking education courses/training
- Practical advice for fact-checking projects: fundraising, technology, business operations
- Protecting the concept of fact-checking
- Evaluating fact-checking education programmes
- Sharing good practice (via videos and webinars)
- Centralised list of unreliable websites
Share the Facts
Bill Adair (Duke Reporters’ Lab) gave a demonstration of the ‘Share the Facts’ widget, as an effective means of increasing exposure to your fact-check articles. An unintended consequence, he explained, of more organisations implementing this widget is the creation of a global database of these articles. Later, a participant asked a Google representative whether the creators of this database (we who publish the original articles) can have access to the database itself.
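The global database that Adair described is possible because each embedded widget carries structured, machine-readable metadata about the fact check. A sketch of what that metadata looks like, using the related schema.org ClaimReview format (all of the values below are hypothetical, for illustration only):

```python
import json

# Illustrative ClaimReview (schema.org) structured data for a fact-check
# article. Every value here is made up; a real widget would carry the
# publisher's own claim, verdict, and URL.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/factcheck/article-123",  # hypothetical URL
    "claimReviewed": "The budget doubled last year.",
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "datePublished": "2017-07-09",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "A. Politician"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

# Serialised as JSON-LD, ready to embed in a page's
# <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```

Because every participating organisation publishes the same fields — who said what, who checked it, and the verdict — the individual articles aggregate naturally into a searchable database of fact checks.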
Fact-checking in the classroom
The final session of the day was on fact-checking in the classroom. Gabriela Jacomella (Factcheckers.it) described how she created a classroom package; I was intrigued by their Play/Decide card game, aimed at increasing the participatory process. Cristina Tardáguila (Lupa) outlined her ‘products’: chargeable workshops lasting 2/4/6/8 hours, as well as a MOOC (massive open online course) lasting 4 weeks with 18 videos and weekly exercises and forum discussions. Matt Oxman (Informed Health Choices) explained his organisation’s analysis of an extensive project in Uganda, working with schoolchildren: “Adults have less time to learn, and they have to unlearn.” Hache Merpert (Chequeado) presented their ‘teens strategy’, a combination of the education system, informal learning (non-school), and social media; indeed, for Merpert the passion he experienced at weekend camps led to his belief that the way forward was to get young people who are passionate about politics to write better speeches using facts.
The subsequent audience discussion included what and who to teach (create a long list of concepts, then ask educators to prioritise; 12-18 year olds are a good target for critical skills training); how to prevent recipients from distrusting all news (highlight reliable evidence; don’t publish only claims proven false); and the dimension of ethics (this is addressed within media literacy programmes, beyond fact-checking in the classroom).
Although I missed some of the morning sessions (blame a team bonding session with Full Fact the night before!), I made it in time for Tom Rosenstiel’s (American Press Institute) review and proposal for fact-checking — that we should spend more time on maximising network spread (as ‘fake news’ creators do).
Rosenstiel argued that we need to move from a literal dimension of fact-checking (claim verification) to a contextual level of providing information — to go from claim-centric to issue-centric research. He suggested having ‘understanding an issue’ as the atomic unit of a fact-check article. (This reminded me of Maher’s (Wikimedia Foundation) presentation and her framework of a crowdsourced ‘consensus of truth’.)
Rosenstiel also suggested that fact-checkers work more with local communities, in order to identify and prioritise issues of interest, rather than react to discourse agendas set by politicians. Phoebe Arnold (Full Fact) provided an example via a freshly posted tweet, visually mapping out a contemporary discussion on social care. Likewise, at FactCheckNI our primary stakeholders are voluntary and community sector organisations, reflecting the values of our grantor, Building Change Trust. Bill Adair (Duke Reporters’ Lab) summed up Rosenstiel’s presentation with a complimentary remark about this two-pronged strategy: (1) better-informed citizens make better decisions; and (2) keeping political representatives accountable.
[Fact-checkers aren’t fond of the term ‘fake news’!]
Claire Wardle (First Draft News) began the discussion by announcing that ‘fake news’ is a swear word, forbidding the panellists from uttering the phrase (more familiar swear words were then heard!). David Mikkelson (Snopes.com) said that a drawback of a non-journalist approach to fact-checking is that it is reactive and can’t keep up with the proliferation of ‘non-fact-checked news’ (avoiding the swear word); he suggested being more proactive and writing longer articles, in order to get more good information in front of eyes (perhaps akin to public service announcements?).
Clara Jiménez (Maldita.es (El Objetivo)) explained how they “clean the shit off the streets” by engaging with users on their platforms, such as WhatsApp chains and Facebook walls; this led to users themselves correcting the false claims of others, and to a subsequent forum to assist and support such users. Mehmet Atakan Foça (Teyit.org) described fact-checking as the antidote, “but we need to do more teaching for people to do it themselves”.
The subsequent question and answer session focussed on a Venn diagram by Mantzarlis, between spheres of ‘debunking’ and ‘verification’. Wardle asked for a better term for the overlapping sphere; someone suggested ‘truth warriors’. Another suggested that the sphere encompassing everything was simply called ‘journalism’.
PS. Post-conference, I read an article that offers an acceptable alternative phrase to ‘fake news’: junk news.
Aine Kerr (The Journalism Project, Facebook) explained how Facebook’s mission includes creating ‘informed communities’, where it wanted to “amplify the good and mitigate the bad”. She then reviewed several tools put forward to achieve this, including fake news education, ‘Perspectives’, and ‘Related Articles’.
Philippe Colombet (Google) began by suggesting that “building a more informed world takes journalists and technologists working together” — that this has been the case in the past and will remain so. He reviewed tools of the Google News Lab: trust and verification; data journalism; immersive storytelling; and inclusive media.
Mantzarlis asked the speakers what they saw as particularly challenging for the next two to five years. Colombet replied, “Defending the open web.” Indeed, as both Facebook and Google are providing tools to fact-check claims found and propagated on the open internet, a significant threat remains on closed platforms, such as WhatsApp and Snapchat. Meanwhile, Kerr appeared receptive to the audience request for complimentary credit for their ‘Boost’ advertising scheme, such as the Google AdWords perk that non-profit organisations enjoy.
Aaron Sharockman (PolitiFact) should win a prize for presenting a slide with his own authored quotation! “You cannot begin to charge for something until you know what it actually costs.” His abiding message was: “Fact-checking is hip. Your work has value.” Sharockman described PolitiFact’s membership scheme, which emphasises access and unique experiences over handouts like stickers and mugs. A final tip when recruiting supporters was to ask them why they are joining — you learn their motivation, which can help develop that donor relationship.
I took good notes through most of the sessions, but as Alexios Mantzarlis began his speech, I became mesmerised by the power of his words. He made a formidable defence of the crucial importance of the project work of everyone in the room, acknowledged by sincere collaboration with major platform providers and reflected by the geography and reach of our individual organisations:
“In the past three days I have been reminded of the particular combination of self-criticism, dedication, curiosity and commitment that make up your average fact-checker. I am proud of this community. We will falter and we will fail, but I know we will keep on asking the one question that guides our work: Where are the facts?” our friend and comrade Alexios finished.
This message was underlined by Bill Adair in his own remarks, who told us that he gets emotional when he thinks of the inspiration that our work provides. Adair gave thanks to Mantzarlis, who justly deserved the standing ovation.
I came to Global Fact 4 to see known colleagues and with the hope of meeting new ones with mutual interests. I did. My objectives of discovering potential lines of collaboration were facilitated by the speed-meeting and breakaway sessions, with follow-up conversations during the breaks and evening dinners and receptions. There was plenty of flexibility in the agenda to seek one’s interests.
Thought nourishment was abundant. Mantzarlis could not have done better in the all-star line-up of guest speakers and panellists. Every single one was interesting to listen to and learn from. I have many notes annotated with follow-up arrows.
One concluding thought of mine is that there is not a singular context of fact-checking — country and cultural context matters. But as each of our fact-checking projects wages a campaign for truth, we can continue to learn from our successes as well as our failures (so we can be evermore effective).
As Bill Adair said in a presentation he gave as a guest of ours in Belfast, we do not live in an age of post-truth; we live in the age of the fact-checker.
With the International Fact-Checking Network providing peer-to-peer support as well as moral and professional guidance through its code of principles, may we ensure that this motto endures.