Fact-checking is a good thing, but who fact-checks the fact-checkers’ methodologies? How do we know if the fact-checking methods actually yield accurate results?
To determine the validity of the methods used to test whether a source is telling the truth, the Intelligence Advanced Research Projects Activity (IARPA) has set forth a challenge.
There are many so-called fact-checking organizations on the web. Many people turn to sites like Snopes when they have a “hoax or not” type of question.
They pull up the homepage, ask the question, and the results are given, but hardly anybody thinks about how the truth is actually determined.
How does a site like Snopes determine what is true and what is fake? How does a site like Snopes make money? Are there potential external influences that might persuade the fact-checkers to lean a certain way? Are there possible flaws in the way fact-checkers evaluate the truth?
IARPA CASE Challenge
Through IARPA, the US intelligence community is looking for ways to assess and improve upon current methods of testing that go into determining whether a source is truthful or not.
The IARPA Credibility Assessment Standardized Evaluation (CASE) Challenge launched on January 3. It is an open competition offering up to $125,000 in prizes to participants who can develop a novel and creative method for measuring the accuracy of current and future credibility assessment techniques used across government, academia, and industry.
This challenge is focused on the methods used to evaluate credibility assessment techniques or technologies, rather than on the techniques or technologies themselves.
The CASE Challenge focuses on how to assess a broad range of tools and techniques, not on developing specific credibility tools and techniques.
Credibility assessment refers to both the assessment of the truthfulness of specific claims and to the assessment of the reliability, honesty, and trustworthiness of a source of a particular claim, whether that be an individual, group, or a broader organization or entity.
In theory, the results of this challenge could be applied across a wide variety of industries, including journalism.
Now, more than ever, people are focused on verifying who, or what, is honest and trustworthy. We see this reflected everywhere from the national media to our professional and personal conversations – every day we must make decisions about someone’s credibility.
According to IARPA, “For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, looking at a user badge, validating with other people – or in more formal settings, verifying biometrics or recording someone’s physiological responses, i.e. the polygraph.
“Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized and rigorous tests to evaluate how comprehensive or accurate such tools really are.”
Where The Truth Lies
There are many tools to help us determine the truth, such as body language, polygraph tests, authority figures, common sense, and past experiences, but none of these tools on its own can be 100% accurate 100% of the time.
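To make that concrete, here is a minimal sketch of the kind of measurement the challenge is after: scoring a tool's truthful/deceptive verdicts against known ground truth. Everything below is hypothetical and invented for illustration; it does not describe any actual tool, dataset, or IARPA evaluation method.

```python
# Hypothetical sketch: scoring a credibility assessment tool against
# ground truth. All data here is invented toy data for illustration.

def score_tool(verdicts, ground_truth):
    """Compare a tool's truthful (True) / deceptive (False) verdicts
    to statements whose veracity is already known."""
    assert len(verdicts) == len(ground_truth)
    tp = sum(1 for v, g in zip(verdicts, ground_truth) if v and g)
    tn = sum(1 for v, g in zip(verdicts, ground_truth) if not v and not g)
    fp = sum(1 for v, g in zip(verdicts, ground_truth) if v and not g)
    fn = sum(1 for v, g in zip(verdicts, ground_truth) if not v and g)
    total = len(verdicts)
    return {
        "accuracy": (tp + tn) / total,
        # How often a lie was rated truthful:
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        # How often a true statement was rated deceptive:
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# Invented toy run: True means "the tool judged this statement truthful".
verdicts     = [True, True, False, True, False, False, True, False]
ground_truth = [True, False, False, True, True, False, True, False]
print(score_tool(verdicts, ground_truth))
# → {'accuracy': 0.75, 'false_positive_rate': 0.25, 'false_negative_rate': 0.25}
```

Even this toy example shows why a single "accuracy" number is not enough: a tool that rates lies as truth (false positives) may be far more dangerous in practice than one that doubts true statements, which is part of why standardized, rigorous evaluation methods matter.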
The US intelligence community is looking to make fact-checking and truth-telling methodologies more accurate.
However, the search for truth seems to be more in the realm of philosophy, religion, and the arts rather than in a government challenge. Perspectives and paradoxes abound.
As with the famous double-slit experiment: is it a particle, or is it a wave? Depending on your point of view and manner of measurement in space and time, it could be both simultaneously!
Where does the truth lie?