LiarLiarLiar.org is an effort to

—make it easier to search fact checkers, by searching many at once. It’s a bit like searching Kayak for airfares instead of going to one airline website after another.

—make it easier to check someone’s reputation for passing around true statements of fact, or falsehoods, about public issues. It’s a bit like checking someone’s reputation as a seller on Ebay before buying something.

Logic and background of the current design:

Purposes

Related prior efforts

Disinformation, and current attempts at remedies

The potential, and some caveats

Conclusion

Purposes

LiarLiarLiar will offer two free services. Some redesign is expected to follow the current critique/discussion phase, but not a complete change of focus. As currently contemplated, the services are essentially the same whether the site is developed as a .org (as currently named), i.e. potentially grant- or donation-supported, or is reformulated as a .com, i.e. a startup company.

The first service is simply designed to make fact-checking easier. Up to now, searching fact-checkers has been a lot of work, sometimes for little result. There are dozens of them, to begin with, and it takes time to go to one after another. And often the end result is discovering that the fact you want to check hasn’t even been looked at by any fact-checker you’ve found. This site changes that, by making it easy to search across any number of fact-checkers at once and get a combined report quickly. It also makes it easy to focus on just the fact-checkers you trust.
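
To make the first service concrete, here is a minimal sketch in Python of how a combined search across several fact-checkers might work. Everything in it is illustrative rather than a description of the site’s actual implementation: the checker names, the Finding structure, and the stub lookup function are placeholders, and a real version would query each checker’s own site or API.

```python
import concurrent.futures
from dataclasses import dataclass

@dataclass
class Finding:
    checker: str   # which fact-checking organization issued the verdict
    claim: str     # the claim as that checker recorded it
    verdict: str   # e.g. "true", "false", "misleading"
    url: str       # link to the checker's full write-up

def lookup_stub(checker_name: str, query: str) -> list:
    """Placeholder for a real lookup against one fact-checker's site or API."""
    return []  # a real implementation would return that checker's matching Findings

# Hypothetical roster; in the real service the user picks which checkers to include.
CHECKERS = ["Checker A", "Checker B", "Checker C"]

def combined_report(query: str, trusted: list) -> list:
    """Query every trusted checker in parallel and merge the results into one report."""
    selected = [name for name in CHECKERS if name in trusted]
    findings = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(lookup_stub, name, query) for name in selected]
        for future in concurrent.futures.as_completed(futures):
            findings.extend(future.result())
    return findings

if __name__ == "__main__":
    report = combined_report("claim text to check", trusted=["Checker A", "Checker C"])
    for f in report:
        print(f"{f.checker}: {f.verdict} ({f.url})")
```

The point of the sketch is simply that the expensive part, careful human research, has already been done once by each fact-checker; the site only gathers, filters (to the checkers the user trusts) and merges what they have published.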

The second service is designed to make it much easier to check whether someone (or some organization) has a good or bad record for telling the truth about public issues. In other settings, this kind of check on someone’s reputation is routine. All of us are already rated for all sorts of other purposes, e.g. as renters by Airbnb, or as taxi riders by Uber, and more. But there has not previously been any kind of service that would make it easy to find out whether a given person is in the habit of passing on misinformation or outright disinformation. Some might say that’s even more important to know than whether a person you’re interested in has been an untidy guest, or is rude to taxi drivers. 

Today, every person or organization with active accounts on social media is creating a public, written record of statements that can be searched. The same is true for anyone who contributes to blogs and many other forms of publication. Fact-checkers, however, are typically designed to deal with one asserted fact at a time. It’s expensive to pay staff to research fact questions carefully, so every fact-checking organization has to choose what to cover, and you could search more than a few without finding any that have looked at the particular fact question that concerns you. What’s more, they typically aren’t set up to track who is still spreading around an already debunked piece of nonsense. So for an individual reader, it would be all but unworkable to use fact-checkers to develop a rounded sense of someone’s overall veracity.

Compiling, recording, displaying and preserving different fact-checkers’ conclusions, however, are jobs that computers can be programmed for. The same is true for checking what someone has written, across any number of different social media and other sources. And a computer can do these jobs fast enough to avoid testing the patience of a user who just wants a quick answer. 
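
One way to picture the “reputation” side is as a simple weighted tally of fact-checkers’ published verdicts on statements a given person has passed along. The sketch below, again in Python, is only an illustration of the idea: the verdict labels, the numeric scores attached to them, and the averaging formula are assumptions made for the example, not the engine’s actual scoring method. What it does show is two properties of the design discussed elsewhere on this page: the calculation is transparent, and the user decides how much weight each fact-checker gets, including none at all.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CheckedStatement:
    author: str    # the person or organization who passed the statement along
    checker: str   # the fact-checker that evaluated the underlying claim
    verdict: str   # that checker's published conclusion

# Illustrative mapping of verdict labels to scores; real checkers use varied scales.
VERDICT_SCORES = {"true": 1.0, "mostly true": 0.5, "misleading": -0.5, "false": -1.0}

def reputation_score(statements, author: str, checker_weights: dict) -> Optional[float]:
    """Weighted average of verdicts on statements the author has passed along.

    checker_weights reflects how much the *user* trusts each fact-checker;
    checkers given a weight of 0 (or left out) are simply ignored.
    """
    total, weight_sum = 0.0, 0.0
    for s in statements:
        if s.author != author:
            continue
        weight = checker_weights.get(s.checker, 0.0)
        score = VERDICT_SCORES.get(s.verdict.lower())
        if weight > 0 and score is not None:
            total += weight * score
            weight_sum += weight
    return total / weight_sum if weight_sum else None  # None: nothing to score

# Example with two hypothetical findings about the same author.
records = [
    CheckedStatement("example_author", "Checker A", "false"),
    CheckedStatement("example_author", "Checker B", "mostly true"),
]
print(reputation_score(records, "example_author", {"Checker A": 1.0, "Checker B": 0.5}))
# Prints -0.5 on a scale from -1.0 (consistently debunked) to 1.0 (consistently accurate).
```

Because the score is nothing more than arithmetic over findings other people have already published, anyone looking at it can see exactly which findings, and which user-chosen weights, produced it.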

Related prior efforts

This design has its origins in long experience of conflict management, together with growing familiarity with grey zone conflict. See particularly Project Seshat and Convenor Conflict Management. The design’s evolution has already benefited greatly from the advice of experts in cyber and other areas of security, as well as in research methodology.

We have also been active in previous efforts that relate more specifically to people’s differing interpretations of facts. Those prior efforts, however, have generally assumed that the differences of opinion they try to help resolve are held in good faith. The Honeyman, Adler et al. Deliberation Engine/1 is an example of such a tool design. A related stream of writing, e.g. past articles by Kaufman, Honeyman & Schneider/2 on the causes of, and possible remedies for, the conflict management field’s ineffectiveness in the biggest disputes, also tends to assume good faith.

However, we’ve known for a long time that not everyone who speaks publicly is doing so in good faith. Disinformation campaigns are real and have existed for a long time. But there have been major recent changes in how easy it is to create or enlarge such a campaign. Misinformation, while less strategic in its origins, can also now spread much faster and farther than was true even a decade ago.

Disinformation, and current attempts at remedies

Many people are particularly concerned about the rise of disinformation as a disciplined, highly organized tactic, especially its grey zone conflict (or hybrid warfare) uses, where it could even be argued to be the central element among many tactics. And there are growing efforts to combat it. However, most of these try, in effect, to deal with one individual lie or distortion after another. This is inherently vulnerable to the age-old saying that a lie can travel halfway around the world while the truth is still putting on its shoes./3 Even a somewhat speedier version of fact-checking, the helpful Community Notes approach now used by X (formerly Twitter), and its equivalents on YouTube and elsewhere that allow volunteer fact-checkers to comment in real time on new postings, suffers from a structural inability to address a pattern of repeated falsehoods by the same author.

A second set of approaches focuses on mass public education. While this approach has had its successes among early adopters, e.g. some Scandinavian school systems, all of these strategies to date have been complicated, expensive and slow to spread.

In the meantime, some “frequent flyers” in the disinformation trade have outrun the systems designed to combat them, by deliberately spreading more false statements than anyone can bat down, a tactic known as “flooding the zone.” Up to now they have suffered few clear penalties for doing so, and the same is true of their followers.

But suppose we were to change both the assumption of good faith and the focus on famous or notorious actors as the transmitters of lies. Concentrating on the incentives for making statements in good or bad faith, and on the often-overlooked followers and retransmitters of disinformation, may be more fruitful.

The potential, and some caveats

Right up front, a reality check is in order. A “reputation engine” such as that suggested here is unlikely to have a direct influence on the most notorious among political actors. Some are simply too big, and have effectively insulated themselves by cultivating followers who have already dismissed many previous findings of their leader’s untruthfulness, for whatever reasons. Some more philosophical cautions can be found at A Note about Facts.

However, major actors do not operate in a vacuum. Instead, they have often been described as operating in an echo chamber, in which their statements frequently originate with other, relatively obscure players (particularly when the “fact” in question is part of a hybrid warfare campaign) and are in turn amplified by others who are also not such public figures. And beyond even those, and in much larger numbers, are people who are gullible but not personally malevolent, and who pass on the lie or distortion without knowing it to be so, but also without checking into it. Disinformation campaigns depend on adoption by “unknown” people for their reach and power.

We believe it should be possible to change the incentives of at least some of those involved, particularly the thousands who are merely passing on bogus information. For the user, the process is a bit like checking a landlord’s reputation on Airbnb, a driver’s on Uber, or a seller’s on Ebay: not that hard, in other words. But the cumulative effect may be to discourage disinformation campaigns, by creating a long-term price for passing disinformation along.

An engine such as this is likely to be most effective when focused on people who probably didn’t create the disinformation in the first place, i.e. followers. Fortunately, this provides some leverage that’s lacking with “major” figures. Unlike those figures, followers do not have the same opportunities to gain fame or wealth by constantly campaigning in public, so they typically gain very little personally from being part of a disinformation campaign. And yet they have something to lose.

For example, many of them, particularly among social media influencers, tend to be quite young. The prospect that a possible future romantic partner could easily look up whether someone is a consistent and public liar, via a reputation “score” that’s easily calculated, might itself become well known quite quickly. Such a score is also likely to be checked by others, perhaps including casual acquaintances as well as prospective employers, landlords and more. After all, prospective landlords and employers, and yes, possible romantic partners, already routinely check people out using various other tools. From their point of view, this adds one more opportunity to understand someone better before investing time or resources in them. Similar incentives could apply to many middle-aged and older people, whose desire for a reputation for probity within their local faith community, local parents’ and civic groups, their neighbors and, again, possible future employers (etc.) may be equally influential.

The prospect of getting a reputation for falsehoods that can be so easily searched has potential to become a significant disincentive to spreading untrue statements. A possible analogy is the rise of convenient ways of keeping close tabs on your own credit history. This is not limited to pressures placed in recent years on traditional credit reporting services, such as Equifax or TransUnion, to make everyone’s reports freely and conveniently available to the person involved. A whole industry has arisen in response, with Credit Karma just one example among multiple competitors; and as a result, many people are much more conscious of their credit ratings now.

Of course, one very desirable possible effect of making it more convenient to look up someone’s reputation is to encourage everyone who is active on social media to check a supposed fact for themselves before passing it on, especially if fact-checking itself could be made easier. So we offer that service too.

It’s worth emphasizing that many who pass along pieces of disinformation are doing so quite casually and even thoughtlessly. It’s not as if every person distributing a conspiracy theory or blatant lie has the same level of motivation as the person who thought it up, who may well be a Russian or Chinese military disinformation specialist, or a shadowy hacker group subcontracted for that purpose. Up to now there has been no practical way of tracking an ordinary person’s passing-along of such material over time, so there has been little in the way of a price for such thoughtlessness. But once there is a price, even a modest one, that could become a significant disincentive for people whose rewards for promoting lies are also modest./4

Conclusion

Making it easier to check facts can only increase people’s willingness to devote at least minimal time and effort to checking one, before hitting “send” or the equivalent. At the same time, making it easier to compile political actors’ statements and sources of influence over time would help to penalize deliberate liars and conspiracy theorists by ensuring that their past statements can’t be buried so easily by a “flood the zone” strategy. But probably more important, making such a check easier would tend to reduce the number of people prepared to pass along such lies without stopping to think. This is a potential weapon against disunity in Western democracies, and particularly against disinformation campaigns conducted for hybrid warfare purposes. 

———————————————————————————-


/1 See Christopher Honeyman, Peter S. Adler, Colin Rule, Noam Ebner, Roger Strelow, & Chittu Nagarajan. 2013. Chapter 24: A Game Of Negotiation: The “Deliberation Engine”. In Honeyman, C., Coben, J., and Lee, A. W-M. Educating Negotiators for a Connected World. DRI Press. Available at https://open.mitchellhamline.edu/dri_press/4/

/2 See Kaufman, S., Honeyman, C. and Schneider, A. K. 2017. “Should they Listen to Us? Seeking a Negotiation / Conflict Resolution Contribution to Practice in Intractable Conflicts.” Journal of Dispute Resolution, 2017 Symposium Issue. Available at https://scholarship.law.missouri.edu/jdr/vol2017/iss1/9/

Also see Kaufman, S., Honeyman, C. and Schneider, A. K. (2007) “Why don’t they listen to us? The Marginalization of Negotiation Wisdom”, in Dupont, C., ed., Négociation et Transformations du Monde, Éditions Publibook, Paris.

/3 A recent Washington Post article efficiently summarizes ways of nailing down a fact or falsehood circulating in social media. Unfortunately it also shows just how much work that is, using currently available methods. See https://www.washingtonpost.com/technology/2024/misinformation-ai-twitter-facebook-guide/ (requires subscription)

/4 It’s worth noting that not all the people who may be revealed as serially untruthful by an engine such as this are casual about it. The proposed design allows for the possibility that some people will be inclined to try to suppress the dismaying findings about themselves, perhaps by suing somebody, a type of “lawfare” increasingly common in hybrid warfare campaigns. But the design is structured to encourage anyone so inclined to think again. Since the engine simply reports the balance of factual findings made elsewhere, and makes its calculations transparent, anyone inclined to be litigious would have no clear line of attack in court. Meanwhile, even an attempt to bring a case would invite additional publicity for exactly the findings they do not want reported. And since in this design the user determines which fact-checkers to weigh heavily and which to ignore, that autonomy makes it likely to be even harder to sue the engine successfully, because the engine could well be viewed as a kind of common carrier: it’s the user who decides where to start, and where to go. A possible attempt at “lawfare” should not be discounted. But it is doubtful that the gambit would work out well for an attacker.