The Triage of Truth: Do Not Take Expert Opinion Lying Down, by Julian Baggini

Brain illustration from 'The Principles and Practice of Medicine' by W. Osler, 1904, public domain via Wikimedia Commons

The thirst for knowledge is one of humankind’s noblest appetites. Our desire to sate it, however, sometimes leads us to imbibe falsehoods bottled as truth. The so-called Information Age is too often a Misinformation Age.

There is so much that we don’t know that giving up on experts would be to overreach our own competency. However, not everyone who claims to be an expert is one, so when we are not experts ourselves, we can decide who counts as an expert only with the help of the opinions of other experts. In other words, we have to choose which experts to trust in order to decide which experts to trust.

Jean-Paul Sartre captured the unavoidable responsibility this places on us when he wrote in Existentialism and Humanism (1945): ‘If you seek counsel – from a priest, for example – you have selected that priest; and at bottom you already knew, more or less, what he would advise.’

The pessimistic interpretation of this is that the appeal to expertise is therefore a charade. Psychologists have repeatedly demonstrated the power of motivated thinking and confirmation bias. People cherry-pick the authorities who support what they already believe. If majority opinion is on their side, they will cite the quantity of evidence behind them. If the majority is against them, they will cite the quality of evidence behind them, pointing out that truth is not a democracy. Authorities are not used to guide us towards the truth but to justify what we already believe the truth to be.

If we are sincerely interested in the truth, however, we can use expert opinion more objectively without either giving up our rational autonomy or giving in to our preconceptions. I’ve developed a simple three-step heuristic I’ve dubbed ‘The Triage of Truth’, which can give us a way of deciding whom to listen to about how the world is. The original meaning of triage is to sort according to quality, and the term is most familiar today in the medical context of determining the urgency of treatment required. It’s not infallible, and it’s not an alternative to thinking for yourself, but it should at least prevent us from making some avoidable mistakes. The triage asks three questions:

  •  Are there any experts in this field?
  •  Which kind of expert in this area should I choose?
  •  Which particular expert is worth listening to here?
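For readers who think procedurally, the three questions above can be read as a simple decision flow. The sketch below is purely illustrative: the function name, the inputs, and the string verdicts are all invented for the example, and every input stands in for a judgment the reader must still make for themselves.

```python
def triage_of_truth(field_has_experts, credible_kinds, candidate):
    """Illustrative sketch of the three-step triage.

    field_has_experts: is there genuine expertise in this field at all?
    credible_kinds: the kinds of expert we have reason to trust in it.
    candidate: a particular expert, described by kind and track record.
    """
    # Step 1: Are there any experts in this field?
    if not field_has_experts:
        return "no expertise to be had in this field"
    # Step 2: Which kind of expert should I choose?
    if candidate["kind"] not in credible_kinds:
        return "dismiss this kind of expert outright"
    # Step 3: Which particular expert is worth listening to?
    if candidate["track_record"] == "good":
        return "trust, in proportion to the evidence"
    return "look for a better individual expert"
```

Note that the code settles nothing by itself: deciding whether a field admits of expertise, which kinds are credible, and what counts as a good track record are exactly the judgments the essay argues we cannot outsource.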

In many cases there is no simple yes or no answer. Economic forecasting, for example, admits of only very limited mastery. If you are not religious, on the other hand, then no theologian or priest can be an expert on God’s will.

If there is genuine expertise to be had, the second stage is to ask what kind of expert is trustworthy in that domain, to the degree that the domain admits of expertise at all. In health, for example, there are doctors with standard medical training, but also herbalists, homeopaths, chiropractors and reiki healers. If we have good reason to dismiss any of these modalities, then we can dismiss any particular practitioner without needing to give them a personal assessment.

Once we have decided that there are groups of experts in a domain, the third stage of triage is to ask which particular ones to trust. In some cases, this is easy enough. Any qualified dentist should be good enough, and we might not have the luxury of picking and choosing anyway. When it comes to builders, however, some are clearly more professional than others.

The trickiest situations are those where the domain admits significant differences of opinion. In medicine, for example, there is plenty of genuine expertise, but the incomplete state of nutritional science means that we have to take much advice with a pinch of salt, including advice on how big that pinch should be.

This triage is an iterative process in which shifts of opinion at one level lead to shifts at others. Our beliefs form complex holistic webs in which parts support each other. For example, we cannot decide in a vacuum whether there is any expertise to be had in any given domain. We will inevitably take into account the views of experts we already trust. Every new judgment feeds back, altering the next one.

Perhaps the most important principle to apply throughout the triage is the 18th-century Scottish philosopher David Hume’s maxim: ‘A wise man … proportions his belief to the evidence.’ Trust in experts always has to be proportionate. If my electrician warns me that touching a wire will electrocute me, I have no reason to doubt her. Any economic forecast, however, should be seen as indicating a probability at best, an educated guess at worst.

Proportionality also means granting only as much authority as is within an expert’s field. When an eminent scientist opines on ethics, for example, she is exceeding her professional scope. The same might be true of a philosopher talking about economics, so be cautious about some of what I have written, too.

This triage gives us a procedure but no algorithm. It does not dispense with the need to make judgments; it simply provides a framework to help us do so. To properly follow Immanuel Kant’s Enlightenment injunction ‘Sapere aude’ (Dare to know), we have to rely on both our own judgment and the judgment of others. We should not confuse thinking for ourselves with thinking by ourselves. Taking expert opinion seriously is not passing the buck. No one can make up your mind for you, unless you make up your mind to let them.

This article was originally published at Aeon and has been republished under Creative Commons.

~ Julian Baggini is a writer and founding editor of The Philosophers’ Magazine. His latest book is A Short History of Truth (2017). (Bio credit: Aeon)

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, entirely supported by patrons and readers like you. Please offer your support today!

How Can We Answer for Answerability?, by Hannah Tierney

Measles illustration from The Practical Guide to Health by Frederick M. Rossiter, 1908, public domain via Wikimedia Commons

Jenny McCarthy is a celebrity in the United States and a prominent anti-vaccine activist. She is the president of Generation Rescue, a non-profit that advocates the view that autism is at least partially caused by vaccines, and she has written several books promoting this view. Since 2007, she has been featured on several media outlets where she has been asked to defend her views on the relationship between the Measles, Mumps, and Rubella (MMR) vaccine and autism. In many of these interviews, it’s clear that those questioning McCarthy are trying to hold her morally responsible for her view by demanding that she justify her position—by holding her answerable. Despite these numerous calls for McCarthy to justify herself, she hasn’t changed her view on vaccines (although in a 2010 interview with Frontline, McCarthy clarified the position of her group). In fact, calling on McCarthy to defend herself in the public sphere arguably only serves to legitimize her views and expose them to larger audiences. Though the CDC determined that measles was eliminated in the United States in 2000, a record number of measles cases was reported in 2014, due in large part to an increase in the refusal to vaccinate. If we think that McCarthy’s position on vaccines is incorrect and her advocacy of that position is blameworthy, how can we hold her responsible for her behavior without reinforcing the very behavior we find blameworthy?

Cases like these pose a problem for philosophers who work on moral responsibility. Following the work of T. M. Scanlon, many philosophers argue that there is a relationship between moral responsibility and answerability—the demand for justification. Of course, philosophers have argued about how exactly responsibility and answerability relate to each other. But both those who argue that moral responsibility should be identified with answerability (Smith 2012) and those who argue that answerability only captures one facet of moral responsibility (Shoemaker 2011) face a problem.

In many cases, when we attempt to hold someone morally responsible for an action by demanding that they answer for their behavior, the person, rather than see the error in their ways, can become even more confident in their reasons for action and refuse to alter their behavior. This can have quite damaging effects when the behavior in question is dangerous, violent, or qualifies as a public health risk. Such cases place those who defend the relationship between moral responsibility and answerability in a precarious position. If the very means by which we hold people responsible for blameworthy behavior only serves to worsen that blameworthy behavior, then it’s hard to see why we should hold people morally responsible in the first place. And, if the answerability account of moral responsibility can’t easily be operationalized, then perhaps we should look for another theory of moral responsibility. Though those who defend the answerability account have remained relatively silent on how to successfully hold an agent answerable, the behavioral sciences can help address this question. By developing an account of answerability that is informed by this research, answerability theorists can shield themselves from the worry that their view can never be successfully operationalized.

The case of Jenny McCarthy is not an isolated incident. Objecting to people’s beliefs is notoriously ineffective in changing those beliefs. Confirmation bias (Lord et al. 1979)—the tendency to accept evidence that supports one’s previously held beliefs and discount evidence that doesn’t—is a robust phenomenon that has been found in a wide variety of contexts. The backfire effect is perhaps even more pernicious: when given evidence against a belief, people will reject the evidence and hold the original belief even more strongly (Nyhan & Reifler 2010). Asking people to give their reasons for their beliefs is also unsuccessful when it comes to changing those beliefs (Fernbach 2013). But if neither objecting to people’s views nor asking them to provide their reasons causes them to see the error in their ways, how are we to successfully hold people answerable? Is answerability a misguided account of moral responsibility?

Those who defend an answerability account of moral responsibility, whether they think answerability just is moral responsibility or answerability captures only a facet of moral responsibility, remain vague about how we can successfully hold people answerable. Angela Smith argues: “In my view, to say that an agent is morally responsible for some thing is to say that the agent is open, in principle, to demands for justification regarding that thing” (Smith 2012, 578). But we can demand justification in many different ways, and we can do so more or less successfully. Though asking an agent to respond to arguments against her view or asking her to list her reasons are demands for justification, they are largely ineffective when it comes to getting agents to jettison morally problematic beliefs and curbing morally blameworthy behavior. Are there more effective ways to demand justification from moral agents? This is a question that the behavioral sciences can help illuminate.

One recent study indicates that asking people to explain their beliefs and the policies they endorse is more effective at reining in extreme beliefs than asking them to respond to objections to their views or to list their reasons for their beliefs (Fernbach 2013). In particular, getting participants to explain the causal mechanisms at play in the political policies they endorsed undermined the illusion of deep understanding many participants felt, making them more likely to adopt less extreme policy positions. Fernbach and his collaborators also found that the call for explanation made participants less likely to donate money to organizations that supported their previously held political positions. Not only did the demand for explanation rein in extreme beliefs, it also played a role in changing participants’ behavior.

Answerability theorists may be right that holding people morally responsible should involve a demand for justification. But how we demand justification matters when it comes to altering people’s morally blameworthy beliefs and behavior. Thus, answerability theorists should focus on developing operational views of answerability, which are informed by the behavioral sciences.

~ Hannah Tierney is a Ph.D. candidate in the Philosophy program at the University of Arizona. She has broad philosophical interests, but writes mainly on issues of moral responsibility, personal identity, and the self, and is also interested in experimental philosophy and cognitive science.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Works Cited

Fernbach, P., T. Rogers, C. Fox, and S. Sloman. 2013. Political extremism is supported by an illusion of understanding. Psychological Science 24: 939-946.

Lord, C., L. Ross, and M. Lepper. 1979. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37: 2098-2109.

Nyhan, B., and J. Reifler. 2010. When corrections fail: The persistence of political misperception. Political Behavior 32: 303-330.

Scanlon, T. M. 2008. Moral Dimensions: Permissibility, Meaning, Blame. Cambridge, MA: Belknap Press of Harvard University Press.

Shoemaker, D. 2011. Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics 121: 602-632.

Smith, A. 2012. Attributability, answerability, and accountability: In defense of a unified account. Ethics 122: 575-589.