On the Value of Intellectuals, by Brad Kent

“George Bernard Shaw near St Neots from the Millership collection” from the Birmingham Museums Trust, CC BY-SA 4.0 via Wikimedia Commons.

In times of populism, soundbites, and policy-by-Twitter such as we live in today, the first victims to suffer the slings and arrows of the demagogues are intellectuals. These people have been demonised for prioritising the very thing that defines them: the intellect, or finely reasoned and sound argument. As we celebrate the 161st birthday of Bernard Shaw, one of the most gifted, influential, and well-known intellectuals to have lived, we might use the occasion to reassess the value of intellectuals to a healthy society and why those in power see them as such threats.

Born in Dublin on 26 July 1856 to a father who held heterodox religious opinions and a mother who moved in artistic circles, Shaw was perhaps bound to be unconventional. By age 19 he was convinced that his native Ireland was little more than an uncouth backwater–the national revival had yet to see the light of day–so he established himself in London in order to conquer English letters. He then took his sweet time to do it. In the roughly quarter of a century between his arrival in the metropole and when he finally had a modicum of success, Shaw wrote five novels–most of which remained unpublished until his later years–and eked out a living as a journalist, reviewing music, art, books, and theatre. That eminently readable journalism has been collected in many fine editions, and we see in it an earnest individual not only engaged in assessing the qualities of the material before him–much of which was dreadfully insipid–but eager to raise standards and to cultivate the public. He prodded people to want more and gave them the tools to understand what a better art would look and sound like. And he did so in an inimitable voice that fashioned his renowned alter ego: the great showman and controversialist, GBS.

“George Bernard Shaw, circa 1900” from the Library of Congress, Public Domain via Wikimedia Commons.

Shaw became more widely known as a playwright in late 1904, when King Edward VII broke his chair laughing at the Royal Command performance of Shaw’s play John Bull’s Other Island. He was no longer a journalist by trade, now being able to live by his plays, but Shaw continued to write essays, articles, and letters to the editor in leading papers to set the record straight, to denounce abuses of power, and to suggest more humane courses of action. When he published his plays, he wrote polemical prefaces to accompany them that are sometimes longer than the plays themselves. These prefaces, written on an exhausting range of subjects, are equally learned and entertaining. Indeed, it has been said by some wags that the plays are the price that we pay for his prefaces.

In many ways continuing his fine work as the Fabian Society’s main pamphleteer in the 1890s, his prefaces suggest remedies for the great injustices of his time. And, what’s more, the vast majority of his prescriptions remain as topical and provocative today as they were then. For example, if you’re American, should you opt for Trumpcare or Obamacare? Read The Doctor’s Dilemma and its preface and you’ll have a compelling case for neither, but rather for a comprehensive and fully accessible public healthcare system, the sort now common in Canada and most European countries. That’s right, people were feeling the Bern–we might say the original Bern–well before Mr. Sanders was born.

Some of Shaw’s opinions came at a great cost. When he published Common Sense About the War, which was critical of both German and British jingoism at the outset of the Great War, he ran too much against the grain of the hyper-patriotic press and government propaganda, thereby becoming a pariah to many. But his star gradually returned to the ascendant as the body count mounted and a war-weary population came to share his point of view. The runaway international success of Saint Joan brought him the Nobel Prize for Literature in 1925 and, as Shaw said, gave him the air of sanctity in his later years.

“George Bernard Shaw with Indian Prime Minister Jawaharlal Nehru, May 1949”, from Nehru Memorial Museum & Library. Public Domain via Wikimedia Commons.

However, Shaw always maintained that he was immoral to the bone. He was immoral in the sense that, as a committed socialist in a liberal capitalist society, he didn’t support contemporary mores. Instead, he sought to change the way that society was structured and to do so he proposed absolutely immoral policies. A good number of these beyond universal healthcare have seen the light of day, such as education that prioritises the child’s development and sense of self-worth, the dismantling of the injustices of colonial rule, and voting rights for women. But those in power continue the old tug-of-war, and the intellectuals of today must be as vigilant, courageous, and energetic as Shaw in the defence of liberal humanist and social democratic values. Witness the return of unaffordable tertiary education in the UK, made possible by both Labour and Conservative policies. We might recall that Shaw co-founded one such institution–the renowned London School of Economics–because he believed in its public good.

Whenever Shaw toured the globe in his later decades–he died in 1950 at age 94–he was met by leading politicians, celebrities, and intellectuals who wanted to bask in his wit, wisdom, and benevolence (Jawaharlal Nehru, Charlie Chaplin, and Albert Einstein are a few such people). Time magazine named him amongst the ten most famous people in the world–alongside Hitler and the Pope. Everywhere he went, the press hounded him for a quote. Yet despite the massive fees he could have charged, he never accepted money for his opinions, just as he had declined speaking fees in his poorer days when he travelled Britain to give up to six three-hour lectures a week to praise the benefits of social democracy. He would not be bought–or suffer the appearance of being bought.

On his birthday, then, we would do well to think of Shaw and maybe even read some of his plays, prefaces, or journalism. We might also cherish the service and immorality of intellectuals. And we should always question the motives of those who denigrate their value.

This piece was originally published in OUPBlog: Oxford University Press’s Academic Insights for the Thinking World

Before You Can Be With Others, First Learn to Be Alone, by Jennifer Stitt

In 1840, Edgar Allan Poe described the ‘mad energy’ of an ageing man who roved the streets of London from dusk till dawn. His excruciating despair could be temporarily relieved only by immersing himself in a tumultuous throng of city-dwellers. ‘He refuses to be alone,’ Poe wrote. He ‘is the type and the genius of deep crime … He is the man of the crowd.’

Like many poets and philosophers through the ages, Poe stressed the significance of solitude. It was ‘such a great misfortune’, he thought, to lose the capacity to be alone with oneself, to get caught up in the crowd, to surrender one’s singularity to mind-numbing conformity. Two decades later, the idea of solitude captured Ralph Waldo Emerson’s imagination in a slightly different way: quoting Pythagoras, he wrote: ‘In the morning, – solitude; … that nature may speak to the imagination, as she does never in company.’ Emerson encouraged the wisest teachers to press upon their pupils the importance of ‘periods and habits of solitude’, habits that made ‘serious and abstracted thought’ possible.

In the 20th century, the idea of solitude formed the centre of Hannah Arendt’s thought. A German-Jewish émigré who fled Nazism and found refuge in the United States, Arendt spent much of her life studying the relationship between the individual and the polis. For her, freedom was tethered to both the private sphere – the vita contemplativa – and the public, political sphere – the vita activa. She understood that freedom entailed more than the human capacity to act spontaneously and creatively in public. It also entailed the capacity to think and to judge in private, where solitude empowers the individual to contemplate her actions and develop her conscience, to escape the cacophony of the crowd – to finally hear herself think.

In 1961, The New Yorker commissioned Arendt to cover the trial of Adolf Eichmann, a Nazi SS officer who helped to orchestrate the Holocaust. How could anyone, she wanted to know, perpetrate such evil? Surely only a wicked sociopath could participate in the Shoah. But Arendt was surprised by Eichmann’s lack of imagination, his consummate conventionality. She argued that while Eichmann’s actions were evil, Eichmann himself – the person – ‘was quite ordinary, commonplace, and neither demonic nor monstrous. There was no sign in him of firm ideological convictions.’ She attributed his immorality – his capacity, even his eagerness, to commit crimes – to his ‘thoughtlessness’. It was his inability to stop and think that permitted Eichmann to participate in mass murder.

Just as Poe suspected that something sinister lurked deep within the man of the crowd, Arendt recognised that: ‘A person who does not know that silent intercourse (in which we examine what we say and what we do) will not mind contradicting himself, and this means he will never be either able or willing to account for what he says or does; nor will he mind committing any crime, since he can count on its being forgotten the next moment.’ Eichmann had shunned Socratic self-reflection. He had failed to return home to himself, to a state of solitude. He had discarded the vita contemplativa, and thus he had failed to embark upon the essential question-and-answering process that would have allowed him to examine the meaning of things, to distinguish between fact and fiction, truth and falsehood, good and evil.

‘It is better to suffer wrong than to do wrong,’ Arendt wrote, ‘because you can remain the friend of the sufferer; who would want to be the friend of and have to live together with a murderer? Not even another murderer.’ It is not that unthinking men are monsters, that the sad sleepwalkers of the world would sooner commit murder than face themselves in solitude. What Eichmann showed Arendt was that society could function freely and democratically only if it were made up of individuals engaged in the thinking activity – an activity that required solitude. Arendt believed that ‘living together with others begins with living together with oneself’.

But what if, we might ask, we become lonely in our solitude? Isn’t there some danger that we will become isolated individuals, cut off from the pleasures of friendship? Philosophers have long made a careful, and important, distinction between solitude and loneliness. In The Republic (c. 380 BCE), Plato proffered a parable in which Socrates celebrates the solitary philosopher. In the allegory of the cave, the philosopher escapes from the darkness of an underground den – and from the company of other humans – into the sunlight of contemplative thought. Alone but not lonely, the philosopher becomes attuned to her inner self and the world. In solitude, the soundless dialogue ‘which the soul holds with herself’ finally becomes audible.

Echoing Plato, Arendt observed: ‘Thinking, existentially speaking, is a solitary but not a lonely business; solitude is that human situation in which I keep myself company. Loneliness comes about … when I am one and without company’ but desire it and cannot find it. In solitude, Arendt never longed for companionship or craved camaraderie because she was never truly alone. Her inner self was a friend with whom she could carry on a conversation, that silent voice who posed the vital Socratic question: ‘What do you mean when you say …?’ The self, Arendt declared, ‘is the only one from whom you can never get away – except by ceasing to think.’

Arendt’s warning is well worth remembering in our own time. In our hyper-connected world, a world in which we can communicate constantly and instantly over the internet, we rarely remember to carve out spaces for solitary contemplation. We check our email hundreds of times per day; we shoot off thousands of text messages per month; we obsessively thumb through Twitter, Facebook and Instagram, aching to connect at all hours with close and casual acquaintances alike. We search for friends of friends, ex-lovers, people we barely know, people we have no business knowing. We crave constant companionship.

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life. Before we can keep company with others, we must learn to keep company with ourselves.

~ Jennifer Stitt is a graduate student in the history of philosophy at the University of Wisconsin-Madison. Bio credit: Aeon

This article was originally published at Aeon and has been republished under Creative Commons.

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

Happy Birthday, Grace Lee Boggs! Bio and Book Review by Ashley Farmer

Grace Lee Boggs, by Kyle McDonald, CC BY 2.0, via Wikimedia Commons (cropped)

“The Power and Importance of Ideas”: Grace Lee Boggs’s Revolutionary Vision

In the opening lines of her autobiography, Living for Change, Grace Lee Boggs remarked: “Had I not been born female and Chinese American, I would not have realized from early on that fundamental changes were necessary in our society.”[1] A daughter of Chinese immigrants born in 1915, who, by her account, became a philosopher in her 20s and an activist in her 30s, Boggs remains one of the greatest radical theorists of the twentieth century.

Born in Rhode Island, Boggs spent her childhood in New York City, working in the two restaurants her father owned in Times Square. At the age of 16, she left home to attend Barnard College, and afterward, Bryn Mawr, where she earned a PhD in Philosophy in 1940. Philosophers like Hegel helped her “see [her] own struggle for meaning as part of the continuing struggle of the individual to become part of the universal struggle for Freedom.”[2] Boggs moved to Chicago in 1940. She began working with the South Side Tenants Organization set up by the Workers Party, a Trotskyist group that had split off from the Socialist Workers Party. Her time in the Windy City proved transformative. For the first time she was talking and working with the black community, getting a first-hand sense of what it meant to live within the confines of segregation and discrimination, and learning how to participate in grassroots organizing.[3]

It was also during her tenure with the Workers Party that she met Caribbean radical C.L.R. James, and began a “theoretical and practical collaboration that would last twenty years.”[4] As part of a small wing of the Workers Party led by James and Raya Dunayevskaya, Boggs became a leading theoretician, co-authoring texts like State Capitalism and World Revolution (1950). Through James, she came into contact with a number of black writers and activists who expanded her perspective. She relocated to Detroit in 1953, where she would organize with, and marry, James (Jimmy) Boggs.

During the 1950s, Boggs “mainly listened and learned” from the black activists around her in an effort to better understand the black condition. It would take several years before she decided that she had been “living in the black community long enough to play an active role in the Black Power Movement that was emerging organically in a Detroit where blacks were becoming the majority.”[5] Living and working in what was considered to be an epicenter of black radicalism, Boggs engaged in a combination of theorizing and protesting, authoring texts with James Boggs, meeting and organizing with Malcolm X, and mentoring young radicals like Muhammad Ahmad (Max Stanford), leader of the Revolutionary Action Movement (RAM).

Her liberation theory was grounded in her study of philosophy and honed through her experiences organizing with and for black communities. It was also constantly evolving. Boggs emphasized dialectical thinking, arguing that reality is ever changing and that we must “constantly be aware of the new and more challenging contradictions that drive change.”[6] This reciprocal process drove her expansive vision of revolution. In her final book, The Next American Revolution, she explained her latest concept of revolution:

The next American Revolution, at this stage in our history, is not principally about jobs or health insurance or making it possible for more people to realize the American Dream of upward mobility. It is about acknowledging that we as Americans have enjoyed middle-class comforts at the expense of other peoples all over the world. It is about living the kind of lives that will not only slow down global warming but also end the galloping inequality both inside this country and between the Global North and Global South. It is about creating a new American Dream whose goal is a higher Humanity instead of the higher standard of living dependent on Empire.[7]

Boggs consistently offered a holistic vision of revolution and concrete steps through which to build it. She argued that achieving this goal meant more than organizing or mobilizing to petition the state or “changing the color of political power,” but rather growing food, reinventing education, developing Peace Zones in local neighborhoods, and creating restorative justice programs. She saw the seeds of revolution everywhere and showed us how, by practicing dialectical thinking, breaking down divides and categories, and building on rather than replicating older political models, we might “grow our souls.” She mirrored this in her own life, constantly “combining activity and reflection.”[8] Her willingness to do the work, her ability to listen and learn from black activists, her commitment to living in the communities in which she organized, and her openness to revising her politics and values made her an effective life-long ally of the black community and theoretician of liberation and revolution.

As she noted, often, “in the excitement of an emerging movement, we tend to want to be part of the action, and we underestimate the power and importance of the ideas in our heads and hearts.”[9] Upon her death, it’s important to revisit the ideas in her head. She left us a roadmap for revolution through ideas and action, one that anyone could be a part of if they were clear about the stakes of the transformation and that fundamental change is necessary.

Originally published at the African American Intellectual History Society blog, republished under Creative Commons

~ Ashley Farmer is an historian of African-American women’s history. Her research interests include women’s history, gender history, radical politics, intellectual history, and black feminism. She earned a BA in French from Spelman College, an MA in History from Harvard University, and a PhD in African American Studies from Harvard University. She is currently a Provost Postdoctoral Fellow in the History Department at Duke University. In August 2016, she will be an Assistant Professor in the Department of History and the African American Studies Program at Boston University. This bio and more about Ms. Farmer are to be found at her personal website

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[1] Grace Lee Boggs, Living for Change: An Autobiography (Minneapolis: University of Minnesota Press, 1998), xi.

[2] Ibid., 30-31.

[3] Ibid., 36.

[4] Ibid., 43. James and Boggs “went their separate ways in 1962.”

[5] Grace Lee Boggs with Scott Kurashige, The Next American Revolution: Sustainable Activism for the Twenty-first Century (Berkeley: University of California Press, 2011), 66.

[6] Ibid., 62.

[7] Ibid., 72.

[8] Ibid., 164.

[9] Ibid., 80.

Happy Birthday, W.V.O. Quine!

WVO Quine on the Bluenose II in Halifax, Nova Scotia, photo courtesy of Douglas Quine (cropped)

The emphases in my own education in philosophy were Ethics, Politics, and Law, so I didn’t spend as much time studying Willard Van Orman Quine’s great contributions to philosophy as I would like. Had my focus been Mathematical Logic, Epistemology, Philosophy of Language, or Philosophy of Science, however, I would have spent a lot of time with the prodigious output of his remarkable intelligence. But one of his important observations is generally brought up in introductory philosophy classes, an epistemological (having to do with knowledge) quandary: given that science continuously makes new discoveries, sometimes in the process overturning and replacing earlier theories, how can we ever say that we actually know anything about the world? Science relies on the fact that all theories are subject to revision, expansion, and being proved wrong. Does this mean, then, that there’s no such thing as knowledge, since, in theory, anything we claim to know may be disproved by later discoveries?

For Quine, there is no dividing line between science and philosophy; they are interconnected ways of discovering and understanding the world. As the Stanford Encyclopedia of Philosophy puts it, Quine ‘denies that there is a distinctively philosophical standpoint, which might, for example, allow philosophical reflection to prescribe standards to science as a whole. He holds that all of our attempts at knowledge are subject to those standards of evidence and justification which are most explicitly displayed, and most successfully implemented, in the natural sciences. This applies to philosophy as well as to other branches of knowledge.’ The Internet Encyclopedia of Philosophy says further, ‘…Quine often appeals to [Otto] Neurath’s metaphor of science as a boat, where changes need to be made piece by piece while we stay afloat, and not when docked at port. He further emphasizes that both the philosopher and scientist are in the same boat (1960, 3; 1981, 72, 178). The Quinean philosopher then begins from within the ongoing system of knowledge provided by science, and proceeds to use science in order to understand science. …his use of the term “science” applies quite broadly referring not simply to the ‘hard’ or natural sciences, but also including psychology, economics, sociology, and even history (Quine 1995, 19; also see Quine 1997). But a more substantive reason centers on his view that all knowledge strives to provide a true understanding of the world and is then responsive to observation as the ultimate test of its claims…’

Oh, and he played the mandolin and piano, and learned a lot of languages just so he could deliver his lectures in the native language of the audience. Whatta guy!

Learn more about the great W.V.O. Quine:

W. V. Quine, Philosopher Who Analyzed Language and Reality, Dies at 92 – by Christopher Lehmann-Haupt for The New York Times, Dec 29, 2000

Willard Van Orman Quine – by Peter Hylton for The Stanford Encyclopedia of Philosophy

Willard Van Orman Quine: Philosophy of Science – by Robert Sinclair for The Internet Encyclopedia of Philosophy

Willard Van Orman Quine, 1908-2000: Philosopher and Mathematician – Website by Douglas B. Quine, W.V.O. Quine’s son

Willard Van Orman Quine – by Luke Mastin for The Basics of Philosophy: A huge subject broken down into manageable chunks

Willard Van Orman Quine – In Wikipedia, The Free Encyclopedia.

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

Happy Birthday, Blaise Pascal!

Blaise Pascal, crayon drawing by Jean Domat, c. 1649, in the Bibliothèque Nationale, Paris

Blaise Pascal, born on June 19th, 1623 in Auvergne, France, was a mathematician, philosopher, physicist, scientist, theologian, inventor, and writer.

This polymath was so talented in so many areas that any one of them could have kept his memory and influence alive to this day. Steven West writes in Philosophize This that we could ‘feel completely inadequate when we learn that he invented the calculator (yes, the calculator) at age 18.’ David Simpson writes in The Internet Encyclopedia of Philosophy, ‘In mathematics, he was an early pioneer in the fields of game theory and probability theory. In philosophy, he was an early pioneer in existentialism. As a writer on theology and religion, he was a defender of Christianity.’ Jean Orcibal and Lucien Jerphagnon write for Encyclopædia Britannica, ‘He laid the foundation for the modern theory of probabilities, formulated what came to be known as Pascal’s principle of pressure, and propagated a religious doctrine that taught the experience of God through the heart rather than through reason.’

Pascal’s Wager is probably his best-known idea, keeping his name alive in popular culture as well as among scholars. The argument can be summarized thus: you can believe in God, or not believe in God. When it comes to the effect of your state of belief on your possible eternal afterlife, if you don’t believe in God, you may very well be damned for all eternity; but if you do believe in God, you may achieve salvation. When it comes to the effect of your state of belief on your life here on Earth, your life will not be hugely affected either way. We’re all constrained as a matter of course by cultural expectations and codes of behavior, after all, and the religious constraints we might sometimes find inconvenient or even tedious don’t, generally, burden life significantly more than any others, while the practice of religion can add a great deal of meaning and satisfaction to life. Since we stand to gain much more than we might lose, all in all, it’s the most logical and therefore best bet to believe in God.
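The wager’s structure is, at bottom, an expected-value argument of the kind Pascal’s own pioneering work in probability made possible. A small sketch can make the logic explicit — the payoff numbers here are purely illustrative placeholders, not anything Pascal specified:

```python
# A toy decision matrix for Pascal's Wager. Payoffs are illustrative:
# an infinite stake on one side of the table swamps any finite,
# earthly costs or benefits on the other.
INF = float("inf")  # stand-in for an infinite eternal reward

payoffs = {
    ("believe", "god_exists"): INF,      # salvation
    ("believe", "no_god"): -1,           # minor earthly inconvenience
    ("disbelieve", "god_exists"): -INF,  # possible damnation
    ("disbelieve", "no_god"): 1,         # minor earthly freedom
}

def expected_value(choice, p):
    """Expected payoff of a choice if God exists with probability p."""
    return p * payoffs[(choice, "god_exists")] + (1 - p) * payoffs[(choice, "no_god")]

# However small the probability p, as long as it is nonzero,
# belief dominates disbelief:
for p in (0.5, 0.01, 0.000001):
    assert expected_value("believe", p) > expected_value("disbelieve", p)
```

This is exactly why the wager doesn’t depend on how likely you think God’s existence is: any nonzero probability multiplied by an infinite payoff outweighs every finite consideration.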

Now Pascal is an extremely intelligent man, and he knows belief is something you can’t just flip on like a light switch. That’s why he advises that the prudent person will choose to believe in God for the reasons described above, and then behave as if they already believed. With enough acts of piety, religious study, and time among other virtuous and true believers, they are bound to end up believers themselves. This is very insightful psychologically: it accords well with what we now know about how the brain works, and his ‘fake it till you make it’ belief formation process is very like modern cognitive behavioral therapy. Enact the change you wish to see in your mind, and your mind will follow, or to use modern terminology in common use, your brain will be ‘rewired’.

I’ve heard many people object to Pascal’s Wager on the grounds that religion has too many negative effects for the wager in favor of God-belief to be a good bet. To ignore what your reason tells you about how unlikely it is that anything exists outside of the natural world, and about how the contradictions within and among the scriptures of the world indicate that none of them are divinely inspired, makes you a traitor to reason, critical thinking, and science. It undermines your ability to perceive and understand the real world on its own terms. And betraying your own powers of reason, so that you can feel safe about an afterlife that no-one can demonstrate happens anyway, infantilizes you by subjugating your critical thinking to your superstitious fears.

I don’t buy the first objection anymore, though I once found it convincing. After all, few are better at reasoning and critical thinking than Pascal. His formidable powers of reason don’t appear undermined in any serious way, considering his incredible lifetime achievements in mathematics, physics, logic, practical invention, and science. Choosing to believe in God clearly doesn’t seem to have hampered his intellect one bit.

I still object to the Wager, but on these grounds: I think Pascal, unjustifiably, assumes too much when constructing his argument in the first place. Why, for example, bet on the idea that God would even be pleased by and liable to reward belief in him, even if eventually sincere, when it originates in this sort of self-serving calculation? Why not assume instead that God, if he exists, would reward honesty itself, whether in believers or nonbelievers, so long as their state of belief results from good faith efforts to seek truth? This seems, to me, more in line with the inclinations of the creator of a rational, ordered universe, the ultimate expression of reason, which in turn requires fearless, honest inquiry if it’s to be known, understood, and appreciated in the fullest way possible.

But this wager is just one relatively minor result of Pascal’s exploration of this fascinating world, and given his pioneering inquiries in the areas that would later be known as probability theory and game theory, it’s not surprising that, in brainstorming, he came up with this possible solution to the problems of belief vs. reason. And whether or not he got it right, it’s long captured the public imagination and really does make us think, as he’s done exquisitely during, and throughout the centuries after, his all-too-short life.

Learn more about the great Blaise Pascal:

Blaise Pascal – by Desmond Clarke for The Stanford Encyclopedia of Philosophy

Blaise Pascal (1623–1662) – by David Simpson for the Internet Encyclopedia of Philosophy 

Blaise Pascal: French Philosopher and Scientist – by Jean Orcibal and Lucien Jerphagnon for Encyclopædia Britannica

Pascal’s Wager and – +EV your way to success!! – by Steven West, Philosophize This!

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

Happy Birthday, Adam Smith!

Adam Smith statue on the Royal Mile in Edinburgh, Scotland

Adam Smith was a philosophical disciple and life-long friend of David Hume, and as such, I encountered his ideas regularly while I was following the life and ideas of Hume a few years ago in Edinburgh. Smith wrote a moving account of Hume’s last days.

Smith was baptized and perhaps born on June 5th, 1723 in Kirkcaldy (a fishing village near Edinburgh) and died on July 17, 1790 in Edinburgh, Scotland. He attended university at Glasgow and Oxford, and found the former intellectual milieu more stimulating by orders of magnitude. Glasgow and Edinburgh were vigorous centers of Enlightenment thought in philosophy, natural philosophy (as the sciences were then known), linguistics, history, political theory, mathematics, and more. David Hume, Adam Smith, and their fellow leaders in the Scottish Enlightenment joined the ranks of this philosophical tradition’s greatest and most influential thinkers.

Like pretty much all Americans interested in basic economic theory, I’d heard a lot about The Wealth of Nations, Smith’s treatise on political economy. You likely have as well, since here you are reading a birthday tribute to Adam Smith! The Wealth of Nations is considered the foundational theoretical work on capitalism, and therefore Smith is regarded as a key figure in economic theory. But when I returned to university a few years ago to study philosophy, and when researching the life and ideas of Hume and his contemporaries for my aforementioned project, I spent more time with Smith’s moral philosophy. So I’ll focus on this aspect of his thinking here. After all, this was his main arena of inquiry: he was not an economist, but a professor of moral philosophy at Glasgow. His Theory of Moral Sentiments was, and still is to a lesser extent, respected as a major work in moral philosophy. And, I think there are enough people promoting his Wealth of Nations as, like, the best thing ever; you can find plenty to read about that on the internet.

Portrait medallion of Adam Smith by James Tassie at the National Portrait Gallery in Edinburgh, Scotland.

Smith’s Theory of Moral Sentiments emerges as a sort of compendium of moral philosophy, in which Smith fuses what he considers its best and most coherent elements into one compelling system. In it, one recognizes Humean sentimentalism, Kantian-type reason-based morality (Immanuel Kant’s work on this topic came after Smith’s, though the men were direct contemporaries), consequentialism, and Aristotelian virtue ethics. Like Hume, Smith thinks that the emotions play a central role in morality. Before Hume, morality was widely considered to be primarily a matter of reason, and morality required us to quash our emotions, or as Hume put it, passions, because humans are naturally and by default selfish, greedy, profane, lazy, and in myriad other ways fallen creatures. Hume, however, does not agree. He believes that human beings naturally identify with the pains and joys of others, internalizing them, and this causes us to want to ameliorate their circumstances; it’s this direct emotional response that drives the moral sense. Smith largely agrees, but not wholly. He also stresses the importance of sympathy (close to the sense in which we’d now usually mean empathy) in making moral judgments. Smith explains that the moral agent is like an impartial spectator who participates in the daily lives, sufferings, and joys of our fellow human beings through our emotional response to their situation.

Adam Smith portrait by John Kay from 1790 (the year of Smith’s death), at the National Portrait Gallery, Edinburgh

But Smith also believes that sympathy (empathy) is not enough: our sympathies can and should be corrected by reason, since our emotional responses can become inappropriate to the situation, corrupted by ignoble impulses such as greed, ambition, selfishness, and so on. An impartial, uncorrupted spectator would not consider indifference or cruelty, for example, as proper emotional responses to the plight of others. (I see shades of John Rawls’s ‘veil of ignorance‘ here.) One way to help us maintain moral ‘propriety’, as Smith put it, is to apply reason, and one way our reason can help us judge whether our moral sentiments are correct is to consider the consequences of actions we feel inclined to take. While the consequences of our actions don’t determine their rightness or wrongness as they do in consequentialist moral theories, they are an important consideration, and in some cases, such as those in which human life hangs in the balance, they should take precedence. And finally, Smith agrees with Aristotle that we can’t rely on a pre-determined, reason-derived, emotionally-detached set of inflexible moral principles to differentiate right from wrong, good from bad, as Kant would have it. Rather, we naturally recognize and respond to virtue when we see it. We admire its beauty and goodness and have the desire to emulate it. Aristotle sees virtue as a perfect balance between opposing qualities in the same sphere: courage is the virtue on the spectrum between cowardice and recklessness; temperance between licentiousness and insensibility; friendliness between obsequiousness and cold indifference. Smith likewise stresses the importance of balance in our moral character but focuses more on attuning our sympathies so they are in propriety, thereby driving us to act in the kindest, most honest, and fairest way towards one another as a matter of course.

This is only a very short summary of Smith’s moral philosophy by one who is by no means an expert. To learn more about the great philosopher and economist Adam Smith from those who are (including Smith himself; he’s an excellent and compelling writer), and for more about the philosophical traditions that influenced him and which he influenced in turn, see:

Adam Smith (1723—1790) – Jack Russell Weinstein for the Internet Encyclopedia of Philosophy

Adam Smith’s Moral and Political Philosophy – by Samuel Fleischacker for The Stanford Encyclopedia of Philosophy

Adam Smith pt. 1 – Specialization and Adam Smith pt. 2 – The Tip of the Iceberg Of Wealth – Stephen West discusses Adam Smith’s political economy for his blog Philosophize This!

Adam Smith on What Human Beings Are Like – Nicholas Phillipson discusses Adam Smith’s view of human beings with Nigel Warburton for Philosophy Bites podcast

Enlightenment – William Bristow for The Stanford Encyclopedia of Philosophy

Moral Sentimentalism – Antti Kauppinen for The Stanford Encyclopedia of Philosophy

The Problem With Inequality, According to Adam Smith – Dennis C. Rasmussen, Jun 9, 2016 for The Atlantic

The Theory of Moral Sentiments – Adam Smith, first published in 1759

~ Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

Why Shouldn’t We Compel Them to Come In? Locke, the Enlightenment, and the Debate over Religious Toleration, by Nicholas Jolley

Religious Liberty, at the National Museum of American Jewish History in Philadelphia, by Moses Jacob Ezekiel, 1876

Most people in the West today unreflectively accept the need for religious toleration. Of course, if pressed, they will admit that toleration, like freedom of speech, can’t be absolute; there must be some limits. Suppose, for example, that my religion calls for human sacrifice every Sunday; no one will think that such a religion should be tolerated. Again, if pressed, people will agree that there are difficult cases: to take an issue that troubled John Locke, suppose that my religion demands allegiance to a foreign power. We may think that reasonable people can disagree over such cases. But the fact that there are these problem cases doesn’t shake people’s commitment to the principle of religious toleration.

We tend to be so wedded to this principle that we can easily forget how seductive the case for intolerance can be. Consider, for instance, a person who says with an authoritative air: “I know that my religion is the true one and that yours is completely false. I also know you will go to hell if you don’t convert to my religion.” Wouldn’t it be an act of charity on his part to convert you, by force if necessary, to the religion that will ensure your happiness in the afterlife? Here one might adapt an example given by that champion of liberalism, John Stuart Mill, for another purpose. A police officer sees a person trying to cross a bridge that he knows to be unsafe. According to Mill, it’s not an unwarranted interference with the person’s liberty for the officer to use force to prevent him or her from stepping on to the bridge; he knows, after all, that the bridge is unsafe and he knows that the person doesn’t want to fall into the river. One might take a similar line in the religious case: I know that John’s religion is leading him to hell, and I know that that’s not where he wishes to end up. Theologians in the Western tradition such as Augustine have argued for intolerance along these lines, and they have buttressed their argument by appealing to the biblical text: “Compel them to come in.”

Modern liberals are likely to respond that the appeal to Mill’s example is unfair, for the analogy is far from exact. For one thing, Mill builds into his example the assumption that there is no time to warn the person about the danger of the bridge; presumably, if there were time to warn him, then other things being equal, Mill would admit that there was no case for coercion. More importantly, one might argue that no one really knows, or can know, that the doctrines of revealed religion are true; acceptance of such doctrines depends on accepting the accounts of witnesses who may be unreliable or whose words may have been misinterpreted down the ages.

The idea that no one can know that the claims of revealed religion are true is the basis for one of Locke’s main strategies of argument for religious toleration. The strategy is a powerful one, but it is open to a couple of objections. First, Locke sets the bar for knowledge very high: he allows little to count as knowledge that isn’t on a par with mathematical demonstration. By his lights, in the bridge example, even the policeman doesn’t strictly know that the bridge is unsafe. Further, even if the champion of intolerance concedes that he doesn’t strictly know his religion to be true, he may still say that he has very strong support for his beliefs, and that this level of support justifies him in coercing others. So the kind of case that Locke makes here may not be conclusive.

Fortunately, Locke has other strings to his bow. One intriguing argument turns on the nature of belief and its relation to the will. Suppose that the champion of intolerance says to the unbeliever: “You ought to believe the articles of my faith” (e.g. the doctrine of the Trinity). It seems apt for the unbeliever to reply to such a claim by saying: “It’s not in my power to believe this doctrine. You misunderstand the nature of belief. Belief is not a voluntary action like switching on a light. Rather, belief is more like falling in love; it’s something that happens to you.” One might then plug in the Kantian principle implicitly accepted by Locke: ought implies can. If belief is not in my power, and ought implies can, then I can have no obligation to believe the proposition in question.

This can seem like a powerful reply to the advocate of intolerance, but again, unfortunately, it’s not conclusive. For the advocate may say: “I agree that belief is not directly under your voluntary control, but I maintain that it is indirectly so. True, you can’t just switch on belief, but it’s in your power to do things that will result, or are likely to result, in your coming to believe.” Pascal, for instance, thought that though we can’t just believe at will, we can do things such as going to Mass and mixing with the congregation of the faithful that will have the effect of producing belief; faith, he thought, is catching. And then the intolerant person is in a position to make a case for religious persecution on the part of the state: there should be penalties for non-attendance at church so that people are induced to attend and at least to give a hearing to the teachings of the state-approved religion. This was the argument put to Locke by his opponent, Jonas Proast. Locke seeks to reply to this argument by saying that sincere religious belief can’t be produced in this way, and that it’s only sincere religious belief that is acceptable to God. Whether this reply to Proast is successful is a controversial issue among philosophers who have studied the debate. And the issue isn’t a narrowly academic one: it should be of interest to all those who seek to defend the values of the Enlightenment today.

This essay was originally published at OUP Blog: Oxford University Press’s Academic Insights for the Thinking World

~ Nicholas Jolley is Research Professor and Emeritus Professor of Philosophy at the University of California, Irvine. He has also taught at the University of California, San Diego, and Syracuse University. He is the author of a number of books for OUP, including Toleration and Understanding in Locke (2017), and Locke’s Touchy Subjects: Materialism and Immortality (2015).