O.P. Recommends: Is a Universal Basic Income too Utopian to Work?

The Moneylender and his Wife by Quinten Massijs (detail), public domain via Wikimedia Commons

I recently listened to Jack Russell Weinstein’s interview of historian and author Rutger Bregman with a great deal of interest, and the discussion is so rich in detail that I plan on listening to it again soon. The interview, available as a podcast, explores the question “Is a Universal Basic Income too Utopian to Work?” As you may know, I’m very interested in the topic of basic income, in both the philosophical and the practical justifications for providing at least a minimum living to everyone, regardless of perceived merit. I agree with Weinstein that Bregman makes a very convincing case that a basic income is not only economically feasible; it’s practical, it’s just, and it’s the right thing to do. I very much encourage you to listen; I think you’ll learn some very surprising things!

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

The Revolutionary Figure of the Beautiful, Self-Improved Soul, by Justine Kolata

Miniature room by Mrs. James Ward Thorne portraying a French salon from about 1780, ca. 1930’s, Art Institute of Chicago

In a global culture that appears increasingly obsessed with radical individualism, narcissistic presentations of self, and incendiary political rhetoric, it is hard to imagine that society once cared about the beauty of the soul. But, in the late 18th and early 19th centuries in Germany and across Europe, the pursuit of a ‘beautiful soul’ became a cornerstone of philosophical thought and popular discourse, advanced by some of the most important intellectuals of the time, including Johann Wolfgang von Goethe, Friedrich Schiller and Wilhelm von Humboldt. To these thinkers, the pursuit of inner perfectibility responded to the horrors of the French Revolution’s irrational mass action culminating in The Terror of the 1790s. Nascent notions of democracy, they believed, could be developed only if each individual achieved liberation from what Immanuel Kant described as the ‘self-incurred tutelage’ of intellectual immaturity by developing cognitive and emotional faculties through aesthetic experiences.

At the core of the beautiful soul is the idea that the individual possesses an innate cognitive potential. Subject to the right environmental and educational conditions, this latent potential can be developed to reach a more perfect state of intellect, morality, character and conduct. The beautiful soul is an aesthetic concept focused on developing human capacities and advancing knowledge and culture. It entails the pursuit of personal cultivation to create a convergence of the individual aesthetic impulse with a collective ethical ideal. The beautiful soul is a virtuous soul, one that possesses a sense of justice, pursues wisdom, and practises benevolence through an aestheticised proclivity for the ‘good’.

Inspired by ancient Greek philosophy, the beautiful soul reflects Plotinus’ imperative to cultivate the self in the same way that the sculptor works:

Withdraw within yourself, and examine yourself. If you do not yet therein discover beauty, do as the artist, who cuts off, polishes, purifies until he has adorned his statue with all the marks of beauty. Remove from your soul, therefore, all that is superfluous, straighten out all that is crooked, purify and illuminate what is obscure, and do not cease perfecting your statue until the divine resplendence of virtue shines forth upon your sight …

Sculpting the soul and creating what Goethe referred to as ‘a more beautiful humanity’ is achieved through the internalisation of the Platonic triad of beauty, truth and goodness. Beauty is conceived as the integration of intellectual and aesthetic faculties in the encounter with art and nature. Truth is the result of the logical exercise of rational faculties and the elevating sense of curiosity derived from experiences in the world. Goodness is found in the human capacity to feel compassion for others and thereby contribute to the betterment of society.

The Platonic triad is realised within the soul by exploring ideas through lived experiences, not by blindly following abstract principles or dogma dictated by a church or political system. The concept requires that the individual actively engage her senses to navigate the material world in which beauty acts as her guide. The ineluctable indeterminateness of aesthetic, sensory experience is precisely what makes it valuable in expanding one’s consciousness in order to explore the ultimate questions of reality. Watching a lark’s parabolic trajectory in the sky, observing the fractal patterns found in nature, contemplating the concentric circles produced by rain droplets in pools of water become opportunities to understand the universe and reach a heightened cognitive-affective state. As Goethe observed: ‘A man should hear a little music, read a little poetry, and see a fine picture every day of his life in order that worldly cares may not obliterate the sense of the beautiful which God has implanted in the human soul.’

The concept affirms that, in its universality, beauty offers a means of engaging with the world, providing a common basis upon which positive social relationships can be developed, acting as a lexicon for communicative exchange. Since it is a natural human inclination to share sensory experiences, beauty provides an opportunity to bond individuals in a moment of ultimate meaning, conveying ineffable feelings that cut to the core of existence. By opening one’s perceptual horizons, a person is elevated beyond ego and self-absorption into a realm of universal concern and contemplation. Beauty achieves the good by strengthening faculties of empathy that induce deeper compassion for others and attentiveness to the wellbeing of the social collective. Thus, the marriage of the beautiful, the true and the good is for the beautiful soul not merely the metaphysical meditation of antiquity but the very basis of a more just and equitable society.

Although the philosophy was never realised in the way that its theorists envisioned, the beautiful soul is far more than a beautiful idea. In turning towards aesthetics, the philosophers of the German Aufklärung (Enlightenment) did not naively evade political realities. Instead, they offered a holistic theory that recognised the long-term horizon for the flourishing of reason and human understanding. In doing so, they developed a poetic conception of politics that took inspiration from ancient Greek notions of an aesthetic state. In working towards her own self-improvement and fearlessly venturing into society, the beautiful soul was a revolutionary figure, at the vanguard of Enlightenment progress.

Self-cultivation was not an idle, vainglorious pursuit of the wealthy, but rather a radical reformulation of what it meant to be human and how to harmoniously exist in society. The beautiful soul anticipated the problems of instrumental reason, overcoming the dangers of mere utility, disenchantment and social isolation by offering an aesthetic world view that facilitated positive human interactions and a multidimensional understanding of human experience. She epitomised Enlightenment values of equality, fraternity and rationality, serving as the model of a citizen who lived up to the responsibilities associated with democracy.

The contemporary turn towards nihilism that lionises the individual at the expense of the collective has made the idea of cultivating a more beautiful soul appear hopelessly idealistic and disconnected from ‘hard realities’. In a realist’s world, we seek utilitarian ends under the guise of pragmatism, turning away from the illusiveness of an immaterial and ultimately unattainable ideal. The mystery and poetry of human nature have been stripped from our daily experience at the expense of our imaginations and our will to envision a more beautiful world. Yet, the social and environmental ills induced by our unfettered economy of instrumentality are proving anything but pragmatic for the long-term sustainability and wellbeing of our species. If we still harbour hope in the human propensity for goodness, then we ought to contemplate anew the poetic, revolutionary figure of the beautiful soul that might once again provide a vision for deepening our intellectual, moral and emotional faculties in the service of a more just and progressive future for us all.

This article was originally published at Aeon and has been republished under Creative Commons.

Justine Kolata is the founder and director of The Public Sphere, and the co-founder and co-director of The Bildung Institute. She is currently pursuing a PhD in the German department at the University of Cambridge on enlightenment salon culture.

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

Before You Can Be With Others, First Learn to Be Alone, by Jennifer Stitt

In 1840, Edgar Allan Poe described the ‘mad energy’ of an ageing man who roved the streets of London from dusk till dawn. His excruciating despair could be temporarily relieved only by immersing himself in a tumultuous throng of city-dwellers. ‘He refuses to be alone,’ Poe wrote. He ‘is the type and the genius of deep crime … He is the man of the crowd.’

Like many poets and philosophers through the ages, Poe stressed the significance of solitude. It was ‘such a great misfortune’, he thought, to lose the capacity to be alone with oneself, to get caught up in the crowd, to surrender one’s singularity to mind-numbing conformity. Two decades later, the idea of solitude captured Ralph Waldo Emerson’s imagination in a slightly different way: quoting Pythagoras, he wrote: ‘In the morning, – solitude; … that nature may speak to the imagination, as she does never in company.’ Emerson encouraged the wisest teachers to press upon their pupils the importance of ‘periods and habits of solitude’, habits that made ‘serious and abstracted thought’ possible.

In the 20th century, the idea of solitude formed the centre of Hannah Arendt’s thought. A German-Jewish émigré who fled Nazism and found refuge in the United States, Arendt spent much of her life studying the relationship between the individual and the polis. For her, freedom was tethered to both the private sphere – the vita contemplativa – and the public, political sphere – the vita activa. She understood that freedom entailed more than the human capacity to act spontaneously and creatively in public. It also entailed the capacity to think and to judge in private, where solitude empowers the individual to contemplate her actions and develop her conscience, to escape the cacophony of the crowd – to finally hear herself think.

In 1961, The New Yorker commissioned Arendt to cover the trial of Adolf Eichmann, a Nazi SS officer who helped to orchestrate the Holocaust. How could anyone, she wanted to know, perpetrate such evil? Surely only a wicked sociopath could participate in the Shoah. But Arendt was surprised by Eichmann’s lack of imagination, his consummate conventionality. She argued that while Eichmann’s actions were evil, Eichmann himself – the person – ‘was quite ordinary, commonplace, and neither demonic nor monstrous. There was no sign in him of firm ideological convictions.’ She attributed his immorality – his capacity, even his eagerness, to commit crimes – to his ‘thoughtlessness’. It was his inability to stop and think that permitted Eichmann to participate in mass murder.

Just as Poe suspected that something sinister lurked deep within the man of the crowd, Arendt recognised that: ‘A person who does not know that silent intercourse (in which we examine what we say and what we do) will not mind contradicting himself, and this means he will never be either able or willing to account for what he says or does; nor will he mind committing any crime, since he can count on its being forgotten the next moment.’ Eichmann had shunned Socratic self-reflection. He had failed to return home to himself, to a state of solitude. He had discarded the vita contemplativa, and thus he had failed to embark upon the essential question-and-answering process that would have allowed him to examine the meaning of things, to distinguish between fact and fiction, truth and falsehood, good and evil.

‘It is better to suffer wrong than to do wrong,’ Arendt wrote, ‘because you can remain the friend of the sufferer; who would want to be the friend of and have to live together with a murderer? Not even another murderer.’ It is not that unthinking men are monsters, that the sad sleepwalkers of the world would sooner commit murder than face themselves in solitude. What Eichmann showed Arendt was that society could function freely and democratically only if it were made up of individuals engaged in the thinking activity – an activity that required solitude. Arendt believed that ‘living together with others begins with living together with oneself’.

But what if, we might ask, we become lonely in our solitude? Isn’t there some danger that we will become isolated individuals, cut off from the pleasures of friendship? Philosophers have long made a careful, and important, distinction between solitude and loneliness. In The Republic (c380 BCE), Plato proffered a parable in which Socrates celebrates the solitary philosopher. In the allegory of the cave, the philosopher escapes from the darkness of an underground den – and from the company of other humans – into the sunlight of contemplative thought. Alone but not lonely, the philosopher becomes attuned to her inner self and the world. In solitude, the soundless dialogue ‘which the soul holds with herself’ finally becomes audible.

Echoing Plato, Arendt observed: ‘Thinking, existentially speaking, is a solitary but not a lonely business; solitude is that human situation in which I keep myself company. Loneliness comes about … when I am one and without company’ but desire it and cannot find it. In solitude, Arendt never longed for companionship or craved camaraderie because she was never truly alone. Her inner self was a friend with whom she could carry on a conversation, that silent voice who posed the vital Socratic question: ‘What do you mean when you say …?’ The self, Arendt declared, ‘is the only one from whom you can never get away – except by ceasing to think.’

Arendt’s warning is well worth remembering in our own time. In our hyper-connected world, a world in which we can communicate constantly and instantly over the internet, we rarely remember to carve out spaces for solitary contemplation. We check our email hundreds of times per day; we shoot off thousands of text messages per month; we obsessively thumb through Twitter, Facebook and Instagram, aching to connect at all hours with close and casual acquaintances alike. We search for friends of friends, ex-lovers, people we barely know, people we have no business knowing. We crave constant companionship.

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life. Before we can keep company with others, we must learn to keep company with ourselves.

~ Jennifer Stitt is a graduate student in the history of philosophy at the University of Wisconsin-Madison. Bio credit: Aeon

This article was originally published at Aeon and has been republished under Creative Commons.

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!

The Love of Possession is a Disease With Them

Lakota giveaway ceremony, photo origin unknown

In my recent readings in the history of the Lakota and other native peoples of America’s Great Plains, I’ve been struck by descriptions of their giveaway ceremonies. They remind me of another practice I had learned of before, and which I believe is more generally familiar: the potlatch, a related custom practiced by Native Americans of the Northwest. Potlatches generally came with strict expectations of giving the gifts away again promptly, and then some. These exchanges cemented power relations and were often aggressively competitive; they’re better understood as tactical, sociopolitical transactions rather than simple acts of generosity.

Lakota giveaway ceremonies, however, are much more altruistic in the sense that we commonly understand the term. The gifts are given freely with no expectation of payback; in fact, the resulting impoverishment itself is a badge of honor. That’s why I chose a quote by Sitting Bull, the great Hunkpapa Lakota chief, to introduce this essay. He once illustrated the contrast between Lakota and white attitudes towards property by telling how his poverty aroused the admiration of his people, rather than the disdain most white people feel toward such a state. To those who share Sitting Bull’s impression of the invaders of his homeland, the driving need to amass and own material goods can be a sign of spiritual poverty.

Today’s United States, like those nations most similar to her in culture and economy, is very much not characterized by that less-is-more spirit. This is nothing new. The United States and Canadian governments’ historical prohibitions on giveaway ceremonies in vanquished tribes indicate that Sitting Bull’s characterization of white culture describes something that’s been around for quite a while. These governments viewed giveaway ceremonies as a challenge to the enthusiasm for a market-driven type of productive cooperation they wished to instill in the nations they conquered. These and other Western societies (derived from Europe) have been centered on the production, acquisition, accumulation, and display of goods, particularly since the industrial age. This is reflected in our values, our mores, our politics, our language, our cultural attitudes, the ways we celebrate holidays and major life events, and even, increasingly, our religious and spiritual practices.

The free market system, a new style of trade that Adam Smith characterized as the best for improving the lives of all human beings most efficiently, has indeed instilled many good practices and attitudes. For example, we’re less likely to see other nations and cultures as enemies when we cultivate relationships as trading partners; we see the effects of this change in international relations and in the relative peacefulness of the modern world compared to societies which practiced the old feudal and mercantilist systems. We also see that more people throughout the world now live longer, more comfortable lives than ever before, as the market incentivizes and drives innovation to respond more efficiently to demand. But there have always been downsides to free markets too: the slave trade; trade wars; colonialism; the invasion and confiscation of indigenous lands; the immiseration of working people in squalid industrial towns and dismal factories; price- and wage-fixing by trusts and monopolies; and vast inequalities in wealth and in chances of success are but a few examples. Such practices and inefficiencies are not merely excesses or abuses perpetrated by a few bad actors: they are regular and expected outcomes of a system whose purpose is to maximize profit and come out ahead of everyone else.

And now, we see that market values have pervaded all levels of our consciousness, our self-conception of who we are and how we should best inhabit our world. As philosopher Michael Sandel describes it, we have gone from having a market economy to being a market society. The way we live, think, and feel is pervaded by consumerism. We’ve become buyers and sellers to the extent that we have become products ourselves, marketed and commodified, valued in work and in life insofar as we present ourselves the right way, are seen in the right places, wear the right brands and styles, drive the right cars, and use the right products.

And this has led us to a new problem, one unimaginable to John Locke, Adam Smith, and others who developed the theories about property rights and the benefits of open markets that we take for granted today. Human societies were relatively small then, and the uninhabited regions and untapped resources of the world seemed vast, even endless by comparison. It’s very different today. The population of the world has grown so large, and our technological ability to produce goods from raw materials has become so varied, efficient, and prolific, that our ingrained habits of making, amassing, and consuming voraciously are leading us to a crisis of mass waste, pollution, and climate change.

The pollution problem can be seen as the modern corollary of Thomas Malthus’ 1798 theory that human reproduction would inevitably outstrip food production, leading to mass impoverishment. Though Malthus’ ideas had long gone out of fashion with advancements in agricultural technology and the widespread use of birth control, he’s enjoying a bit of a comeback. However long technology can stave off many of the ill effects of exponential population growth, the earth’s habitable surface and its ability to produce what we need to survive (let alone to live well) are finite nonetheless. This is also true of our atmosphere’s ability to absorb the off-gassing of our industries without changing our biosphere’s ability to sustain the life it gave rise to. Over the decades and centuries, concerns about human impact on the natural world and its life-sustaining resources have swung from optimism that we can and will create new technology and social practices that will solve everything, to worry that we won’t be sufficiently motivated or innovative in time to stave off the destruction of our own habitat.

In years past, working at a recycling and salvage operation, I observed a part of the massive flow of waste we generate, much of it perfectly good stuff we just throw away. The sheer volume of it all haunts me still. Photo of Amy Cools by Stephen Loewinsohn for the East Bay Express

Beginning with Rachel Carson’s 1962 book Silent Spring, environmental consciousness has become ever more pervasive across the political spectrum. But it seems that ecological responsibility is still an ideal that has not yet changed our behavior except in a few token ways. Even progressive, self-consciously ‘green’ micro-cultures, such as that of the San Francisco Bay Area where I live, generally consume and discard on a very large scale. There’s a strong market here for innovations in green products such as compostable and reusable utensils and packaging, recycled fiber and bamboo clothing, energy-efficient technology, and more. Some of this technology replaces other arrays of products such as CDs, books, ledgers, pens and pencils, camera film, landline telephones, and so on, and could in time reduce the amount of stuff made. Yet new generations and styles of products replace the old ones almost as often and quickly as they are introduced, and the things which the new products replace in turn become trash; in the case of technology, particularly toxic trash. There are recycling programs, to be sure, but they don’t keep up with the volume of discards, and the recycling process itself can be toxic. And the packaging which cocoons every fashionable new product and every new gadget adds to the deluge. Take-out meal services and ready-to-make meals in a box are ever-increasing in popularity, every breakfast, lunch, and dinner wrapped in a soon-to-be wad of trash. Newly ubiquitous reusable shopping bags and thinly-walled plastic bottles do little in the face of this accelerating volume of throwaway goods and conveniently, disposably-packaged everything.

What does all this mean on a planet now so dominated by humans, materialistic, energetic, intelligent, creative, productive, and exponentially-reproductive?

It does seem that our love of possession is a disease with us, and not just in the moral and spiritual sense that Sitting Bull refers to. It’s become something palpable, something we see before our eyes, that we breathe in, that we swim among. It shares characteristics of cancer, growing, proliferating, and invading at an accelerating rate, and we likewise still seem powerless to stop it. And the gases from the production and decay of all this stuff are changing the climate from the one that gave rise to the evolution of, and now sustains and nurtures, the plants and animals that give us life.

So what do we do? How do we divert or change this deeply ingrained cultural habit, this seemingly unstoppable force that we’ve unleashed?

I think about that other thing Sitting Bull said, about his people respecting him not because he owned many things in the way valued by white people, but because he kept little for himself. What, then, if we shifted our values? What if we began to regard the need to compulsively and conspicuously consume stuff as crass, as burdensome, as uncool, as unenlightened, even as pitiable?

This isn’t necessarily as unlikely or even as unimaginable as it might seem. We often take for granted that our love and pursuit of stuff is an immutable trait of the human psyche. Yet, that’s not the case, as evidenced by cultural and spiritual mores that differ widely; we can look to the surprise and disgust of Sitting Bull and his people when encountering the white invaders’ greed for gold, land, and buffalo hides. There is an idea from Japanese culture, mottainai, which has deep roots and is growing again in popularity. This complex and hard-to-translate idea includes a reverence for objects and the value of frugality, both of which preclude the wasteful, polluting consumerist practices of modern market societies. And there are many more cultural and spiritual traditions of long standing in which the possession of more goods than needed is considered a negative.

Asceticism is an extreme variety of this less-is-more value, an ancient tradition in which one seeks to reach the highest levels of spiritual perfection by divesting oneself of all or most material goods and comforts. There is also the culture of the traveler and world citizen, those who own little since having too many things to haul around gets in the way of opportunities for adventure. And there is a modern fad, admittedly a rather niche conceit of those with higher incomes, of living in tiny, design-heavy, super-efficient homes, reducing one’s personal possessions to the most utilitarian minimum.

However, these ways of life, admired and admirable as they can be, are not workable for most people. Except for asceticism, they are also unaffordable for most, and none of them work for those who have families to care for, those who are elderly or disabled, and so on. What of the least wealthy among us, those who must opt for the cheaper products regardless of whether they’ll wear out and become trash sooner? And what about the joy of shopping for stuff, new and novel things that relieve the monotony and stress of an ordinary working life? Even in this realm of life, however, we do have an awareness that the short-term fun of buying stuff can lead to long-term unhappiness. For example, the extremes of material consumption, hoarding and compulsive shopping, are widely considered destructive and unhealthy, if not forms of mental illness. This sense of the unhealthiness of having too much stuff can be gradually extended to include things that we might sorta like at first but realize we won’t use much or care about for long. Over time, we can acculturate ourselves to fewer but higher quality things, and better yet, come to value publicly owned goods more highly: parks, museums, public beaches, public buildings, and hopefully in the future, more community- and government-owned public amusement centers such as skating rinks, gyms, arcades, and so on.

Sitting Bull and his family, 1881

And while it might seem too difficult to inculcate the value of less-is-more, we can remember that many deeply-ingrained cultural values and habits have been purposely and quickly shifted. The right of gay people to marry and enjoy other equal benefits of society is now generally taken for granted, when only two decades ago legal gay marriage was unimaginable to most. Smoking has come to be widely considered unhealthy and a public nuisance through just a few decades of education, public awareness campaigns, and taxation. Bullying, racist and sexist slurs, discriminatory practices, and many, many other bad habits, once so culturally pervasive, are no longer respectable.

While shopping and owning a lot of stuff might not seem to be habits as bad as any of the above, I believe that we’ll soon recognize that they might be. Now that there are so many of us in the world, we can no longer consider ourselves as morally responsible beings only as individuals when it comes to the health of our environment. With well over seven billion people on the earth and increasing exponentially, we are now responsible to each other in the ways our actions contribute to the aggregate effects. Let’s make the effects of our presence on the earth not resemble those of a disease. Let’s instead make them more aligned with mottainai by treating the earth as the most precious object there is; more akin to the role of earth-steward that the God of Genesis called on his human creation to be; more akin to Sitting Bull and his generous less-is-more spirit. Our physical and spiritual health and our very lives depend upon it.

*Listen to the podcast version here or on Google Play, or subscribe on iTunes

~~~~~~~~~~~~~~~~~~~~~~

Sources and Inspiration

Auxier, Randall. ‘Indian Givers’, Nov 15, 2013. Radically Empirical blog

Blaisdell, Robert (ed.). Great Speeches by Native Americans. NY: Dover, 2000.

Bruchac, Joseph. ‘Sacred Giving, Sacred Receiving‘, June 20, 2016, Parabola

Cole, Douglas and Ira Chaikin. An Iron Hand upon the People: The Law Against the Potlatch on the Northwest Coast. Vancouver: Douglas & McIntyre, 1990 (PDF download)

Her Many Horses, Emil. ‘A Song for the Horse Nation: Remembering Lakota Ways‘. From A Song for the Horse Nation, edited by George P. Horse Capture (A’aninin) and Emil Her Many Horses

Jackson, Joe. Black Elk: The Life of an American Visionary. New York: Farrar, Straus, and Giroux, 2016.

‘Mottainai: a Philosophy of Waste’. August 2015. Interview and discussion with Kevin Taylor by Joe Gelonesi for The Philosopher’s Zone, a podcast of Radio National, Australia.

Pettipas, Katherine. Severing the Ties that Bind: Government Repression of Indigenous Religious Ceremonies on the Prairies. Winnipeg: University of Manitoba Press, 1994.

‘Rachel Carson’, American Experience, PBS, April 18, 2010

Roth, Christopher E. ‘Goods, Names, and Selves: Rethinking the Tsimshian Potlatch‘, American Ethnologist, Vol. 29, No. 1 (Feb., 2002), pp. 123-150

Sandel, Michael. What Money Can’t Buy: The Moral Limits of Markets. New York: Farrar, Straus and Giroux, 2012.

‘Sitting Bull’. Encyclopædia Britannica, April 21, 2017

‘Thomas Malthus’. Encyclopædia Britannica.

Is it Moral to Respect the Wishes of the Dead, Above the Living? By Barry Lam

Imagine what a country would be like if every person could secure a vote in elections that happened after their death. If you stated your preferences in your will, you could execute a vote for the conservative, liberal, Asian, or White Separatist candidate, in every election, in perpetuity, and your vote would compete with the votes of the living. Imagine that a legal structure were erected to execute the wishes of the dead, and that the law would side with the dead even when their wishes conflicted with the needs of the living, or with the wellbeing of future generations.

We have overwhelmingly good moral reasons to reject such a society. We believe that with death comes the loss of the right to influence the political institutions of the living. Yet this kind of moral clarity disappears as soon as we move from politics to wealth. There is a huge industry dedicated to executing the wishes of human beings after their death. Through endowments, charitable trusts, dynasty trusts, and inheritance law, trillions of dollars in the US economy and many legal institutions at all levels are tied up in executing the wishes of wealthy people who died long ago. The UK does not fall far behind. As wealth inequality increases, the wealthy today are earmarking large amounts of money from the future economy to carry out their current wishes. The practice is so deeply ingrained in the culture of elite institutions, and such a ubiquitous feature of life, that only in obscure journals in law and philanthropy does anyone express concern about the justice of the practice.

In the US, the wealthy continue to own and grow wealth after their death, and the state can enforce the spending wishes of the dead in many ways. For instance, you may require, as a condition of inheritance, that your grandchildren marry within a religious faith, or that a school be named after you, forbidding a change in name even if the school would otherwise go bankrupt. Alternatively, an individual may secure current and future wealth in a tax-sheltered trust only for descendants, where the money can both grow and be shielded from creditors in perpetuity. A third legal instrument is the charitable trust, where the dead can earmark current and future wealth to some particular purpose considered ‘charitable’ where such purposes are now broad enough to include anything from the care of abandoned guinea pigs to the preservation of Huey military aircraft. Non-profit institutions such as hospitals, museums and universities can have large amounts of their spending constrained by the wishes of dead donors, such as that there be an endowed professorship for the study of parapsychology, or that a certain wing must be set aside for housing individuals of Confederate ancestry.

These practices are, on reflection, quite puzzling. Ideas about what is good to do in the world ought to change with the changing conditions of the world. Funding cancer research is good only in a world in which there is cancer. Giving distant descendants enormous amounts of wealth is good only if they are not sociopaths. And yet, we allow such power to those who are no longer around to know about the world, and who cannot be harmed or benefited any longer from such spending.

In fact, the idea that the dead could lose their rights to control the future is familiar in our moral lives, and this idea gets reflected elsewhere in the law. The state does not enforce your desire that your spouse not remarry. Even if your spouse promises this to you on your deathbed, it would not be illegal for her to break this promise. Businesses do not feel obligated to carry out the wishes of their now-dead founders, even if those founders had strong preferences about the future of the business. These kinds of posthumous desires carry little weight in our deliberations about what we should do now, and we certainly do not erect legal institutions to enforce these kinds of preferences.

However, when it comes to the wishes of the dead with respect to their personal wealth, we grant them many rights. And when you start adding up the wealth tied to the dead, the amount is staggering, likely in the trillions. The current state of wealth inequality, together with the ongoing practice of honouring the wishes of the dead, could result in a future economy that reflects the preferences of a past aristocracy rather than those of the living majority. Respecting the wishes of the dead can lead to serious intergenerational economic injustice.

William Shakespeare’s last will and testament

The irony of our current practices is that we the living are to blame for sabotaging our own wellbeing. The dead would not be around to complain if we changed these practices; these are our institutions, and any pain we inflict on ourselves by being made worse off by the preferences of the dead cannot honestly be held against the dead. We do not need perpetual trusts to incentivise spending for charitable purposes. Many philanthropists today, such as Bill Gates, understand that there is greater charitable impact from spending done within one’s lifetime; this is the foundation of the Giving Pledge.

So why do we continue to give the dead such eternal rights? I believe we honour the wishes of the dead out of a misplaced sense of moral duty, as we would feel if we made a deathbed promise to a loved one. But deathbed promises are not unconditional or eternal, nor must they be satisfied at serious self-interested, financial, or moral cost to the living. They are, instead, a lot like living promises. If I promise my child some candy but, through no fault of my own, the only available candy must be acquired at serious moral cost to some current candy-owner, it is not morally obligatory to fulfil this promise. A promise itself holds some moral weight, but not overriding moral weight.

Another reason we do this is that we have a self-interested desire that our own interests and values be preserved by future people after our own death, for fear that we will otherwise disappear from the world without any legacy of influence. We overcome this existential fear by permitting institutions to honour the wishes of the dead, thereby guaranteeing a place for our own wishes in the future. But it is time to recognise the vanity and narcissism of the practice, and do what is actually best for the living, which is to have the living determine it for themselves.

This article was originally published at Aeon and has been republished under Creative Commons.

Barry Lam is associate professor of philosophy at Vassar College, New York state, and is Humanities-Writ Large fellow at Duke University in North Carolina. He hosts and produces the philosophy podcast Hi-Phi Nation, and lives in Poughkeepsie, NY. (Bio credit: Aeon)

* Note: Barry’s podcast episode on this same topic is excellent; I highly recommend you give it a listen!

~ Ordinary Philosophy is a labor of love and ad-free, supported by patrons and readers like you. Any support you can offer will be deeply appreciated!

*All views and opinions expressed by guest writers are their own and do not necessarily reflect those of Ordinary Philosophy’s editors and publishers

‘Free Speech’ is a Blunt Instrument. Let’s Break It Up, by Robert Simpson

April 15, 1967, Spring Mobilization to End the War, San Francisco

Free speech is important. It guards against governments’ dangerous tendency to repress certain kinds of communication, including protest, journalism, whistleblowing, academic research, and critical work in the arts. On the other hand, think of a doctor dispensing bogus medical advice, or someone making a contract that she plans to breach, or a defendant lying under oath in court. These all involve written or spoken statements, but they don’t seem to fall within the domain of free speech. They are what the legal theorist Frederick Schauer at the University of Virginia calls ‘patently uncovered speech’: communication that warrants no special protection against government regulation.

However, once we extrapolate beyond the clear-cut cases, the question of what counts as free speech gets rather tricky. A business whose website gets buried in pages of search results might argue that Google’s algorithm is anti-competitive – that it impedes fair competition between sellers in a marketplace. But Google has dodged liability by likening itself to a newspaper, and arguing that free speech protects it from having to modify its results. Is this a case of free speech doing its proper work, or an instance of free speech running amok, serving as cover for a libertarian agenda that unduly empowers major corporations?

To answer this question, we need a principled account of the types of communication covered by free speech. But attempts to provide such an account haven’t really succeeded. We can pick out cases on either side of the divide – ‘Protections for journalism and protest? Yes! For perjury and contracts? No’ – but there aren’t any obvious or natural criteria that separate bona fide speech from mere verbal conduct. On the contrary, as theorists have told us since the mid-20th century, all verbal communication should be understood as both speech and conduct.

Some authors see these definitional difficulties as a fatal problem for the very idea of free speech. In There’s No Such Thing as Free Speech: And It’s a Good Thing Too (1994), the American literary critic and legal scholar Stanley Fish argued that ‘free speech’ is really just a rhetorically expedient label that people assign to their favoured forms of communication. There’s a grain of truth in this; but it doesn’t change the fact that governments still have a tendency to repress things such as protest and whistleblowing, and that we have good reasons to impose institutional safeguards against such repression if possible.

Instead of throwing out free speech entirely, a better response might be to keep the safeguards but make their sphere of application very broad. This is roughly what happens in Canadian law, where nearly any type of conduct can fall within the constitutional ideal of ‘free expression’, provided that it is trying to convey some kind of meaning. The downside is that if nearly anything can qualify as ‘expressive’ in the relevant sense, then we cannot categorically privilege expression itself as an inviolable norm. All we can ask lawmakers to do is factor in the interests that such expression serves, and try to strike a balance with all the other, competing interests (such as ‘equality’, for example, or ‘national security’). While such trade-offs are standard in Commonwealth legal systems, they have the unwelcome effect of making it easier for governments to justify their repressive tendencies.

I’d propose a third way: put free ‘speech’ as such to one side, and replace it with a series of more narrowly targeted expressive liberties. Rather than locating actions such as protest and whistleblowing under the umbrella of ‘free speech’, we could formulate specially tailored norms, such as a principle of free public protest, or a principle of protected whistleblowing. The idea would be to explicitly nominate the particular species of communication that we want to defend, instead of just pointing to the overarching genus of ‘free speech’. This way the battle wouldn’t be fought out over the boundaries of what qualifies as speech, but instead, more directly, over the kinds of communicative activities we think need special protection.

Take the idea of public protest. Standard free-speech theory, concerned as it is with what counts as speech, tends to draw a line between interference based on the content of the speech, such as the speaker’s viewpoint (generally not allowed), and interference that merely affects the time, place and manner in which the speech takes place (generally allowed). But this distinction runs into trouble when it comes to protest. Clearly governments should be blocked from shutting down demonstrations whose messages they oppose. But equally they shouldn’t be able to multiply the rules about the time, place and manner in which demonstrations must take place, such that protests become prohibitively difficult to organise. One reason to have a dedicated principle of free public protest, then, is to help us properly capture and encode these concerns. Instead of seeing demonstrations as merely one application of a generic free-speech principle, we can use a narrower notion of expressive liberty to focus our attention on the distinctive hazards faced by different types of socially important communication.

If this all seems a bit optimistic, it’s worth noting that we already approach some types of communication in this way – such as academic freedom. Universities frequently come under pressure from political or commercial lobby groups – such as big oil, or the Israel lobby – to defund research that runs counter to their interests. This kind of threat has a distinctive underlying causal mechanism. In light of this problem, universities safeguard academic freedom via laws and regulations, including guidelines that specify the grounds for which academics can be fired or denied promotion. These moves are not just a specific implementation of a general free-speech principle. They’re grounded in notions of academic freedom that are narrower than and distinct from freedom of speech. My suggestion is that all our expressive liberties could be handled in this way.

The subdivision of expressive liberties isn’t going to magically fix all the genuinely controversial issues around free speech, such as what to do about search engines. However, we don’t need to resolve these debates in order to see, with clarity and confidence, that protest, journalism, whistleblowing, academic research and the arts need special protection. The parcelled-out view of expressive liberties captures the importance of these activities, while sidestepping the definitional problems that plague standard free-speech theory. These are not merely theoretical advantages. Any time a country is creating or revising a bill of rights, the question of how to protect communicative practices must be considered afresh. Multiple expressive liberties is an approach worth taking seriously.

This article was originally published at Aeon and has been republished under Creative Commons.

Robert Simpson is a lecturer in philosophy at Monash University in Melbourne, Australia. He writes regularly about social and political philosophy. (Bio credit: Aeon)

~ Ordinary Philosophy is a labor of love and ad-free, supported by patrons and readers like you. Any support you can offer will be deeply appreciated!

*All views and opinions expressed by guest writers are their own and do not necessarily reflect those of Ordinary Philosophy’s editors and publishers


Raconteur Street Blues, by East Street Prophet

Painting on a wall, photo by East Street Prophet at 518 Song of My People

I grew up around some of the great narcissists of our time. History won’t remember them, so I have to. They were great storytellers, who forged a knack for survival into an unequivocal hunger to live like kings. They spoke of riches and wealth that they couldn’t have possibly known, yet painted a picture so alluring we had no choice but to believe. They were raconteurs, wizards possessed of a singular illusion that painted the world in their image and presented it to us, as if it were ours.

A raconteur is a person who excels in telling anecdotes. Also, an anecdote (please note: I don’t want to insult anyone’s intelligence; I mean to provide clarity) is a usually short narrative of an interesting, amusing, or biographical incident. A raconteur is a great storyteller. I’ve always considered the word to be closer to ‘being a good bullshitter’, which is worth its weight in gold. Anyone can tell a story, but getting people to care is a miracle akin to walking on water.

Storytellers are plentiful. You can see them in coffee shops behind laptops, biding their time until they have a chance to share, connect and separate. It’s in that singular moment, where we connect, that things change. They can become dangerous at a moment’s notice, as they infect your mind with complex riddles that the storytellers have been working on since the dawn of time. You might wonder, ‘Why would a person share such a riddle?’ But you can’t think like that. It’s how any good storyteller wants you to think. They want you to assume they have no reason to hurt you. There’s no harm in believing what they believe. There’s no harm in believing them without question.

The thing that all decent ‘raconteurs’ must ask themselves periodically is ‘Do I care more about myself than I do about the story?’ I’ve lived among some of the great bullshitters of modern history. We heard plenty of stories growing up, yet so few of them added up in a way that could make me care. The raconteurs possessed this trait that added depth to their stories, not just with what images they infused, but with how they made us feel. We felt involved. They tugged on our heartstrings and moved us toward an end that we couldn’t see. They possessed our future, as we waited for these mindless heathens to comb through the vast wasteland of their psyches in search of an end to whatever narrative they were painting.

Any good story comes from a single point. It’s not the beginning. It’s just a point. They wanted to make a point. They’d lie about having sex, so they’d present a narrative that made the possibility seem plausible. They’d plant a few mental images here and there, forming past and future around this premise. Ultimately, their goal was to forge a real, however unlikely, narrative in order to make us believe.

The raconteurs believed what they said. The proof was in their words. They told us to take it from there, because taking a man at his word is as good as taking it in blood… at least when you’re a child. When we were kids, we lied, and it helped. We had impossible things to accomplish in a collapsing world full of poverty and the imminent threat of some incomprehensible bullshit. We had to hide sensitive information from our parents, while taking advantage of our God-like inertia, limitless energy and simple-mindedness. We had to prove to other kids that we were cool, while, at the same time, making our parents think we’d never do the cool things that get you into trouble. It added to our personal mystique; having accomplished nothing, we needed something to set us apart. We’d lie about drinking and drugs, losing our virginity, feats of the utmost stupidity… you know… harmless bullshit.

Truth is the trickiest thing. Everyone says they want it, but when it’s not something they agree with, they have a reaction that makes you wonder. Truth. It’s a funny thing, because I could write out the truth as I see it and (hopefully) half of you would love me and the other half would hate me. The trick for any good raconteur is understanding the right formula, while having as full an understanding as you can of the truth. I believe that you can’t write a decent story, even if it sounds like nonsense, without a sense of truth. It has to be written, spoken and lived with conviction. Truth has to appear in every word, exactly as you’ve seen it, while managing not to conflict with the truth, as it is. You should, as a good storyteller, align yourself with the truth in order to make your narrative more honest and compelling.

I never thought about truth when I was young enough to fall for these stories. The morality of lying, as one presents it to himself, so that he might further his ends, has become all the more staggering as I’ve reached adulthood. I’ve been trying to think of the right way to word this question. I doubt it’s perfect, but it needs to be asked. I’m curious as to what everyone believes:

Can you have a moral premise without any evidence?

Some raconteurs have no regard for the truth. In all honesty, as a kid I didn’t care. I was surrounded by some of the greatest storytellers of my time. I couldn’t be bothered to figure out how some of these impossible stories could be real. I believed with all my heart, because I was a stupid kid who still believed in Santa. (FYI I believed in ghosts for longer than I believed in Santa, but I also assumed the ghosts would grant a wish or needed my help or whatever.) These are men who have learned to lie in a way that ‘everyone believes that you believe what you say’. You believe them, no matter the evidence to the contrary, because they, not their narrative, hold up well against the barrage of truth that assaults them on all sides.

They’re not unsympathetic characters. Their truth is a depressing harangue of emotion and pain that most couldn’t understand. What’s worse, they keep it to themselves. They keep it! They hide all that pain and suffering, but even more, they hide the truth! They move with such intent when they tell their stories, as if revealing a deeper, more significant wisdom, while simultaneously hiding it from the world. It’s in their emphatic gestures, their movements, as if their bodies shift depending on the tone of their narratives, not to mention their eyes… it’s in all these things that those of us who were forced to listen HAD to believe.

We believed it all the more, because we lived it. They borrowed from our lives and, in this way, we added to the false narrative. Storytelling is a necessary skill. It made us feel good in a time when people were laughing at us, because our river was full of poison and visitors had no reason to… visit. The pain of being alive could’ve shown itself in crime and self-abuse. For us, it showed itself in acceptance of nonsensical bullshit and downright lies.

Near-possible realities were a simple narrative that captured our attention, which begs the question: why do they need our attention? Evil raconteurs are like evil yogis. You can assume they don’t exist, as if there is no darkness when there is also light, but this is another simple narrative that’s easy to digest. The simple narrative is used to ensnare. You don’t need to talk about angels to be a good raconteur. You have to make people believe. This is that much more significant. You MAKE people believe. You take them on a journey, where they start out as a skeptic and then, through a few twists and turns… holy shit… you just made someone believe in angels.

(Also, if you don’t make them believe, you at least allow them to suspend reality for a time, which is kinda the same, although I admit there are differences.)

Making people believe and sharing with them a deeply personal truth is about as different as water and oil.

For what it’s worth, they thought they were kings, but that never stopped them from fighting to become that oh-so-desirable and unquestioned ruler of the universe. They lied and stole and fought, but to me the stories became all the more touching. These people, the raconteurs, were at war with themselves, with the truth, and with a circumstance of poverty and extreme depravity, which was plentiful in our ever-collapsing society. They fought for freedom: the freedom to be as insane and harmful to yourself as you can get. They fought to make the world a weird place.

Originally published at 518 – Song of My People

East Street Prophet 518 writes beautifully about their hometown of Rensselaer, just across the Hudson River from Albany, NY, and about their experiences within the 518 area code: Albany, Rensselaer, and Troy, and various outlying places as well. They’ve been having a lot of fun with it, creating a bit of ‘folklore’ from local stories at 518 – Song of My People

~ Ordinary Philosophy is a labor of love and ad-free, supported by patrons and readers like you. Any support you can offer will be deeply appreciated!

*All views and opinions expressed by guest writers are their own and do not necessarily reflect those of Ordinary Philosophy’s editors and publishers

Abortion: Conflict and Compromise, by Kate Greasley

View of a Foetus in the Womb, c. 1510 – 1512, drawing by Leonardo da Vinci

A few years ago, when I told a colleague that I was working primarily on abortion rights, he looked at me quizzically and replied, “But I thought they had sorted all of that out in the seventies”. Needless to say, he was a scientist. Still, while the idea that the ethical questions implicated in abortion were somehow put to bed in the last century is humorous, I knew what he meant. The end of the ‘sixties and beginning of the ‘seventies marked watershed developments for reproductive freedom in both Britain and the U.S. – developments which have (with some non-negligible push and pull at the boundaries) continued to set the basic terms of abortion regulation ever since.

In Britain, the 1967 Abortion Act widely legalised termination of pregnancy for the first time and codified the grounds upon which abortions could be legally carried out. Shortly after, the 1973 Supreme Court decision Roe v Wade famously declared that there was a constitutionally protected right to abortion in the United States, albeit with some qualifications. Since those events, there have been no revolutionary changes to the system of abortion regulation on either side of the Atlantic, although there have been many meaningful ones.

Of course, legal resolution by no means signalled the end of moral disagreement about abortion. A significant minority voice has continued to vehemently oppose abortion practice. What was settled back then secured far more of a grudging détente than a happy compromise. (Like so much legislation, the Abortion Act was a product of political expediencies; I once heard one of its drafters describe the pandemonium of last-minute back-room deals in the Houses of Parliament, and the hotchpotch of provisions that emerged from all of the bargaining necessary to get it through.) As such, the political resolutions, whilst enduring, have always been intensely fragile, especially in the US where Christian conservatism and the anti-abortion lobby overlap so much. Of late, that fragility has become increasingly apparent. Recent developments in the United States and elsewhere have revealed just how misplaced any complacency about reproductive rights truly is.

It is, in truth, hardly surprising that abortion compromise is so precarious when one considers the nature of dissent to abortion practice. If one side of that debate really believes—as many claim to—that abortion is murder, akin to infanticide, then it is hard to see how they can ever truly accept legal abortion merely on the strength of its democratic pedigree. Against such a belief, rehearsing the familiar pro-choice mantras about women’s rights and bodily autonomy is a bit like shooting arrows at a Chinook helicopter. For what strength does control over one’s body and reproductive destiny really have when measured against the intentional inflicting of death on another?

Of course, if ideological opponents of abortion rights really believe that abortion amounts to murder, it may be hard to make sense of some of the traditional exceptions they themselves have defended, in circumstances, for example, of rape or incest, or where the pregnancy endangers the very life of the pregnant woman. If killing the fetus is no less than homicide, then how can it be justified even in these dire conditions? We certainly do not permit the out and out killing of born human beings for comparable reasons. This may be an indication that opponents of abortion who make such concessions do not truly, deeply, believe the claim that killing an embryo or fetus is like killing a child. Alternatively, it may just suggest that such concessions are rarely ever authentic, but adopted merely as a matter of political strategizing, to avoid losing moderate support in the wider conflict. If that were true, it would be unsurprising to see those traditional concessions gradually withdrawn as opponents of abortion become emboldened by increasing success.

Either way, defenders of abortion rights have a constant decision to make about how to respond to attacks on reproductive freedom and the denunciation of abortion as a moral horror. The approach most traditionally favoured, at least in public spheres, is to simply ignore all talk about abortion being murder and try to refocus attention on women’s stakes in abortion freedoms. As the Mad Men character Don Draper always quipped, “If you don’t like what’s being said, change the conversation”. This strategy can have its uses, but also its drawbacks. Most importantly, whilst reminding everyone of what women stand to lose through abortion prohibition is likely to strengthen the resolve of those sympathetic to abortion rights, it does nothing to address the consternation of those that are genuinely conflicted about the issue – who are not sure that abortion isn’t murder. As an effort to persuade avowed opponents of abortion rights to think again, it is even more pointless. For those who decry abortion as unjustified homicide do not usually need to be convinced that women can be hugely benefited by it, and harmed by its outlawing. That is not where their main ground of opposition ever lay.

It is for this reason that I think any effective defense of abortion rights must meet that opposition on its own terms, and confront the claims that abortion is homicide and the fetus the moral equivalent of a child. The task can seem daunting; how does one even begin to argue about whether or not unborn human lives have exactly the same right to life as mature human beings? But there are many reflections one can bring to bear on that question, and especially on the question whether, when examining our own or others’ beliefs, we are really committed to the claim that embryos are equal in moral value to human children. For one thing, as some philosophers have pointed out, if we really believed that claim, we may have to ask why infinitely more resources are not devoted to the prevention of natural miscarriage, which, it would follow, is the single biggest cause of child mortality – far greater than famine, disease, or war. At any rate, if defenders of reproductive freedoms do not concern themselves with the fundamental questions of abortion ethics, they are in danger of being left with little effective argument if and when the fragile settlements that have held for some decades threaten true collapse.

This essay was originally published at OUP Blog: Oxford University Press’s Academic Insights for the Thinking World

Can We Have More Than One Friend? According to Montaigne, No, by Manuel Bermudez

Michel de Montaigne, public domain via Wikimedia Commons

The Essais are the perfect companion for anybody, through all stages of life. It’s always interesting to explore Michel de Montaigne’s life and his marvelous book, the Essais. Within his lifespan, Montaigne was able to find true friendship for himself and record its effects therein. Here we propose to trace Montaigne’s approach to friendship.

In his Nicomachean Ethics Aristotle wrote that friendship was “one soul in two bodies.” Montaigne, on the contrary, always thought that friendship was a free exchange between two people.

Montaigne thought that true friendship was rare. He himself acknowledges having found only one proper friend in his life: Etienne de La Boétie. And he could enjoy this friendship for a mere four years. They met as adults, and death took Montaigne’s soulmate early. An irreplaceable loss. After La Boétie’s death (in 1563), Montaigne didn’t feel the desire to find a substitute for his dead friend. Perhaps the reason was that our French friend knew intuitively that such a profound bond could only happen once in a lifetime.

Is it possible to have many different friends at any given time? According to Montaigne, true friends are not only scarce, but they should be unique, if only for loyalty’s sake.

If two friends asked you to help them at the same time, which of them would you dash to? If they asked for conflicting favours, who would have the priority? If one entrusted to your silence something that was useful to the other, how would you manage?

— Montaigne, Essais, “On Friendship”

Portrait of Montaigne by an unknown artist. Public Domain via Wikimedia Commons

The dilemma posed here has an easy solution for Montaigne, since the balance will always tip towards one of the two. A succession of such choices would lead him to the real friend. Thus it would be proved that true friendships tend towards uniqueness.

When Montaigne talks about friendship, he does so from his own feelings towards a person of flesh and blood: Etienne de La Boétie. He transferred what he felt for his kindred spirit to the Essais. He loved his friend to the point where he felt despondently lost when La Boétie died. Montaigne attempted to find solace in his writing about La Boétie, even though he failed to portray the true nature of their relationship.

We can find a good deal of mystery in a friendship like the one these two men had. That strange and powerful empathy that Montaigne tried to describe is difficult to understand. Montaigne concludes: “They are unimaginable facts for those who have not tried them.”

Montaigne distinguished true friendship from ordinary friendship. Ordinary friendships have, in one way or another, self-interest behind their development: they are an investment made not with money, but with affection. True friendship, on the other hand, is described by the author in the following words:

For the rest, which we commonly call friends, and friendships, are nothing but acquaintance, and familiarities, either occasionally contracted, or upon some design, by means of which, there happens some little intercourse betwixt our souls: but in the friendship I speak of, they mix and work themselves into one piece, with so universal a mixture, that there is no more sign of the seam by which they were first conjoyn’d.

— Montaigne, Essais, “On Friendship”

In an attempt to describe the nature of his friendship with La Boétie, Montaigne concludes with his famous expression: “If a man should importune me to give a reason why I lov’d him; I find it could no otherwise be exprest, than by making answer: because it was he, because it was I.”

We can only add here that in the Bordeaux copy of the Essais, Montaigne first wrote “because it was he.” Later he added “because it was I.”

This essay was originally published at OUP Blog: Oxford University Press’s Academic Insights for the Thinking World

Read Montaigne’s essay ‘On Friendship’ here

Manuel Bermudez is Professor of Philosophy at the University of Cordoba, Spain. He is the author of the Oxford Bibliographies in Philosophy article “Michel de Montaigne.” (Bio credit: OUPblog)

~ Ordinary Philosophy is a labor of love and ad-free, supported by patrons and readers like you. Any support you can offer will be deeply appreciated!

*All views and opinions expressed by guest writers are their own and do not necessarily reflect those of Ordinary Philosophy’s editors and publishers

Why I Am Not Going to Buy a Cellphone, by Philip Reed

Image by terimakasih0, public domain via Pixabay (cropped)

It is mildly subversive and perhaps a little quaint when someone clings to their flip phone and refuses a smartphone. Refusing both kinds of phones is viewed as downright lunacy, especially if the person refusing was born after the mid-1970s. But I’ve never had a cellphone and I’m not going to get one. I have several reasons, and they are good ones.

The first is cost. No cellphone means no monthly bill, no possibility for an upgrade, no taxes, and no roaming charges (whatever those are). In an era of stagnant wages and growing income inequality, it is remarkable that people unthinkingly spend $75 or more per month on something that we hardly knew existed 15 years ago, much less counted as a necessity.

The second is concern for the environment. The manufacture of mobile phones (including raw material acquisition), the power they consume, and the energy used to transmit calls and access the internet all produce significant carbon dioxide emissions. The idea that cellphones are good only for a couple of years is widespread, increasing the number of phones that end up in landfills and leak toxic heavy metals such as copper and lead into the soil and groundwater.

The decisive reason, however, for me to refuse a cellphone is the opposite of everyone else’s reason for having one: I do not want the omnipresent ability to communicate with anyone who is absent. Cellphones put their users constantly on call, constantly available, and as much as that can be liberating or convenient, it can also be an overwhelming burden. The burden comes in the form of feeling an obligation to individuals and events that are physically elsewhere. Anyone who has checked their phone during a face-to-face conversation understands the temptation. And anyone who has been talking to someone who has checked their phone understands what is wrong with it.

Communicating with someone who is not physically present is alienating, forcing the mind to separate from the body. We see this, for example, in the well-known and ubiquitous dangers of texting while driving, but also in more mundane experiences: friends or lovers ignoring each other’s presence in favour of their Facebook feeds; people broadcasting their entertainment, their meals, and their passing thoughts to all who will bear witness; parents capturing their daughter’s ballet performance on their phones rather than watching it live; people walking down the street talking animatedly to themselves who turn out to be apparently healthy people using their Bluetooth.

The cellphone intrudes into the public and private realms, preventing holistic engagement with what is around us. Smartphones only perfect their predecessors’ ability to intrude.

The disembodying and intrusive effects of cellphones have significant implications for our relationships to the self and to others. Truly knowing and understanding others requires patience, risk, empathy, and affection, all of which are inhibited by cellphones. Cellphones also inhibit solitude, self-reflection, and rumination (formerly known as ‘waiting’ and ‘boredom’), which I think are essential for living a good life.

Long before cellphones, human beings were good at diverting themselves from disciplined attention. ‘The sole cause of man’s unhappiness,’ observed the French philosopher Blaise Pascal in the 17th century, ‘is that he does not know how to stay quietly in his room.’ This propensity for diversion was notably confirmed in a recent study where subjects preferred to give themselves electric shocks rather than occupy themselves with their own thoughts for 15 minutes.

Pascal believed that the height of human dignity is thought, and that the order of thought begins with oneself, one’s creator, and one’s end. He linked this kind of thought inextricably to genuine rest and happiness. Avoiding a cellphone allows, for me, space for thinking and so enables a richer, more fulfilling way of life. With fewer tasks to perform and preferences to satisfy, life slows to a pace compatible with contemplation and gratitude.

A cellphone-free life not only helps to liberate the mind, but also the body. The ancient Greek philosopher Anaxagoras presents a different view of human nature from Pascal: ‘It is by having hands that man is the most intelligent of animals.’ We can be pretty sure that Anaxagoras was not anticipating the advent of smartphones. On the contrary, refusing a cellphone enables one to use one’s hands to carry out meaningful activities (playing the piano, gardening, reading a book) in such a way that one is fully absorbed in those activities, so that they reach their height of meaning.

Without a mobile phone, it is easier to concentrate on what is in front of me: my spouse and children, my work, making dinner, going for a walk. I try to choose my activities thoughtfully, so when I do something, I don’t want to be somewhere else. What cellphone users call multitasking does not interest or impress me.

Of course, it’s true that cellphones can be used responsibly. We can shut them off or simply ignore the incoming text. But this takes extraordinary willpower. According to a recent Pew survey, 82 per cent of Americans believe that cellphone use in social situations more often hurts than helps conversation, yet 89 per cent of cell owners still use their phones in those situations. Refusing a cellphone guarantees that I won’t use it when I shouldn’t.

Some people will insist that if I’m going to refuse a cellphone, I should also refuse a regular telephone. It is true that using a landline introduces disembodying, mediated experiences similar to those of mobile phones. But there have always been natural and physical limits placed on the use of a regular phone, which is clear from the name ‘landline’. The cellphone’s mobility introduces a radical form of communication by making its alienating effects pervasive. I want to protect what unmediated experiences I have left.

The original meaning of ‘connect’ indicated a physical relationship – a binding or fastening together. We apply this word to our cellphone communications now only as metaphor. The ‘connections’ are ethereal; our words and thoughts reach the upper regions of space next to the cell tower only to remain there, as our devices disconnect us from those with whom we share space. Even though we have two hands, I’m convinced that you can’t hold a cellphone and someone else’s hand at the same time.

This article was originally published at Aeon and has been republished under Creative Commons.

Philip Reed is an associate professor of philosophy at Canisius College in Buffalo, New York State. His scholarly interests are in ethics and moral psychology. (Bio credit: Aeon)

~ Ordinary Philosophy is a labor of love and ad-free, supported by patrons and readers like you. Any support you can offer will be deeply appreciated!

*All views and opinions expressed by guest writers are their own and do not necessarily reflect those of Ordinary Philosophy’s editors and publishers