O.P. Recommends: Model Hallucinations by Philip Gerrans

Brain illustration from ‘The Principles and Practice of Medicine’ by W. Osler, 1904, public domain via Wikimedia Commons


Psychedelics have a remarkable capacity to violate our ideas about ourselves. Is that why they make people better?


Psychedelic drugs are making a psychiatric comeback. After a lull of half a century, researchers are once again investigating the therapeutic benefits of psilocybin (‘magic mushrooms’) and LSD. It turns out that the hippies were on to something. There’s mounting evidence that psychedelic experiences can be genuinely transformative, especially for people suffering from intractable anxiety, depression and addiction. ‘It is simply unprecedented in psychiatry that a single dose of a medicine produces these kinds of dramatic and enduring results,’ Stephen Ross, the clinical director of the NYU Langone Center of Excellence on Addiction, told Scientific American in 2016.

Just what do these drugs do? Psychedelics reliably induce an altered state of consciousness known as ‘ego dissolution’. The term was invented, well before the tools of contemporary neuroscience became available, to describe sensations of self-transcendence: a feeling in which the mind is put in touch more directly and intensely with the world, producing a profound sense of connection and boundlessness….

Read this excellent article in full at Aeon: committed to big ideas, serious enquiry and a humane worldview

Ordinary Philosophy and its Traveling Philosophy / History of Ideas series is a labor of love and ad-free, supported by patrons and readers like you. Please offer your support today!


Compassion, Emptiness, and the Heart Sutra, by Ryan V. Stewart

One of the chief concerns of philosophy, since time immemorial, has been to properly address the question, “How do I live?” Namely, “How do I live well?” Naturally—for as long as our species has had the wherewithal to question its purpose and condition, the problem of ethics has found itself at the frontiers of human thought. Many moral philosophies have since rushed into that wide gulf between knowledge and truth, systems of understanding and action which attempt to conquer our ethical indecisiveness and color in a void where so much uncertainty exists.

Many traditions prescribe the ideal, virtuous, or noble life. From the ancient, academic, or political—e.g. Epicureanism, utilitarianism, humanism, or libertarianism—to the more mystical or overtly religious—e.g. Jainism, Christianity, or Taoism—many are concerned with how one acts (or can act), or at least how one views oneself in relation to others and to the world at large.

The Buddhist religion—though some prefer to see it as a philosophy—is one such tradition. An ancient and diverse faith, Buddhism is perhaps best known for its thorough and egalitarian moral philosophy. And while it is indeed diverse (containing a huge number of schools, most with their own interpretive methods and styles of meditation and ritual practice), compassion (or karuna) is treated as an important trait of an enlightened being, and a cornerstone of Buddhist thought, in all sects. Thus, throughout the various iterations of Buddhism—from the forest-monk Theravada of Thailand to the Zen habits of Japan—one finds a remarkably consistent system of normative ethics, and a conceptual framework for promoting the wellbeing of all sentient creatures.

That all being said, the ontological concepts which necessitate Buddhist compassion are perhaps most concisely expressed by the Mahayana (“great vehicle”) tradition—one of two or three major branches into which Buddhism is often divided. Buddhism maintains a unique connection between metaphysics and ethics, and the deeply profound philosophies of many Mahayana thinkers, and those presented in a number of the Mahayana sutras (Buddhist sacred texts), can be seen as an attempt to navigate and define that sort of connection.

Where does one begin in exploring such a notion? The corpus of Buddhist literature is impossibly vast, and anyone could spend a lifetime pondering so many mystical works and their commentaries. I would argue, however, that in order to form at least a basic understanding of the Mahayana ethical-metaphysical relationship, we need look no further than the ancient and seminal Heart Sutra.

Written as a dialogue and teaching, the Heart Sutra is a brief monologue on the part of a bodhisattva (an enlightened being who has postponed his or her salvation in order to remain in the world and aid sentient beings) by the name of Avalokiteshvara. Avalokiteshvara (known as Guanyin in China and Chenrezig in Tibet—the latter where the Dalai Lama is considered his living manifestation), whose name roughly translates to “the lord who looks down”—that is, he looks down upon the world of the unenlightened with charity and love—is a bodhisattva representing perfect compassion. Befitting this disposition, the Avalokiteshvara of the Heart Sutra provides a mortal monk, Sariputra, with a profound teaching, a parcel of great wisdom intended to eradicate the suffering of sentient beings. Avalokiteshvara’s great revelation is that all phenomena are, in fact, “empty.” (Red Pine, p. 3.)

To clarify, the bodhisattva is saying that everything in the world lacks an inherent “self,” or essence. This concept finds its origin in the oldest, pre-Mahayana forms of Buddhism, and the Buddha himself noted that “no-self” is, along with impermanence (anicca) and unsatisfactoriness (dukkha), one of three “marks” which constitute the nature of reality. Anatta, for the historical Buddha, and for the older Theravada school, mostly implies that nothing in the world can be said to be one’s “self” (atman), and thus identifying anything as representing the essence of oneself—a “self” made up only of non-self parts (cf. Hume’s “bundle theory”)—is delusory. The self-concept, for the Buddha, is an illusion of essence which binds human beings to false worldviews and causes them to misrepresent reality, thus barring them from enlightenment.

The falseness of self-hood forms a bedrock of Buddhist philosophy. In the “emptiness” (sunyata) of the Mahayana, however, we find anatta further developed. (Nagao, pp. 173-174.) We owe this development chiefly to Nagarjuna, a second-century Buddhist philosopher who founded the Madhyamaka school of Mahayana Buddhism and developed the philosophies (most notably sunyata) inherent in the Mahayana Prajnaparamita sutras. In his text Fundamental Verses on the Middle Way, Nagarjuna famously states, to this effect, “All is possible when emptiness is possible. Nothing is possible when emptiness is impossible.” Thus emptiness, in this sense, is not mere nothingness, but an open space in which all potential exists. Hence one cannot truly differentiate being and nonbeing, total emptiness and total “allness,” infinite nothing and infinite something. One implies, or necessitates, the other. In the Heart Sutra, Avalokiteshvara explains this to Sariputra when he says, “Form is emptiness, emptiness is form; emptiness is not separate from form, form is not separate from emptiness; whatever is form is emptiness, whatever is emptiness is form.” For the Mahayana school, emptiness is not merely the surrogate for essence, a blank space where one imagines the self, but the numinous nature of all things.

Now, this emptiness implies another notion (this one common to Buddhism on the whole), “dependent origination.” (Pratityasamutpada.) According to this doctrine, all phenomena exist dependently, depending upon one another in order to maintain their existence. (Encyclopedia Britannica, n.p.) Thus, there is no free, permanent, inherent, and individual existence for any beings or objects. Their causes and conditions are inextricably linked to the unending web of phenomena arising and passing away in the universe. One thing cannot be said to be truly separate from anything else when it exists, in time and space, alongside all else, and when it requires for its existence the (falsely!) extraneous forces of nature, matter, and energy. Thomas J. McFarlane, writing in his 1995 essay “The Meaning of Sunyata in Nagarjuna’s Philosophy,” sums this up nicely when he states, “According to Madhyamika, the root of all suffering lies in… the error of mistaking the relative for the absolute, the conditioned for the unconditioned. We take imagined separation as real, supposed division as given.”

An easier, less flowery way to illustrate these three ideas—no-self, emptiness, and dependent origination—in tandem may be to imagine something made up of familiar parts: A tree, for instance. We all know that trees are made of a variety of components—trunk, roots, branches, leaves. Where, then, lies the “treeness” of the tree, in the tree? Is the tree its roots? Is the tree its leaves? No? Why not? One may say, “‘tree’ is the name we give to the sum of the parts of the tree,” but then, of course, “tree” is reduced to a mere name and concept, not a thing-in-itself. Whatever can be called a “tree” is ultimately made up of non-tree parts, and thus there is no “treeness” at all. Similarly, the self, though imagined as one’s essence or “soul,” can be divided into mental phenomena and parts of the body. (All of which are subject to constant change, hence anicca.) There is no “selfness” for this self, only the experience of a very familiar concept by the same name. Thus Avalokiteshvara tells Sariputra, “Whatever is form is emptiness, whatever is emptiness is form. The same holds for sensation and perception, memory and consciousness.”

We now have a grasp on no-self (anatta), and can see how easily it translates to no-essence-anywhere-at-all (sunyata) by analyzing everything down into its (non-) fundamental components, components which themselves are also made up of other components, and so on and so forth. One can continue dividing things and concepts forever, down to an infinitesimal. (Granted, this is a conceptual exercise, and the author omits any claims about concrete physics for lack of sincere knowledge of the subject. Regardless of Planck lengths and fundamental particles, confer “infinite divisibility.”)

But how does emptiness translate into connection, into pratityasamutpada? Let us continue with the metaphor of the tree, and observe how the tree is not only constructed of non-tree parts, but dependent on the conditions of the world at large for those very constructs: a tree, like all lifeforms, requires a number of inputs from its surrounding environment in order to survive and thrive. Water, soil, and sunlight most readily come to mind. Water, for instance, rains down from clouds, which themselves are formed from atmospheric water vapor. Soil is produced over many years by the degradation of organic matter, and organic matter comes from other lifeforms, which exist by dint of their evolutionary ancestors. Sunlight reaches the Earth from the Sun, which itself was formed billions of years ago from sparse bits of matter produced in the Big Bang.

What sort of philosophical quandary do we run into here, then? For our purposes it is best to put the question this way: “At what point is the tree no longer itself?” That is to say, “At what point is any ‘one’ thing separable from the causes and conditions that give rise to it, or the causes and conditions that it gives rise to?”

If we take this notion—this pratityasamutpada—to its logical conclusion, we come to recognize that we are, in order to exist, dependent upon components and conditions outside of ourselves; that everything, in fact, is; that the entire universe is one integrated system, and in some sense an entity-unto-itself.

One response to such a situation—and an understandable one, at that—may be that of deep compassion: the fact that we are dependent upon all else, and all else upon us, in some sense (and albeit in a small way), gives us reason to care for the world (and other beings especially) as if the welfare of other things and beings was the same as ours… as if we have no “self” apart from that of the world at large, the welfare of which—from crabs to carpenters—is our concern as beings endowed with the capacity for both suffering and empathy. No doubt, it’s this very suffering which binds us to one another. We, knowing our own pain, can experience it vicariously, through others. Philosophers as diverse as Marcus Aurelius, Hume, and Schopenhauer understood the universality of pain and empathy, and thus their importance in morality.

The Buddha, of course, realized the same. And while you may not find a be-all-end-all answer to the big ethical problems in one particular system, opting instead (as many do) for an eclectic approach to moral philosophy, you have to admit that Buddhism provides one of the most intricate (and practical) answers to our moral quandaries.

As long as human beings have questioned the nature of their freedom and manners of life and livelihood, philosophy has helped fill the void inherent in the realms of “good” and “evil” and everything in between. Buddhism presents us with one particularly bright light, helping to illuminate the murky and ever-indefinite realm of philosophical inquiry.

About the author ~ Ryan V. Stewart is a writer and student from Connecticut. He has been actively writing since 2006, and blogs about everything from mysticism and philosophy to environmental issues, the arts, and personal peeves at The Grand Tangent. He’s interested in the intersection of mysticism, comparative religion, and philosophical analysis (among other things). 



Red Pine, trans. The Heart Sutra. Washington, DC: Shoemaker & Hoard, 2004. Print.

Nagao, Gajin. Mādhyamika and Yogācāra: A Study of Mahāyāna Philosophies. Delhi, India: Sri Satguru Publications, 1992. Print.

“Paticca-samuppada.” Encyclopedia Britannica Online. Encyclopedia Britannica, n.d. Web.

But My Brain Made Me Do It!

Brain illustration from ‘The Principles and Practice of Medicine’ by W. Osler, 1904, public domain via Wikimedia Commons

There’s a common idea which leads many people (myself included) to instinctively excuse our own or others’ less-than-desirable behavior because we were under the sway, so to speak, of one or another mental state at the time. This is illustrated especially clearly in our justice system, where people are routinely given more lenient sentences due to the influence of strong emotion or of compromised mental health at the time the crime was committed. “The Twinkie Defense” is an infamous example of the exculpatory power we give such mental states: Dan White claimed that his responsibility for the murder of two people was mitigated by his depression, which in turn was manifested in and worsened by his addiction to junk food. We routinely consider ourselves and others less responsible for our wrong actions if we suffered abuse as children, or because we were drunk or high at the time, or we were ‘overcome’ with anger or jealousy, and so on.

But when we think about it more carefully, there’s something a little strange about excusing ourselves and others in this way for doing wrong. What we’re saying is, in a nutshell, “But my brain made me do it!”

It’s strange because no matter what we do, our brains always ‘made us do it’.

Perhaps these kinds of excuses are a relic of the ‘ghost in the machine’ variety of traditional belief, in which we are a kind of meat-machine body ‘possessed’ by a soul. In that view, the strong emotion or mental illness, then, could be yet another kind of possession which overrides the rational and moral soul.

Yet, even if one believes that sort of explanation, that’s not a very satisfying justification of why we would accept some mental states as exculpatory and not others. Why does extreme anger, such as that of a wronged lover or a frustrated driver, excuse or partially excuse an attack, but lust does not? Both are powerful emotions that all too often promote the worst behavior. Why is jealousy so often considered an acceptable excuse, but greed is not? And the ramifications of this view go beyond relieving ourselves from the burden of responsibility: it means we can’t take or give credit for our good actions, for the most part. Emotions and states of mental health inspire and make us capable of doing those things, too, in just the same way. But we generally don’t tend to have the same sort of intuitions or apply the same sort of reasoning to credit as we do to blame.

So whatever theory or philosophy of mind we subscribe to, we need to explain why we are so inconsistent on this issue. Of course, much of it can be explained in cultural terms: in an honor-driven culture, for example, anger and jealousy result from an affront to one’s honor, so to feel those emotions is just and right for an honorable person. A crime committed under the sway of these emotions, then, is mitigated by the justness and rightness of the feelings, even if they inspired wrong action. On the other hand, greed and lust are not considered a just or right emotional reaction in any case in a culture underpinned by fairness, equality, and individual rights; therefore, any crime committed under their influence would, of course, be contrary to those values and not be mitigated.

Yet culturally-derived explanations, of course, aren’t the same as justifications. They just explain why people in different times and places happen to react and feel the way they do; they don’t offer a justification of why anyone should accept one emotional state and not another as a mitigating factor, nor do they explain why we should think some emotions somehow make one less responsible for, or less able to control, their actions than other emotions do.

The latter, of course, is an empirical question: a neuroscientist may be able to detect that the processes that produce some emotions make it impossible or at least highly unlikely that a person can engage in rational thought, or ‘put on the brakes’, so to speak, when certain provocations occur. But until we find out otherwise, it appears evident, from the fact that most people are generally cooperative and don’t purposefully harm one another, that adult people above a certain basic intelligence level are generally capable of forming good and responsible habits, which makes it unlikely that they would react wrongly or criminally when provoked or titillated. This is especially true when the people involved in their upbringing, and the society in which they live, expect good behavior from all, and hold people responsible for bad behavior.

This is true whether or not behaving in the right way is easy. Many of the excuses offered in defense of people who do wrong sound to me simply like evidence that it was harder for the person to behave well than to behave badly at the time. Yet mores and laws don’t exist because it’s always easy for people to get along, to respect one another, to help one another and avoid harm. They exist because it’s often hard to be a good person and a good citizen. So many of these excuses, then, do much to illustrate why mores and laws need to exist, and not so much to demonstrate why the offender was less responsible for their own behavior at the time.

The reasons that we can hold people responsible for their own actions, whether or not they occurred in an emotionally stormy moment, are the same reasons that people can be admired and given credit for them. The acts and thoughts which we judge praiseworthy as well as blameworthy are those which the person could conceivably have chosen to do otherwise, even if we grant it’s unlikely that they would have chosen otherwise, and which the person did in their own capacity. Personal responsibility is a burden, but even more so, it’s an honor. It means that what you do is you, in a very important sense, since the mind is the author and seat of consciousness, and all of its activity is a form of doing. We, in the sense of being a person, a self, are what our brain does.

The brain is not like a predetermined computer program; within certain parameters, it can be molded and formed, by influences from others but even more so by our own choices, which over time form habits. So it’s up to each one of us to use our judgment, surround ourselves with good influences, and form good habits: in any given day, in any given life, each person is faced with myriad options in thought and behavior. For the most important matters, we stop and reflect; but there are simply too many to judge each one carefully, so most of the time it’s best to purposely form good habits so that, in the countless reactions we have and choices we make, we’ll tend more easily to go for the better rather than the worse.

There are, of course, special circumstances to consider in matters of wrongdoing or crime committed by the young, or by a person with a debilitating mental illness, or a person mature in age with undeveloped mental capacities. All of these involve some diminished or absent capacity for exercising judgment in making a choice, or a diminished degree of consciousness. Young people, for example, lack the structures of the physically mature brain which make it capable of making considered decisions and of putting the brakes on powerful-emotion-driven impulses. The prefrontal cortex, where much of the capacity to exercise self-control resides, doesn’t fully develop until after puberty. It seems, then, to make sense that we generally don’t hold the young as responsible for their actions in the way we do adults. Yet, with all we know about how the brain works, I find it astonishing and often horrifying that in the United States we often try the young as adults, teens and even pre-teens, when they commit particularly heinous crimes. I’d argue that not only are they incapable of controlling their emotions and of reasoning as fully as adults are, and therefore shouldn’t be considered responsible in the same way, but that the very heinousness of the crime is evidence of the lack of maturity, of the ability to make rational judgments, which forms the basis of any coherent concept of personal responsibility. The trying of youths as adults in the courts reveals that all too many people haven’t given enough thought to what personal responsibility really means, and don’t have the proper respect for it.

Since it’s always your brain that makes you do anything, culpability should be assessed according to whether or not your brain is capable of making a different choice, again, even if it’s unlikely you would have done so. That holds true even in many cases of so-called ‘temporary insanity’ or ‘acting under the influence’. Generally, the brain of an adult person who maintains their own survival and enjoys the liberty of an independent adult is also functioning at a level sufficient for responsibility. For example, if you run someone down in your car while drunk or texting on your phone, you are probably also a person capable of arranging for a taxi to take you home from the bar or refusing that last drink, and are aware of the widely published evidence that texting is strongly correlated with auto accidents.

In sum: the issue of personal responsibility should not hinge on whether or not it was easy for us to make one choice, to behave one way instead of another, but on whether we ourselves, always the product of a living brain, are capable of doing otherwise.


*This piece, originally published Aug. 6, 2014, was edited lightly on Aug. 5, 2016 for clarity and flow

Free Will and the Self

Free will and the self. What are they? While they are the two most important phenomena to each and every one of us, they’re notoriously hard to describe. Of course, we ‘know’ what they are: respectively, they are the experience, the feeling, of being in control of our own actions, of our thoughts and behaviors, and of having an identity and a personality that exists over time. Without them, our lives seem pointless: if we have no free will, then we are mere automatons, and we can take no credit and no responsibility for anything we do. If we have no self, then there is no we, no ‘I’, at all.

Experiments and scholarship by neuroscientists, biologists, psychologists, and others have revealed some startling things about the workings of the brain and how human beings think, behave, and make decisions. The field of neuroscience has grown by leaps and bounds in the past few decades since the advent of technologies that allow us to observe the living brain at work, though we’ve been learning much about the brain over the last few centuries by observing the results of damage to its various parts. These results have, in turn, thrown traditional accounts of the self and free will into question.

Many have gone so far as to say that since we’ve discovered that our actions result from the cause-and-effect processes of a physical brain, then we have no free will: our actions and all our thoughts are determined by the cause-and-effect laws of nature. And since we’ve discovered that the sense of self arises from the confluence of the workings of the parts of the brain, and damage or changes to those parts can cause radical changes to our personalities and the ways we feel about the world and ourselves, that the self is an illusion too.

Yet, how can free will and the self not exist when we experience them throughout our lives? Since we can talk about them to one another, they must exist in some sense, at least. And it’s not that they exist in the way that fictional characters in a story exist, for example, or other artificial creations. We experience these phenomena intimately, from the time we attain consciousness early in life, until the time our brains are so aged or damaged that we are conscious no longer. The concepts of free will and the self are ubiquitous in our language, our culture, the very way we think. Read this paragraph again, review all the thoughts you’ve had in the last hour (and ever had, in fact) and you’ll find that the concepts of ‘I’, ‘you’, ‘we’ (distinct selves in the world), combined with the concept of some action or thought purposely performed by one’s self, are a constant theme. In fact, almost everything we talk and think about would be incoherent without these concepts, and all the purposes that drive us would disappear and render all we do meaningless.

So what gives? How do modern discoveries about the workings of the brain jibe with traditional concepts of free will and the self?

It appears the confusion results from the way we use the terms. There are actually two things we’re referring to. One is the actual experience of the phenomena we call ‘the self’ and ‘free will’. The other is how we account for them, how we define them and explain how they work.

Consider what we mean by other terms, such as ‘disease’. At one time or another, we’ve had various explanations as to what disease is and how it is caused. One popular explanation that convinced people for hundreds of years: disease is the result of the imbalance of the four humours of the body: blood, black and yellow bile, and phlegm. A physician’s job is to restore the balance and so bring about cure. Other explanations are vitalist (life and health are the result of the interaction between some sort of non-physical spiritual ‘force’ or ‘energy’ and a physical body), such as chiropractic and traditional Chinese medicine. Here, disease is caused by some sort of disruption in the body, and a physician’s job is to correct the alignment and communication channels of the parts of the body so that the vital force can flow freely and restore health. And the most pervasive and popular explanation of disease throughout human history, of course, is that it’s caused by the vengeance of an angry god or the maleficence of evil spirits (witch-burnings, anyone?).

Over time, human beings invented and developed the scientific method pioneered by Francis Bacon, and began to more carefully examine the correlations of disease symptoms with the circumstances in which they occurred (outbreaks of cholera mapped so that the epidemic was revealed to center on a polluted source of drinking water in 1850s London; the correlation of damage to particular parts of the brain with the symptoms of brain-damaged patients; the dissection of corpses, comparing diseased organs to healthy ones). With the discoveries of the physical causes of disease, by pathogens and by damage to parts of the body, effective cures were finally able to be developed.

Given that modern explanations for the origins of disease are based on the understanding that it’s natural, the result of physical processes, rather than on traditional explanations, does that mean that disease can no longer be said to exist? Does that mean we have to come up with an entirely new terminology? I don’t think it does. The term ‘disease’ refers to instances of the body suffering in some way, not functioning as a healthy body does. What we do when we are confronted with the phenomenon of disease is the same as it ever was: we seek to avoid it, we detest being afflicted by it and seeing others afflicted by it, we seek to understand its causes, and we seek to cure it.

Similarly, the denial of the existence of free will and the self is based on the misguided assumption that understanding the inner workings of a thing, in a way incompatible with traditional explanations, is to deny that the phenomenon exists at all. To understand that the mind is the product of a physical brain obeying the laws of nature rather than a sort of spirit or soul inhabiting a machine-body is not to say the mind doesn’t exist. The experience of free will and the self is the same either way, and whether what makes the ‘I’ an ‘I’ is better explained naturally or supernaturally makes no difference. We are still agents, it’s still what’s going on in our brains that causes everything we do, we still make choices, and it’s still ‘we’ that make them.

In sum, discovering how the phenomena we experience that we’ve dubbed ‘free will’ and the ‘self’ really work doesn’t mean that they don’t exist; it just means we understand more about them now. And to me, as to other lovers of knowledge and understanding, that’s a good thing.

* Also published at Darrow, a forum on the cultural and political debates of today


Sources and inspiration:

Dennett, Daniel. ‘On Free Will Worth Wanting’. Interview on Philosophy Bites by David Edmonds and Nigel Warburton. http://philosophybites.com/2012/08/daniel-dennett-on-free-will-worth-wanting.html

Klein, Jürgen, “Francis Bacon”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.) http://plato.stanford.edu/entries/francis-bacon/

‘John Snow’, from BBC History series. http://www.bbc.co.uk/history/historic_figures/snow_john.shtml

Kean, Sam. Interview on Inquiring Minds podcast by Indre Viskontas and Chris Mooney, published June 12, 2014. https://soundcloud.com/inquiringminds/38-sam-kean-these-brains-changed-neuroscience-forever

Metzinger, Thomas. The Ego Tunnel: The Science of Mind and the Myth of the Self. New York: Basic Books, 2009 http://www.goodreads.com/book/show/5895503-the-ego-tunnel

Shariff, Azim F. and Kathleen D. Vohs. ‘What Happens to a Society That Does Not Believe in Free Will?’ Scientific American, June 2014. http://www.scientificamerican.com/article/what-happens-to-a-society-that-does-not-believe-in-free-will/

 Wikipedia (various authors): ‘Daniel David Palmer’ (founder of chiropractic), ‘Humorism’, and ‘Traditional Chinese Medicine’