Personal Responsibility and Collective Action Problems

In a recent essay, ‘But My Brain Made Me Do It!’, I argue that many attempts to evade or minimize personal responsibility for one’s actions are misguided. The concept of personal responsibility exists not only to impart personal and societal meaning to human behavior, but to assign accountability. After all, if human beings could not be required to fulfill responsibilities or make restitution for harms done, societies could not function and group living would be impossible. Many attempts to evade personal responsibility consider only how easily one might have acted one way or another, and ignore two other key factors which lend the concept its weight and force in the first place: whether the person could have acted otherwise, and whether the person was in fact the one who performed the action. Therefore, attempts to isolate deliberate intention alone, and to disregard other factors in matters of personal responsibility, undermine the nature and utility of the whole concept.

In the United States, debates about the meaning and ramifications of personal responsibility surround not only crime and punishment, but also public policy dealing with collective action problems such as pollution, overpopulation, gun control, defense, law enforcement, and access to health care. These types of problems result from individual choices en masse, so that personal responsibility may be difficult to assign to any one individual. Yet these problems would not exist unless all of those individuals chose to act the way they do. Collective action problems affect so many people, are so complex, and are so expensive, that a solution to them requires mass participation: many individuals each required to take part in solving the problem.

Yet solutions are often difficult to find because of the personal responsibility problem: how do we hold particular people responsible for solving a collective action problem when their individual choice is merely a ‘drop in the bucket’, so to speak? If personal responsibility is so narrowly conceived that one is only held responsible when there is a clear and direct link from the act in question to the entirety of the consequence, and when one must have (mostly) understood the consequence of the action beforehand, then we must allow that no-one can be held responsible for most collective action problems. But if we take a more robust view, that people can be held responsible for what they do and the consequences that flow from it, even if the consequences cannot be foreseen or intended, then we do have the right to call on the community to do what they can to fix the problem, be it through contributions of money or effort, through reparations, through accepting (just) punishment, or through other means.
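To make the ‘drop in the bucket’ worry concrete, here is a minimal illustrative sketch in Python (every figure below is invented for illustration, not drawn from this essay): under the narrow standard, no single contributor’s act crosses the harm threshold, so no-one counts as responsible; under the more robust, proportional standard, each contributor bears a small but real share of the joint outcome.

```python
# Illustrative toy model (all numbers invented): why a narrow, "direct link to
# the entire consequence" standard of responsibility excuses everyone for a
# collective harm, while a proportional standard does not.

N_PEOPLE = 1_000_000             # hypothetical community size
EMISSION_PER_PERSON = 0.002      # hypothetical tons of pollutant per person per year
HARM_THRESHOLD = 100.0           # hypothetical tons at which serious harm occurs

total_emissions = N_PEOPLE * EMISSION_PER_PERSON   # 2,000 tons: far past the threshold
harm_occurs = total_emissions >= HARM_THRESHOLD    # True: the collective outcome is harmful

# Narrow standard: responsible only if your act alone produced the harm.
narrowly_responsible = EMISSION_PER_PERSON >= HARM_THRESHOLD   # False for every individual

# Robust standard: each contributor bears a share proportional to their contribution.
proportional_share = EMISSION_PER_PERSON / total_emissions     # one-millionth of the harm

print(f"Harm occurs: {harm_occurs} ({total_emissions:.0f} tons vs. threshold {HARM_THRESHOLD:.0f})")
print(f"Anyone responsible under the narrow standard? {narrowly_responsible}")
print(f"Each person's share under the proportional standard: {proportional_share:.2e} of the harm")
```

Invented as the numbers are, the structure is the point: the harm exists only because of the million individual choices, yet the narrow standard finds no-one answerable for any of it.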

In my ‘Brain’ essay, many of my arguments supporting a robust view of personal responsibility are consistent with a typically American conservative viewpoint, though some of my conclusions relating to particular public policies may differ. (For example, when it comes to criminal justice, I favor a reparative/restorative system over a punitive one, and restraint over zeal in enforcement of all but the most serious crimes, but those are topics for other essays.) When I apply the same arguments to collective action problems, however, the result is more consistent with a progressive approach to public policy as well as to morality.

A robust view of personal responsibility, I find, entails that individuals are morally obligated to contribute, through taxes or otherwise, to programs that preserve and promote the health, protection, and basic well-being of society as a whole. I argue this for two reasons. First, it is individual choices, albeit in the aggregate, that create collective action problems. (I address this issue in a past essay, in my example of the Dust Bowl crisis of the 1930s United States, where the individual decisions of farmers to ‘get rich quick’ created a crisis for everyone, including those who had decided to farm more prudently and responsibly.) Therefore, members of a society should contribute to solutions, or make reparations, for the harms to others that result, directly or indirectly, from their choices. Second, individuals, as well as society as a whole, often enjoy wealth, comfort, improved health, and other benefits that are derived from the reduced circumstances of others. A robust view of personal responsibility would also require that those who enjoy these benefits pay their fair share for them when they have not adequately compensated others for them otherwise (for example, in the marketplace).

Consider the issue of health care, and the debate over whether it should be publicly subsidized.

A typically American conservative position on this issue is that health care should be a free market commodity, because it should be a reward for honest work and its contribution to society. If people are personally responsible for their own actions, then when they do their fair share and work hard, they earn the right to access health care. The market is the mechanism, therefore, that limits access to health care to those people who have contributed to society through work. People who do not do their fair share, on the other hand, should not get health care as a freebie, coercively paid for by wage earners via taxation. If people feel like freely donating health care to the poor, fine and good, but they should not be forced to do so.

I sympathize with that position to a limited degree. I now work in the health care industry and see people who I have good reason to believe are gaming the system, quite often, in fact. (I address this issue in another recent essay.) When people cheat the system, I agree, they are often doing the wrong thing, but, I think, not necessarily. Consider this example: a pair of aging parents find their nest egg, carefully scrounged together through a lifetime of hard work, suddenly threatened by the wife’s recent diagnosis of breast cancer. These parents may be faced with this set of choices: a) let the wife die without treatment, b) pay for the treatment, wiping out the life savings with which they would have paid for their retirement and the care of their children, or c) hide their assets to access free public health care assistance. These parents may feel justified in making the third choice, since they feel that their primary moral duty is to save the life of a spouse and to care for their children, that their lifetime of hard work contributed enough to society to earn the moral right to this public assistance, and that they do little wrong gaming a system made corrupt and expensive by greed and political chicanery. I, for one, would find it difficult to condemn such a choice, and in some circumstances, might agree that it’s the most morally justifiable choice.

In my work in the medical office, as well as in my years in the work force, I’ve seen far more situations that bear a close resemblance to the hypothetical one I presented (closely inspired by a real-life one) than to simple cheating out of greed or laziness. I work for a good doctor, who is the only local one in his specialty to see low-income patients on public health care assistance. (The reimbursement rates from many public health care assistance programs are very, very low, and physicians’ offices have a hard time keeping their doors open at all if they accept many patients with that insurance.) Therefore, our office cares for many of the working poor as well as the suspected cheaters. Every day, I see elderly people who carry the signs of a lifetime of hard work, as well as people who currently work long, hard hours for little pay, whose health care is paid for through taxation because they can’t afford it otherwise. And I think: that’s how it should be.

That’s because all of us enjoy the benefits that come from the hard work of so many low-income people. We get to eat plentiful, cheap food because other people toil long hours with little pay in fields, restaurants, and factories. We get to wear comfortable, well-made clothing and stuff our wardrobes to a degree that no-one but the wealthiest of aristocrats used to enjoy, again, because others labor in miserable, boring, depressing conditions for practically nothing. I live in Oakland’s Chinatown, where I am surrounded by the hardest-working people I’ve seen in my life, other than the (largely immigrant and children-of-immigrant) people I worked with in the food industry, and these people, too, receive pitiful remuneration for the vast contributions they make to your life and mine.

When you and I pay a few cents for an apple, or a few bucks for a shirt, or a couple hundred for a computer, we do not pay our fair share, to my mind. The market may have driven prices and wages down, but when we’ve purchased those things, we’ve only fulfilled our part of the bargain between the buyer and the seller. We have not, however, fulfilled our personal responsibility towards all those other people who made our wealth possible. We have paid for our own life of comparative wealth and ease in an exchange that buys a life of privation for another.

So when you and I buy that cheap apple, that cheap shirt, that cheap computer, our decision to do so creates an economic situation in which many other people earn poor wages. And those poor wages, in turn, mean that people can’t afford to buy health care, or indeed, enjoy those benefits of society that their work makes possible in the first place. In the long run, it’s our fault, even if indirectly, that other people can’t buy health care, because this situation arises as a consequence of our own choices, our own actions. And this is only one example in which individual actions cause collective action problems. Other examples are pollution, overpopulation, natural resource depletion, systemic racism, traffic jams…. The list goes on and on.

So here’s a question with which I would challenge those who don’t like to feel responsible, or to hold other people responsible, for such collective action problems, including so many American conservatives: why is it that you should be personally responsible for your economic well-being by choosing to do your part and work hard, but you should not be held personally responsible for the consequences of your choices in the marketplace for others who work hard? As the example we’ve already considered shows, we can readily follow the chain of consequences from our own market choices to their collective impact on the lives of others. People, out of self-interest, choose to pay less for food if they can, usually without questioning why it’s cheap. But food is generally cheap because wages are low (in combination with improved technology, which can increase efficiency; but sometimes, new technology means workers have to compete with it, again lowering wages). Individual choices to buy cheaper produce cause wages to be low: the buyers benefit from the reduced circumstances of others. And health care, even in more efficient, less corrupt systems than ours, tends to be expensive, because of the high cost of educating doctors and of research and development, and because it’s labor-intensive (each doctor’s visit often requires a significant input of time to be effective), so low-wage earners usually cannot afford adequate health care. Therefore, our personal decision to buy cheap produce causes many others not to be able to afford health care. Why, then, would we not be held to any level of responsibility for the consequences of our actions when it comes to access to health care?

We already accept the idea of personal responsibility for individual contributions to collective action problems in many other areas of life. In order to enjoy the legal right to drive, for example, we’re required to purchase driver’s insurance. That’s because our own decision to drive can have debilitating and fatal consequences for others, even if they are entirely accidental. Almost no-one intends to maim or kill another when getting behind the wheel, yet we accept that when we choose to drive, we are still personally responsible, in one way or another, for what happens as a consequence. We also accept that since we desire and enjoy such benefits and freedoms as the right to go our way unmolested by other people, to vote, to travel on public roads and bridges, and so on and so forth, we are responsible for contributing to those institutions that solve collective action problems, such as the military, the police, infrastructure, and the legal system, through our tax contributions and otherwise.

As intelligent social creatures, human beings have conceived and developed societies organized according to and supported by robust conceptions of personal responsibility, demonstrated by such human products as morality and law. Instead of operating primarily from a ‘me and mine’ outlook, the most successful and long-lasting, and, I argue, the happiest persons and societies operate from a predominantly ‘us and ours’ mentality, with the ‘me and mine’ enjoying even greater benefits than pure self-interest could produce. (The earliest Christian communities adopted this influential philosophy and practice, with great success and to their great credit; consider the tale of Ananias, who, out of greed, did not contribute the same percentage as others towards the welfare of all. Contrast this with later incarnations of the Church, which retained the rhetoric and abandoned the practice of equal personal responsibility for, and equal enjoyment of, the public good.)

In sum, a robust view of personal responsibility leads us to act more responsibly in our day-to-day actions and, in turn, to behave generally in ways that have the best outcomes. We come to act as Immanuel Kant’s categorical imperative would have us do, to ‘Act only according to that maxim whereby you can, at the same time, will that it should become a universal law’. When each of us realizes that our day-to-day actions often have not only immediate and personal but wide-reaching consequences, our behavior changes. And when we wish that the consequences of our actions be beneficial, or at the least not harmful, our behavior changes for the better, our imagination expands, and the world becomes a richer and safer place for us all.

But My Brain Made Me Do It!

Brain illustration from ‘The Principles and Practice of Medicine…’ by W. Osler, 1904, public domain via Wikimedia Commons

There’s a common idea which leads many people (myself included) to instinctively excuse our own or others’ less-than-desirable behavior because we were under the sway, so to speak, of one or another mental state at the time. This is illustrated especially clearly in our justice system, where people are routinely given more lenient sentences if they were under the influence of strong emotion or of compromised mental health at the time the crime was committed. “The Twinkie Defense” is an (in)famous example of the exculpatory power we give such mental states: Dan White claimed that his responsibility for the murder of two people was mitigated by his depression, which in turn was manifested in and worsened by his addiction to junk food. We routinely consider ourselves and others less responsible for our wrong actions if we suffered abuse as children, or because we were drunk or high at the time, or we were ‘overcome’ with anger or jealousy, and so on.

But when we think about it more carefully, there’s something a little strange about excusing ourselves and others in this way for doing wrong. What we’re saying is, in a nutshell, “But my brain made me do it!”

It’s strange because no matter what we do, our brains always ‘made us do it’.

Perhaps these kinds of excuses are a relic of the ‘ghost in the machine’ variety of traditional belief, in which we are a kind of meat-machine body ‘possessed’ by a soul. In that view, the strong emotion or mental illness, then, could be yet another kind of possession which overrides the rational and moral soul.

Yet, even if one believes that sort of explanation, that’s not a very satisfying justification of why we would accept some mental states as exculpatory and not others. Why does extreme anger, such as that of a wronged lover or a frustrated driver, excuse or partially excuse an attack, but lust does not? Both are powerful emotions that all too often promote the worst behavior. Why is jealousy so often considered an acceptable excuse, but greed is not? And the ramifications of this view go beyond relieving ourselves from the burden of responsibility: it means we can’t take or give credit for our good actions, for the most part. Emotions and states of mental health inspire and make us capable of doing those things, too, in just the same way. But we generally don’t tend to have the same sort of intuitions or apply the same sort of reasoning to credit as we do to blame.

So whatever theory or philosophy of mind we subscribe to, we need to explain why we are so inconsistent on this issue. Of course, much of it is explained in cultural terms: in an honor-driven culture, for example, anger and jealousy result from an affront to one’s honor, so to feel those emotions is just and right for an honorable person. A crime committed under the sway of these emotions, then, is mitigated by the justness and rightness of the feelings, even if they inspired wrong action. On the other hand, greed and lust are not considered a just or right emotional reaction in any case in a culture underpinned by fairness, equality, and individual rights; therefore, any crime committed under their influence would, of course, be contrary to those values and not be mitigated.

Yet culturally derived explanations, of course, aren’t the same as justifications. They just explain why people in different times and places happen to react and feel the way they do; they don’t offer a justification of why anyone should accept one emotional state and not another as a mitigating factor, nor do they explain why we should think some emotions somehow make one less responsible or less able to control their actions than other emotions do.

The latter, of course, is an empirical question: a neuroscientist may be able to detect that the processes that produce some emotions make it impossible or at least highly unlikely that a person can engage in rational thought, or ‘put on the brakes’, so to speak, when certain provocations occur. But until we find out otherwise, it appears evident, from the fact that most people are generally cooperative and don’t purposefully harm one another, that adult people above a certain basic intelligence level are generally capable of forming good and responsible habits, which makes it unlikely that they would react wrongly or criminally when provoked or titillated. This is especially true when the people involved in their upbringing, and the society in which they live, expect good behavior from all, and hold people responsible for bad behavior.

This is true whether or not behaving in the right way is easy. Many of the excuses offered in defense of people who do wrong sound to me simply like evidence that it was harder for the person to behave well than to behave badly at the time. Yet mores and laws don’t exist because it’s always easy for people to get along, to respect one another, to help one another, and to avoid doing harm. They exist because it’s often hard to be a good person and a good citizen. So many of these excuses, then, do much to illustrate why mores and laws need to exist, and not so much to demonstrate why the offender was less responsible for their own behavior at the time.

The reasons that we can hold people responsible for their own actions, whether or not they occurred in an emotionally stormy moment, are the same reasons that people can be admired and given credit for them. The acts and thoughts which we judge praiseworthy as well as blameworthy are those which the person could conceivably have chosen to do otherwise, even if we grant it’s unlikely that they would have chosen otherwise, and which the person performed as themself. Personal responsibility is a burden, but even more so, it’s an honor. It means that what you do is you, in a very important sense, since the mind is the author and seat of consciousness, and all of its activity is a form of doing. We, in the sense of being a person, a self, are what our brain does.

The brain is not like a pre-determined computer program; within certain parameters, it can be molded and formed, by influences from others but even more so by our own choices, which over time form habits. So it’s up to each one of us to use our judgment, surround ourselves with good influences, and form good habits: in any given day, in any given life, each person is faced with myriad options in thought and behavior. For important matters, we stop and reflect, but there are simply too many to judge each one carefully; most of the time, it’s best to purposely form good habits so that in the countless reactions we have and choices we make, we’ll tend, more easily, to go for the better rather than the worse.

There are, of course, special circumstances to consider in matters of wrongdoing or crime committed by the young, or by a person with a debilitating mental illness, or by a person mature in age with undeveloped mental capacities. All of these involve some diminished or absent capacity for exercising judgment in making a choice, or a reduced degree of consciousness. Young people, for example, lack the structures of the physically mature brain which make it capable of making considered decisions and of putting the brakes on powerful-emotion-driven impulses. The prefrontal cortex, where much of the capacity to exercise self-control resides, doesn’t fully develop until after puberty. It makes sense, then, that we generally don’t hold the young as responsible for their actions in the way we do adults. Yet, with all we know about how the brain works, I find it astonishing and often horrifying that in the United States we often try the young as adults, teens and even pre-teens, when they commit particularly heinous crimes. I’d argue not only that they are incapable of controlling their emotions and of reasoning as fully as adults are, and therefore shouldn’t be considered responsible in the same way, but that the very heinousness of the crime is evidence of the lack of maturity, of the ability to make rational judgments, which forms the basis of any coherent concept of personal responsibility. The trying of youths as adults in the courts reveals that all too many people haven’t given enough thought to what personal responsibility really means, and don’t have the proper respect for it.

Since it’s always your brain that makes you do anything, culpability should be assessed according to whether or not your brain is capable of making a different choice, again, even if it’s unlikely you would have done so. That holds true even in many cases of so-called ‘temporary insanity’ or ‘acting under the influence’. Generally, the brain of an adult person who maintains their own survival and enjoys the liberty of an independent adult is also functioning at a level sufficient for responsibility. For example, if you run someone down in your car while drunk or texting on your phone, you are probably also a person capable of arranging for a taxi to take you home from the bar or of refusing that last drink, or are aware of the widely published evidence we now have that texting is strongly correlated with auto accidents.

In sum: the issue of personal responsibility should not hinge on whether or not it was easy for us to make one choice, to behave one way instead of another, but on whether we ourselves, always the product of a living brain, are capable of doing otherwise.


*This piece, originally published Aug. 6, 2014, was edited lightly on Aug. 5, 2016 for clarity and flow.

On Morality: Objective or Subjective?

The Good Samaritan by Jean de Jullienne, 1766, public domain via Wikimedia Commons

Is morality objective, or subjective?

If it’s objective, it seems that it would need to be something like mathematics or the laws of physics, existing as part of the universe on its own account. But then, how could it exist independently of conscious, social beings, without whom it need not, and arguably could not, exist? Is ‘objective morality’, in that sense, even a coherent concept?

If it’s subjective, how can we make moral judgments about, and demand moral accountability from, people of times, backgrounds, belief systems, and cultures different than our own? If it’s really subjective and we can’t make those kinds of moral judgments or hold people morally accountable, then what’s the point of morality at all? Is ‘subjective morality’ a coherent concept either?

Take the classic example of slavery, which today is considered among the greatest moral evils, but until relatively recently in human history was common practice: could we say it was morally wrong for people in ancient times, or even two hundred years ago, to own slaves, when most of the predominantly held belief systems and most cultures supported it, or at least allowed that it was acceptable, if not ideal? Does it make sense for us to judge slave owners and traders of the past as guilty of wrongdoing?

From the objective view, we would say yes, slavery was always wrong, and most people just didn’t know it. We as a species had to discover that it was wrong, just as we had to discover over time, through reason and empirical evidence, how the movements of the sun, other stars, and the planets work.

From the subjective view, we would say no: we can only judge people according to the mores of their time. But this is not so useful, either, because one can legitimately point out that the mere passage of time, all on its own, does not make something right become wrong, or vice versa. (This is actually a quite common unspoken assumption in the excuse ‘well, those were the olden days’ when people want to excuse slavery in ancient ‘enlightened, democratic’ Greece, or in certain pro-slavery Bible verses.) In any case, some people, even back then, thought slavery was wrong. How did they come to believe that, then? Were the minority’s objections to slavery actually immoral, since they were contrary to the mores of their own society, of most groups, and of most ideologies?

Morality can be viewed as subjective in this sense: morality is secondary to, and contingent upon, the existence of conscious, social, intelligent beings. It really is incoherent to speak of morality independently of moral beings, that is, people capable of consciousness, of making and understanding their own decisions, of being part of a social group, because that’s what morality is: that which governs their interactions, and makes them right or wrong. Morality can be also viewed as subjective in the sense that moral beliefs and practices evolved as human beings (and arguably, in some applications of the term ‘morality’, other intelligent, social animals) evolved.

Morality can be viewed as objective in this sense: given that there are conscious, social beings whose welfare is largely dependent on the actions of others, and who have individual interests distinct from those of the group, there is nearly always one best way to act, or at least very few, given all the variables. For example, people thought that slavery was the best way to make sure that a society was happy, harmonious, and wealthy. But they had not yet worked out the theoretical framework, let alone gathered the empirical evidence, showing that societies that trade freely, have good welfare systems, and whose citizens enjoy a high degree of individual liberty are in fact those that end up increasing welfare the most, for the society as well as for each individual. So slavery was always wrong, given that we are conscious, social, intelligent beings, because as a practice it harmed human beings in all of these aspects of human nature. Slavery is destructive to both the society and the individual, but many people did not have a reasonable opportunity to discover that fact, other than through qualms aroused by sympathetic observation of so much suffering.

In sum: it appears that many arguments over morality, in which people accuse each other of being ‘dogmatic’, or of ‘moral relativism’, or hurl various other accusations (I think) carelessly at each other, are due to a basic misunderstanding. To have an ‘objective’ view does not necessarily entail having a fixed, eternal, essentialist view of morality which does not allow for moral evolution or progress. Likewise, to have a ‘subjective’ view of morality does not entail thinking that ‘anything goes’, or that morality is entirely relative to culture, religion, or belief system. Here, as is the case with so many important issues, simplistic, black-and-white explanations do not lead to understanding, nor to useful solutions to life’s most pressing problems.

* Also published at The Dance of Reason, Sac State’s philosophy blog


Healthcare: A Matter of Public Interest, or a Consumer Good?


Looks like this uneasy compromise between private moneyed interests and our national commitment to the life and health of our citizens is off to a rocky start, and I’m afraid it may fail.

I wish Obamacare were the title of a single-payer national healthcare system, or at the very least, a public option (freedom to choose!). Healthcare should be considered essential infrastructure like roads and bridges, or a national defense of the citizenry like the military, not a mere consumer or luxury good. This is because it is the hyper-social, cooperative side of human nature, where we band together to protect and nourish each other, that makes us a successful species. This, in turn, is what makes the other freedoms we enjoy possible. Remember, we have no freedoms when we’re dead or incapacitated. Human liberty requires human cooperation in order to exist at all.

That is, if we consider preserving human life of greater importance than the license to grab as much money as we want regardless of the harm we cause doing so.

In An Argument, Give Your Opponent the Benefit of the Doubt and You Will Always Win

…in the sense that you will be more likely to win your opponent’s trust and respect, your own arguments will be better, and you will surely learn something, even if you fail to convince the other.

Among the feedback to a recent essay (a critique), I came across this sort of statement: ‘well, what do you expect? Of course, the author’s a so-and-so, and therefore, you can expect them to be full of it.’

That’s pure intellectual laziness, not to mention empty bigotry. Here’s one of the single most valuable lessons I’ve learned over the years, and practiced and honed in my return to university: always, always listen to your opponent’s arguments carefully and respectfully, and examine them from the viewpoint that they might be right. You will then be in a position to actually understand the argument. And, if it turns out to be wrong, then you will really understand why, and your rebuttal will be more likely to be a quality one, less prone to fallacies.

Generosity pays off for everyone in the end.

Hate Crime: First Amendment Issue?


Not too long ago, perhaps three or four years past, I was of the opinion that, as a whole, the idea of a ‘hate crime’ was a bad one, mainly as a result of the following argument:

According to the law, we determine the nature of a crime by what was actually done. If we re-classify it as a hate crime, we’re punishing the criminal for the very thoughts in his/her head, or the content of their speech. At the very least, this is a violation of First Amendment rights. At worst, we’re legitimizing the Orwellian idea of ‘thoughtcrime’.

Upon reflection, however, I realized that this argument misses the point of what I think is the most important reason that some crimes should be classified as hate crimes. When the law is applied to an act to determine its criminality, we already do consider the motivations and thoughts of the actor in the case. For example, if one person causes the death of another, we ask whether the act was purposeful, whether it arose from a moment of extreme provocation or planning, and so on. In other words, intent, which is what was going on in the actor’s mind at the time, is essential for determining the criminal nature of the act.

One of the main reasons for this (why, for example, we consider intentional, deliberate violent crimes worse than off-the-cuff or accidental violent crimes) is how much of a threat the criminal presents to the community. The law treats a person who kills someone out of anger upon catching them cheating with a romantic partner, for example, as far less dangerous to the community than a person who plans and then executes a shooting spree in a public place. A person who kills someone after planning the crime ahead of time also presents a larger danger to the community than the ‘heat of the moment’ killer, since they reveal themselves capable of killing at least one person even after sustained reflection. While the danger is still mainly confined to a single target, the killer is still a potential threat to the wider community in this sort of case, since they might become homicidally angry at someone else.

A person who commits a hate crime, however, presents a wider danger to the community because their intent or wish to harm is not aimed at a single target. The target of their hate or anger is an entire class of people, as the evidence of their own expressed intent and beliefs reveals. The harm that they do, or intend to do, or wish to do, is likely to be far more widespread.

In this way, the way the law determines whether or not a crime is a hate crime is very similar, or even nearly identical, to the way it determines whether a homicide is first-degree murder, second-degree murder, or manslaughter. I think this sort of deliberation is necessary and appropriate, and therefore, I think that the separate classification of hate crime is likewise appropriate. We just need to be careful, as a society, that we don’t become hasty or overzealous in applying the term to thoughts and speech alone, or to philosophically or morally repulsive but relatively harmless actions.

* Also published at The Dance of Reason, Sac State’s philosophy blog

Morality Evolves, Thank Goodness! Or, ‘Survival of the Moral-est’

What is a moral community? To whom do we owe love, respect, allegiance, and caring, and why? What does it mean to be ‘good’?

As communications technology progresses at an exponential rate, we’re all coming into closer and more constant contact with people from all over the world. One result of this: some clashes between people of disparate cultures and belief systems seem particularly violent and extreme, though such partisan violence is commonplace throughout history. Yet we’re also cooperating as a worldwide community as never before, adapting practices and beliefs, with an increase in tolerance and mutual respect between people who might have had trouble finding enough common ground for fruitful interaction in times past. Since we can now see the faces, hear the voices, and observe the lives of people far away as if they’re next door, we identify with them more, perceive them no longer as abstractions but as people like ourselves, and come to care for them nearly as much, and sometimes just as much, as people who happen to live their whole lives geographically near to us. Conquest, colonialism, ethnic cleansing, eugenics, holy war: all of these violate moral beliefs now common throughout the world. We seem, as a whole, to be enlarging our moral communities.

So what binds these moral communities, and where do the morals by which they live come from?

As a direct result of this new virtual cosmopolitanism (in a sociological rather than a philosophical sense), many are now convinced that morality is relative to cultures and belief systems. All ethnic and cultural groups have moral systems, and while most contain a few common prohibitions and values (child murder is wrong, health is good), all vary on at least some points, and some widely (women should / should not make important decisions independent of men; sexual liberty is / is not good).

While the relativistic view of morality is understandable and generally comes from a generous spirit of tolerance, I believe this theory is, for one thing, ultimately of little use in solving the problem of how the worldwide community is to live together. If morality is entirely relative to belief system, for example, how can we be justified in claiming that a man does wrong when he kills his wife over an accusation of adultery, if his belief system teaches that this is right? How can we be justified in claiming that a trader in finance does wrong when she gambles her clients’ life savings away, when the business culture she works in operates on the premise that this is the right way to do business? A moral theory which explains the nature and workings of human morality needs to demonstrate that it works, that it offers compelling answers and workable solutions to such challenges, in order to qualify as a candidate for a true theory. If the global human community wants some firm moral grounds on which to promote human flourishing, we need to look elsewhere than moral relativism.

Most importantly, the current evidence from the findings of clinical psychology and other disciplines that study human behavior just doesn’t appear to support the theory of moral relativism.

Thomas Aquinas

Another moral theory is moral realism, which many believe is the only acceptable alternative to moral relativism. Mores (moral conventions or laws), according to the moral realist, need to be fixed, immutable, and eternal in order to be true or binding. A highly influential, widely accepted version of this view was thoroughly described and explained by the theologian and philosopher Thomas Aquinas in the 13th century, which in turn is an interpretation of Aristotle, the great philosopher and logician of 4th-century BCE Greece. Aquinas argues that morality is built into reality and can be discovered in the “natural law”. Natural law is the totality of the observed, predictable workings of the universe: just as the law of gravity is an unchanging feature of the universe, so is the moral law, and to discover either, we need only carefully observe the world without and within us. To put Aquinas’ view most succinctly: we can derive the ‘ought’ directly from the ‘is’. This is a compelling theory, in my view, reassuring in its promise to deliver concrete and universal results.

David Hume

Yet David Hume, the great Scottish skeptic philosopher, famously overturned this view in the 18th century, pointing out that there is no direct logical connection between the ‘is’ and the ‘ought’. To say something is the case is not the same as saying it should be the case.

One example that illustrates why I think Hume is right is the set of complex issues regarding human reproduction. As Aquinas, modern evolutionary biologists, and indeed most of us would agree: human beings, like all living creatures, generally have strong instincts to mate, and the biological equipment which most individuals have makes it so that the result of frequent mating is the production of offspring. You could even say that we mate because we are equipped to make offspring, both physically and instinctively.

Here’s the presumed logical connection between the ‘is’ and the ‘ought’ in this case: Aquinas (and many modern evolutionary biologists) would say that because human beings generally reproduce, one of the main purposes, if not the primary purpose, of the human race, and therefore of all individual human beings, is to reproduce. Since we’re generally equipped to make offspring, both physically and instinctively, individual human beings should have sex only to reproduce.

‘But wait a minute!’ one might say, with Humean skepticism. The scientific evidence reveals to us that at least half, and probably more, of all offspring who are conceived are naturally aborted by the mother’s body well before birth, sometimes because of genetic defects, the current state of health of the mother, or some other reason. And that’s just before birth. In some places in the world today, and especially in Hume’s day, a very large proportion of all children born die before the age of five because they have no access to effective treatments for most diseases. If you put that number together with stillbirths and natural abortions, you end up with a very, very large number of unsuccessful reproductions, a majority, in fact.

So if you try to derive the ‘ought’ from the ‘is’ in the case of human reproduction, you can just as well end up, logically, with the weird result that since attempts to successfully create offspring are usually unsuccessful, well, then, human beings ought not to reproduce! Most would find this conclusion not only weird but unacceptable, I think for the very good reason that people generally place a high value on continuing the human species and on the individual’s right to decide whether or not to have children.

Now, to be fair to Aquinas, it’s still the case that, despite so many failed instances of reproduction, the human race generally is successful at reproduction. Add to that, some evolutionary biologists say, the fact that all living beings evolved some sort of reproductive capacity, and it could still be argued that every being that has a choice should try to reproduce, whether or not individuals fail. But what if you belong to a species where reproduction is so successful that if everyone reproduces, the species as a whole is threatened by overcrowding? Or, as in the case of a highly social species such as humans, the young do very well in a community where there are plenty of individuals around who don’t have offspring of their own? In fact, human beings (unusually) far outlive their mating years, and some biologists believe this is a survival mechanism: children do better when they also have grandparents to help rear them. Aunts and uncles, cousins, and friends fulfill the same roles in society, as auxiliary parents and as overall contributors to the flourishing of the human race as a whole.

Perhaps Aquinas was only partially mistaken, and instead should have said that we should place a high moral value on the reproductive function of the human species as a whole: maybe having children is one among many virtuous choices we can make. He did allow that some instances of refraining from procreation are good (he was a celibate monk, after all!), but he also used procreative-instinct arguments to say that any and all sex acts that don’t involve reproductive intent are immoral. This, to me, represents an attempt to derive a universal moral law, then apply it to get a predetermined result: people who don’t fulfill their ‘procreative purpose’ when they do have sex do wrong, and people who don’t fulfill their ‘procreative purpose’ when they don’t have sex do right. (Hmmm… did he accidentally end up a moral relativist here, in this matter at least?)

This is only one of countless examples where we find that morally committed, good people disagree on the fundamentals of what constitutes goodness and virtue. We find that this disagreement is among individuals and communities not only across space, but also across time. A classic example of this in Middle Eastern and Western cultures is the contrast between the moral precepts of the Old and the New Testaments, and also between the morals in the whole Bible compared to the morals of today. According to the New Testament, after all, it was still acceptable to own slaves and prevent women from speaking in church, which would be morally impermissible according to the mores of most societies and religions today.

It could be that the moral relativist and the moral realist theories are both implausible. Maybe morality is neither relative nor immutable: maybe it evolves. Maybe you can say something is true about morality just as you can say something is true about the human species itself: although the human race is neither immutable nor eternal, it still evolved. Yet human beings all descend from a common ancestry and are identifiable in that they share in a distinctive spectrum of traits, so the criteria for being human are not relativistic either.

I think that human sexuality, in fact, also provides an excellent example of not only how a species, but how morality itself, evolves.

Originally, for human beings as for most other creatures, sex evolved for the purposes of reproduction, but over time, as our brains got bigger and our behavioral, emotional, and cultural capacities became more and more complex, sexuality began to be expressed for other purposes as well. As our ancestors became more social and formed larger and larger supportive groups, the young were better protected and better fed and, therefore, were more likely to survive. The pressure for individuals to reproduce as often as possible was greatly reduced. Human beings (like our large-brained and social cousins the bonobos and dolphins) also began to enjoy sex for its own sake and re-purposed it: using it to court potential mates, to express friendship, love, and dominance, and for recreation, politics, and so forth. Some of these purposes of sexual expression have, over time, come to be recognized as beneficial, some have not, and some are still debated.

Photo: Wikimedia Commons
The Warren Cup

Just as sexuality evolved, so have the moral precepts surrounding sexuality. With the exception of certain groups who seek to enforce a traditional sexual moral code (usually for religious reasons), over the centuries and millennia sexuality has come to be appreciated as a much richer and more complex domain of human social relations. As we look back in history, a favored few in society enjoyed a more liberal set of sexual mores (Greek and Roman elite males enjoyed gay sex without fear, and European nobility and royalty not only regularly enjoyed the company of paramours, they were expected to), but as societies became more democratic and open, these liberal moral codes were extended to the general public. Overall, people are now more prone to judge the morality of sexual behavior according to principles of consent, honesty, reciprocity, and respect, rather than a traditional list of prohibitions.

Here’s a story of how the larger evolution of morality could have happened:

In prehistoric times, human social groups were very small, since survival depended both on strong social cohesion to feed and protect everyone, especially the young, and on not exhausting natural resources. Over time, as the body of human knowledge increased and new technologies were created and perfected, human societies grew to include larger numbers of people. So new adaptations, institutions, and practices arose to create and cement human solidarity in these new, larger groups, which were less homogenous, even though many general characteristics still tended to remain the same (skin tone, hair color, bodily morphology, etc.). These adaptations, institutions, and practices included languages, epic stories, religions, national and ethnic identity, cultural practices, political affiliations, and so on. All of these were constantly being invented, were growing, changing… evolving. And as we’ve already seen, the size of our social groups is growing as rapidly as the communicative capacities of technology. To a cosmopolitan (in the philosophical sense this time), it’s likely only a matter of time before the human race will and should consider itself one large community, with a commitment to upholding basic shared moral principles, though particular or localized secondary moral systems could add their own restrictions or requirements.

Morality is not immune to evolution, and I don’t think it needs to be in order to be understood in terms of true and false. Again, many things change over time, often drastically, and still can be correctly or incorrectly described and referred to. Not only do I think that morality evolves, I’m also glad that it does, comparing some ancient moral codes to some modern ones. Again, we can look to the Bible for examples of this: consider the ancient Biblical endorsements of slavery and genocide, and compare them with their modern near-total rejection. And the Old Testament notion that women and children were chattel who could be killed for any number of transgressions fills most people today with righteous horror.

These and other changes in moral convictions could be entirely attributed to conditioning, of course. Yet such conditioning, which informs the behavior of most individuals, can, perhaps, constitute a form of social evolutionary pressure over time. Whatever the precise mechanism(s), when I consider what history and archaeology tell us about moral attitudes over time, and when I put that together with the fact that human beings evolved from small-brained, non-moral creatures, it seems that morality must have evolved too.

For an organism to evolve, it must be a dynamic system, composed of multiple parts that can be added, subtracted, or changed. If morality evolves, it appears that it likewise can’t be reducible to a single foundational principle. If it’s a traditional monist system, there’s no room or impetus for change, since there’s only one continuous element or substance that determines the nature of the subject at hand. It’s partly for this reason, and partly based on other evidence (such as how humans actually make moral decisions), that I suspect that human morality is actually a pluralist system. Our moral judgments result from balancing various norms against one another, combining, elevating, or rejecting one or more depending on the situation at hand. For example, most cultures place a high moral value on personal integrity, reciprocity, mercy, punishing the guilty, love, protecting the innocent and vulnerable, and more, and consider at least a few of these while making each moral judgment. Many of these values, I think, are simply not reducible to a more basic principle or value.

Photo: Wikimedia Commons
The Code of Hammurabi

So where do these moral values come from? How do we justify judgments based on them? How do we know how, when, and to whom to apply them?

Morality not only appears to be based on more than one value or principle, it also involves more than one type of mental process or cognitive tool. Daniel Kahneman provides one account: a fast, instinctive, emotional process, and a slower, reflective one; Daniel Dennett provides a related explanation of the human mind, originally less capable, enhanced by a ‘toolkit for thinking’. We can apply such a tiered or multilevel system to this story of moral evolution. One level is instinctive or more basic, and this is where morality appears to originate. The social instincts, such as empathy and cooperation in humans and other intelligent social creatures, belong to the set variously described as Kahneman’s ‘system one’ or as Hume’s ‘passions’. The slower ‘system two’, Aristotle’s and Kant’s ‘reason’, is the part of us that self-consciously reflects on our own emotions and thoughts. So far as we know, only human beings engage in this sort of multi-level mental activity. What the research of social psychologists such as Jonathan Haidt reveals is that most of us, most of the time, make our moral judgments quickly and emotionally, and only justify ourselves afterwards, selecting those arguments that best support our own case and neglecting other concerns. This evidence favors Hume’s theory of ‘passion’-centric morality over Kant’s almost wholly rationalistic theory. Yet we do apply rationality to create, universalize, and enforce a more consistent, regularized moral system on communities, to the benefit of most.

So the basic, foundational instincts that fostered increased cooperation and a drive toward reciprocity were, over time, bolstered by conscious reflection and perfected, enforced, and made more sophisticated through culture. That sounds to me a little bit like the selective, ‘ramping-up’ process of evolution by natural selection!

The evolution of morality can be illustrated by analogizing our moral instincts to genetic mutations, and the use of our slower reasoning process to the selective pressure that allows the instincts to be enacted, or overrides them, in order to create a system that best leads to our flourishing. Consider racism and ethnic hatred, instincts that, even today, seem unhappily all too pervasive. The instinct to bigotry might still be a part of our ‘moral DNA’, so to speak, arising as it probably did in the aforementioned circumstance of reinforcing solidarity in small communities struggling to survive. But over time, as we’ve seen race and ethnic hatred lead to suffering and mass slaughter that need not have occurred, the selective pressure of reason, aware of the lessons of history, overrides these ancient instincts and motivates us to value the widely beneficial attitudes of empathy, tolerance, and a sense of shared dignity instead. We used our reason first to regularize moral instincts into rules that apply to the community at large instead of just to the beneficently inclined. Then, we used our technology to widen the spheres of our moral communities, and these spheres keep widening as moral communities are absorbed into ever larger ones.
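For readers who like the analogy spelled out mechanically, here is a minimal, purely illustrative sketch in Python (every disposition name and payoff below is invented for illustration, not drawn from the essay): randomly arising ‘instincts’ play the role of mutations, and a crude ‘reason’ step plays the role of selective pressure, retaining only those dispositions that do not lower the flourishing of the community as a whole.

```python
import random

# Purely illustrative toy model of the mutation/selection analogy in the text.
# All dispositions and payoffs are invented for the sake of the sketch.
random.seed(0)

# Assumed effect of each candidate disposition on group flourishing.
EFFECTS = {"empathy": +2, "reciprocity": +3, "tolerance": +2, "bigotry": -4, "aggression": -3}

def random_instinct():
    """A new disposition arises, like a mutation: blind to its consequences."""
    return random.choice(list(EFFECTS))

def reason_selects(moral_code, candidate):
    """'Reason' as selective pressure: keep the candidate only if adding it
    does not lower the community's overall flourishing."""
    current = sum(EFFECTS[d] for d in moral_code)
    return current + EFFECTS[candidate] >= current

moral_code = []
for generation in range(20):
    candidate = random_instinct()
    if reason_selects(moral_code, candidate):
        moral_code.append(candidate)

print("Dispositions retained after selection:", sorted(set(moral_code)))
# Harmful instincts like 'bigotry' keep arising but are filtered out;
# beneficial ones accumulate, which is the structural point of the analogy.
```

The point is only structural: blind variation supplies the candidates, and a filtering step, however imperfect, is enough to make the retained set drift toward group-beneficial dispositions over time.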

Over time, we have developed a concept of goodness as that which fosters human flourishing, those instincts, guided and perfected by a natural-selection-like process and by reason, that inspire nurturing and just behavior on the largest scale possible. We can credit goodness, that expanded, instinct-derived and rationally-perfected sense of justice, reciprocity, and beneficence, as the driver of the human race’s ever more cosmopolitan sense of morality.