Introducing the Zombie Meditations Reading List (pt.1)

The reading list has been up now for a week or two, and I figured it would be worthwhile to provide a synopsis of each work and explain how each fits into what might loosely be considered the ‘ZM worldview’. This might actually be one of the most efficient ways of making clear just what “the ZM worldview” is.

The first three books, The Way of Men and Becoming a Barbarian from Jack Donovan sandwiched around Tribe by Sebastian Junger, all deal with the theme of tribalism—more specifically, with the idea that individualism has actually made modern life more anxious, more depressive, and more abrasive by corroding the tribal bonds that we’re wired for.

In his own words, Sebastian Junger made the decision to write Tribe after:

“I was at a remote outpost called Restrepo. … It was a 20-man position, everyone sleeping basically shoulder to shoulder in the dirt at first and then in these little hooches.

And it was very intimate, very close, very connected, emotionally connected experience. And after the deployment, which was — the deployment was hellish. And afterward the deployment, a lot of those guys missed the combat and they didn’t want to come home to America.

What is it about modern society that’s so repellent even to people that are from there? And my book “Tribe” is an attempt to answer that question.” [1]

If you resonated with the themes of Fight Club, you’ll appreciate these books for taking a more prolonged and serious look at them. As Chuck Palahniuk put it:

“We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war … our Great Depression is our lives.” – Fight Club

Sebastian Junger is a rather more mainstream figure than Jack Donovan (more than one of his books became an international best seller, and he’s been Oscar–nominated for his work as a documentarian), and his book adds several historical and modern case examples to the picture. For example, he starts off with a discussion of the hundreds of early American settlers captured by American Indians who, upon release, preferred to return to those tribes rather than rejoin their own societies—and points out that even after the colonies resorted to instituting harsh punishments in the early 1600s to deter settlers from running away, hundreds continued to do so regardless.

“ … modern society is a miracle in a lot of ways, right? But as affluence goes up in a society, the suicide rate tends to go up, not down. As affluence goes up in a society, the depression rate goes up. When a crisis hits, then people’s psychological health starts to improve. After 9/11 in New York City — I live in New York — and after 9/11, the suicide rate went down in New York, not up. It went down. It improved. Violent crime went down. Murder went down. There was a sense that everyone needs each other.” [1]

Junger grows lazy right around the point where he starts discussing solutions, however. On the one hand, he says that the Internet “is doubly dangerous for all — and, again, for all of its miraculous capacity, not only does it not provide real community and real human connection. It gives you the illusion that it does, right? … what you need is to feel people, smell them, hear them, feel them around you. I mean, that’s the human connection that we evolved for, for hundreds of thousands of years. The Internet doesn’t provide that.” He also admits that “humans lose the ability to connect emotionally with people after a certain number, right at 150. So there is a limit to the number of people we can connect with and that we can feel capable of sacrificing ourselves for if need be.”

Yet, when he goes on to suggest actual solutions, he sounds like this: “I think the trick — and this country is in a very, very tricky place socially, economically, politically — I think the trick, if you want to be a functioning country, a nation, a viable nation, you have to define tribe to include the entire country, even people you disagree with.”

But talk to any member of the Armed Forces, and if the conversation goes on for long enough, nearly without exception they’ll make it very clear that the “Tribe” they were really inspired to fight for wasn’t ‘America,’ it was the men standing right next to them. And it’s beyond obvious that a country is made up of more than Junger’s limit of 150 people. What Junger proposes is every bit as distant from our tribal evolutionary nature as modern society is, and if we could just define that core nature out of being a problem so easily, we wouldn’t be in this mess in the first place.

That’s where Jack Donovan comes in with a far more rigorous discussion of what’s needed to replace what we have now. In an essay titled Tribalism is Not Humanitarianism in which he takes Junger to task directly, Donovan writes:

Junger thinks that while life in the modern West is safe and comfortable, some of the reasons returning soldiers find themselves yearning for the war, and why even civilians who have lived through conflict or disaster find themselves remembering their ordeals, “more fondly than weddings or tropical vacations,” may be because they miss being in a situation where their actions truly mattered, and people helped each other out. He writes:

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary. It’s time for that to end.”

This is absolutely true, and I’ve written basically the same thing about the general ennui of men in modern post-feminist nations — where their natural roles are performed by corporations and government institutions, and they are reduced to mere sources of income and insemination.

Junger says people in modern society are missing a sense of tribe, and he’s right, but he stops short of either truly understanding or being willing to address the totality of what it means to be part of a tribe. The elements of tribalism that make it fundamentally incompatible with pluralism and globalism go unmentioned or unexamined in Tribe, and what remains is a handful of vague, disappointing bromides about not cheating and treating political opponents more fairly and helping each other so we don’t “lose our humanity.”

… Tribalism is defined as, “strong loyalty to one’s own tribe, party, or group.” Tribal belonging is exclusive and the camaraderie and generosity that Junger admires in tribal people is functional because the group has defined boundaries [my emphasis]. Tribal people are generally not wandering do-gooders. Tribal people help each other because helping each other means helping “us,” and they know who their “us” is. In a well-defined tribal group where people know or at least recognize each other as members, people who share voluntarily are socially rewarded and people who do not are punished or removed from the group. Reciprocity is also relatively immediate and recognizable. There is a return for showing that you are “on the team.”

Junger writes that, “It makes absolutely no sense to make sacrifices for a group that, itself, isn’t willing to make sacrifices for you.”

He recognizes that American tribal identity, to the extent that it ever existed, is collapsing internally when he warns that, “People who speak with contempt for one another will probably not remain united for long,” and remarks, “…the ultimate terrorist strategy would be to just leave the country alone.”

… The nationalistic pluralism of the early United States was a project started by men with similar religious beliefs (however ready they were to die over the details). Those men also shared a similar European cultural and racial heritage. They all looked alike, and while they came from different regions, each with its own quirks and sometimes even a different language, they were all the genetic and cultural heirs to a broader Western heritage that stretched back to the Classical period.

American pluralism was initially a pluralism within limits, but those limits were either so poorly defined or relied so heavily on implied assumptions that membership was progressively opened to include anyone from anywhere, of any race, of any sex, who believes absolutely anything.

… People in Western nations haven’t “somehow lost” the kinds of ancestral narratives and common cultures that unite people and encourage them to look out for each other and stick together. Those cultures and narratives have been systematically undermined by the institutions of Western governments, in favor of a “multicultural” approach that better serves the interests of globally oriented corporations.

That brings me to another book on the list: Ricardo Duchesne’s The Uniqueness of Western Civilization. Uniqueness examines the systematic undermining of these American cultural–historical narratives by academia in rigorous detail, and exposes the flaws in the alternative, revisionist, “multiculturalist” approach that has taken its place. Advocates of the latter camp—who predominate in American universities, and are celebrated in the national media—argue that the extraordinary wealth and power acquired by Western society came late, as a result of nothing other than luck, and will therefore necessarily only be temporary. Duchesne, in contrast, places heavy emphasis on the cultural and intellectual life of the populations the founders descended from—the Indo–European, horse–riding nomads of the Pontic–Caspian steppes, whose culture represented a uniquely aggressive combination of the libertarian and aristocratic spirits.

In April of 2016, the students of Stanford University voted 1,992 to 347 against an effort to restore the college requirement for courses in Western Civilization. In a hostile article in the Stanford Daily, a student writing under the name ‘Erika Lynn Abigail Persephone Joanna Kreeger’ complains that “Stanford is already a four-year academic exercise in Western Civilizations … In her first lecture of the macroeconomics section of Econ 1, the professor rhetorically asked why Africa — yes, Africa — was so poor, and answered … [by failing to] mention colonialism, occupation and capitalism as driving forces in the creation of poverty…,” concluding that “… a Western Civ requirement would necessitate that our education be centered on upholding white supremacy, capitalism and colonialism, and all other oppressive systems that flow from Western civilizations….” and suggesting instead that the University should require “courses that will make students question whether they should be the ones to go forward and make changes in the world…”

It would be interesting to see students with opinions like Ms. Kreeger’s take a look at the actual correlation between Western colonialism and African wealth. (See here.) The per capita GDP in Haiti—which achieved its independence from France in 1804—was about $1700 in 2013. Meanwhile, blacks in South Africa, the most colonized part of Africa by far, have a per capita GDP of about $5800. (See here.) There’s also really no correlation between the wealth of nations and how deeply those nations engaged in colonization. (See here and here.) The prevalence of these self–deprecating condemnations of Western Civilization as bearing blame for — yes, literally — the entire world’s ills is exactly why a corrective like Duchesne’s is so necessary.

War! What is It Good For?: Conflict and the Progress of Civilization from Primates to Robots by Ian Morris, Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth by Peter Turchin, and A Country Made By War: From the Revolution to Vietnam—The Story of America’s Rise to Power by Geoffrey Perret all address the same theme through the specific lens of war.

War! What is it Good For? takes a very broad historical approach to the subject, starting all the way back in pre–history and showing that at every single step towards civilization, people have only ever banded together in larger groups (whether individuals into tribes, or tribes into states, or states into countries) because they had to do so in order to overpower or defend themselves against another, outside, larger group. Referencing Steven Pinker’s research in The Better Angels of Our Nature, which shows that the world is actually becoming less violent over time (you were much more likely to be clubbed on the head by a neighbor or invaded by a neighboring tribe in a tribal society than you are today under a state government to be killed by an opposing state in a war or murdered by a neighbor), he expresses his thesis in an aphorism: “War made the state, and the state made peace.” In other words, just as tribes suppress violence between tribesmen because they need to be internally cooperative in order to successfully defend themselves against outside tribes, so states suppress violence between their citizens because they need to be internally cooperative in order to successfully defend themselves against outside states.

To put it another way, the need for violence (in certain contexts) is actually the only reason human beings have ever historically tried to suppress violence (in certain other contexts). Morris also explains that the development of agriculture was so central to the evolution of Western society because it raised the stakes of war: with the development of a system of food production which tethered people to fixed resource bases, suddenly war meant that you could capture territory—and that outsiders were interested in capturing yours. Thus, the need for agriculturalists to band together in groups to protect their collective properties became instrumental in the evolution of the political structure of the West.

Peter Turchin’s Ultrasociety argues exactly the same point, but with a different spin that makes it entirely worth reading both books. Discerning readers may have noticed that there seems to be a tension between the ideas discussed in the first part of this essay and the apparent implication of Ian Morris’ work that the evolution of tribal societies into states is inevitable. Peter Turchin brings this tension full–circle, in his own words:

“Here’s how war serves to weed out societies that “go bad.” When discipline, imposed by the need to survive conflict, gets relaxed, societies lose their ability to cooperate. A reactionary catchphrase of the 1970s used to go, “what this generation needs is a war,” a deplorable sentiment but one that in terms of cultural evolution might sometimes have a germ of cold logic.

At any rate, there is a pattern that we see recurring throughout history, when a successful empire expands its borders so far that it becomes the biggest kid on the block. When survival is no longer at stake, selfish elites and other special interest groups capture the political agenda. The spirit that “we are all in the same boat” disappears and is replaced by a “winner take all” mentality. …

Beyond a certain point a formerly great empire becomes so dysfunctional that smaller, more cohesive neighbors begin tearing it apart. Eventually the capacity for cooperation declines to such a low level that barbarians can strike at the very heart of the empire without encountering significant resistance. But barbarians at the gate are not the real cause of imperial collapse. They are a consequence of the failure to sustain social cooperation. As the British historian Arnold Toynbee said, great civilisations are not murdered – they die by suicide.”

Rounding out the “war” portion of the “world history” section of the list, “A Country Made By War” is more a mytho–poetic narrative of America’s conflicts than an argument for a specific thesis—though it does emphasize the ways in which war has impacted daily civilian life: for example, it was World War I that established the trend of men’s use of wristwatches and safety razors. From an entirely different angle, A Country Made By War helps round out an understanding of how deeply war impacts our daily lives.

Next under the heading of “history”: A Farewell to Alms, in which economic historian Gregory Clark takes a close look at one of the most significant turning points in human history (and of course, again, Western society): the Industrial Revolution. Why did it happen when it happened?

The conventional arguments have always focused on institutional economic conditions, with claims that markets became freer, property rights became more secure, and so on and so forth. But Clark shows that none of this is true—in fact, markets were even freer before the period of time in which the Industrial Revolution took place.

But what did happen is that “Thrift, prudence, negotiation and hard work [became] values for communities that… [had been] spendthrift, impulsive, violent and leisure loving.” In other words, the traits of the upper class spread to the lower classes. As a result, people began to save instead of spend—and that is why capital accumulation finally began to exert its exponential increases on human productivity.

However, this shift in values didn’t happen due to cultural transmission—it didn’t happen because the rich began a campaign of propaganda directed at the poor, or because the poor just decided to start being more prudent. It happened because the offspring of the upper classes were replacing the offspring of the lower classes in the population.

What difference between Europe, India, and China explains why the Industrial Revolution happened in the former instead of either of the latter two? According to Clark’s extensive research, the cause was the fact that Europe was much more Darwinian—the upper classes weren’t replacing the lower classes anywhere near as quickly in either India or China over the same period of time.

Since we know that several traits relevant to the ability to delay gratification in pursuit of one’s goals and act with foresight—like conscientiousness, or the tendency to procrastinate, or impulsiveness—are all heavily influenced by genes, the conclusion we end up with is that we literally owe the greatest explosion of wealth and productivity that mankind has ever seen to what essentially amounts to a process of eugenics. 

Find the claim distasteful if you want, but the evidence Clark presents gives incredibly strong reason to believe that it is true. Clark’s thesis is harsh—it suggests that if not for what was essentially a lot of innocent people dying off, the First World wouldn’t have become the First World. But Clark’s case is overwhelming. His follow–up, The Son Also Rises: Surnames and the History of Social Mobility (not included in the list), examines two points: first, he shows just how little social mobility there actually is in any society on Earth—it’s much less than most of us would have expected. Second, he shows that the tendency of the descendants of a particular familial lineage to regress to a ‘mean’ of social standing peculiar to that family, despite temporary dips and jumps in each generation, lines up exactly with what we would predict on the assumption that genetic inheritance is responsible for the larger bulk of this phenomenon. He even tracks the heritability of social status over the course of the Maoist revolution in China and the transition from the leftist Allende government in Chile to the right–wing Pinochet administration, and finds that none of these upheavals had any impact on it at all. That’s not exactly the most exciting track record for anyone who thinks they can make different groups of people become more equal through social engineering.
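To make the shape of that claim concrete, here is a minimal sketch of the pattern Clark describes—my own toy model, assuming a simple two–layer structure (a persistent latent family component plus transient luck), not Clark’s actual methodology:

```python
# Toy model (my own sketch, not Clark's code): observed status each generation is
# a persistent latent family component plus transient luck. Observed status dips
# and jumps, but keeps regressing toward the family's latent level, which itself
# decays only slowly toward the population mean (0).
import random

def simulate_lineage(generations=8, persistence=0.9, start_latent=2.0):
    latent = start_latent  # underlying family "social competence", in SD units
    for gen in range(generations):
        observed = latent + random.gauss(0, 0.5)  # this generation's visible status
        print(f"gen {gen}: latent {latent:+.2f}, observed {observed:+.2f}")
        # The latent component erodes toward the population mean far more slowly
        # than the generation-to-generation swings in observed status would suggest.
        latent = persistence * latent + random.gauss(0, (1 - persistence**2) ** 0.5)

simulate_lineage()
```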

Update 6/23/2016

New Stuff (6/16/2016)

◙ Two new posts.

Si Vis Pacem, Para Bellum at Right On
and Wising Up to Uncle Tim at Counter–Currents.

◙ Also some new music.

Three new songs up in a collection of ‘rough drafts’ at Soundcloud.

◙ While everyone tries to draw extravagant moral lessons from the sentencing of Brock Turner, let’s see how many people even find out about Chantae Gilman—a woman who broke into a man’s home and raped him in his sleep, only to be sentenced to nine months for “attempted rape”. People never hear about these “reverse” cases. [Source]

◙ Someone else (a professor) has tackled the claim that “right–wing extremists” kill more U.S. citizens than Islamic terrorists, which I addressed in a couple of different articles of my own (here and at Counter–Currents) recently. “If you include the death totals from 9/11 in such a calculation, then there have been around 62 people killed in the United States by Islamic extremists for every one American killed by a right wing terrorist…” At that rate, Muslims are killing about 1,860 times more people in terror attacks per capita than “right–wing extremists” are. [Source]
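For what it’s worth, here’s the back–of–the–envelope arithmetic behind a per–capita figure like that—a sketch in which the 62:1 death ratio comes from the quoted source, while the population sizes are my own illustrative assumptions:

```python
# Sketch of the per-capita calculation. The 62:1 raw death ratio is quoted above;
# the population figures below are illustrative assumptions, NOT from the source.
raw_death_ratio = 62          # deaths by Islamic extremists per 1 by right-wing terrorists
muslim_population = 3.3e6     # assumed U.S. Muslim population (~1% of ~330M)
comparison_population = 99e6  # assumed base population for "right-wing" deaths (~30x larger)

# Per-capita rate ratio = raw death ratio scaled by relative population sizes.
per_capita_ratio = raw_death_ratio * (comparison_population / muslim_population)
print(f"~{per_capita_ratio:,.0f}x more terror deaths per capita under these assumptions")
```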

◙ This site is offering a raffle giveaway of $500 “unbreakable denim” jackets to anyone who signs up for their newsletter (giveaway is at the end of July). Check out the video of them trying to damage the material.

◙ A new book was added to the Zombie Meditations Reading List:


The gist is that if you have to build your incentive structures around the fact that most of the people you’re building them for are bad people with bad motivations, then you can crowd out good people who actually have good motivations from that system entirely. This is one of several books I would recommend to someone to support the thesis that thinking about economics (either from the right–wing angle of giving people the right sticks and carrots, or the left–wing angle of raising everyone out of poverty) can’t be a replacement for thinking about people (both in terms of culture and in terms of biology, the former of which is always partially an epiphenomenon of the latter).

◙ I’d like to take credit for coining the term “Toxic mosquealinity.”

Feel free to steal it and use it for something.

◙ Current mood:


Personal Updates, and a Message to Readers (5/30/2016)

I’ve recently had articles published at both Counter–Currents and Right On:

  1. “Calling for a Nazi / Social Justice Warrior Alliance” (Counter–Currents)
  2. “A Study in Anti–White Media Lies: Are Right–Wing Extremists More Likely to Kill You Than Muslim Terrorists?” (Counter–Currents)
  3. “Revealed: Black Lives Matter is a KKK Plot” (Right On)

The first essay is a simple repost of an essay that was first posted here. The second essay took an essay first posted here and made it substantially more concise and to–the–point, while referring back to the original post if the reader wants more elaboration. Even though they didn’t go up until nearly the end of the month, my two essays placed #9 and #19 in the top 20 most–viewed articles of the month (out of 65)!

Finally, the third essay relies on data established in posts here on Zombie Meditations, and cites them where relevant, but rhetorically transforms them into something that is totally new.


I think the method I employed here is one that will work well for me in the future, because I have a compulsive need to sperg out and obsess over crunching all of the facts and numbers whenever I first dig into any new topic (and that’s actually the only reason the blog ever came into existence in the first place). But most people aren’t going to find that particularly interesting. However, I’m also just not great at thinking a topic through logically and thinking about how to make my discussion of it emotionally engaging to an audience at the same time.

So, if I can first write the spergy posts out far enough to feel I can justify presenting myself as someone who has the right to talk about the subject, then I can use those posts as citations for other posts where I focus on painting an aesthetic onto the data and on forming a voice that’s more compelling for a reader to actually listen to.

It’s a little bizarre that the posts that I’m becoming “known” for at this early stage revolve around race, because outside of writing, race isn’t something I’ve ever thought about on a day–to–day basis. I’ve been asking myself why my writing has shifted in that direction, and I think the answer is that it’s simply a coincidence resulting from the fact that race is a topic where (a) there is far more sheer nonsense to go around—nonsense whose status as nonsense shouldn’t even be open to debate—than there is in most other topics; and (b) there are far fewer people around willing to address it and explain what’s wrong with it. My “advantage”, as I see it, isn’t that I have any particular genius so much as that I’m just brave enough to attach my name to the discussion.

But in a world where people are as afraid to do that as they are now thanks to things like this:

I think we’ve reached the point where there is—unfortunately—plenty of value in that alone.

Off–topic, ThatGuyT has plenty of things to say about the event referred to above as well:

“What’s the next logical conclusion? That’s right: I become aggressive. I’m seeing that you’re ready to fucking assault me, so I’m not going to sit here and try to talk to you, I’m going to get you to back the fuck up. And that’s how conflicts in black communities start. It starts with one person going fucking overboard going crazy as shit, and then the next person has to go crazy as shit because they’re in fear of their life from this aggressive psychopathic motherfucker. Granted, it didn’t happen here, but let this be a situation between two black people in pick whatever urban metropolitan city that you want, on the streets—what would be the next step? Fighting, shooting, somebody gets stabbed, somebody gets stomped out,  … When you’re being so fucking irrational, so fucking threatening, how do you expect the other party to react? Now we’re getting into the core issues that Black Lives Matter usually talks about: what is police brutality? … Imagine that someone comes up to a cop acting like this. What the fuck is going to be the reaction? … In many, many, many cases, it is the alternate party, which COMMONLY is black people, escalating the fucking situation for no goddamn reason. … I’ve seen this shit plenty of times in my own community. I live in Atlanta, so I see this shit 24/7.”

In any case, I want to point out to my patrons that this is a free post that I’m not uploading to Patreon for any pay, as are all three of the above essays posted to Counter–Currents and Right On. I’ve worked hard to establish a reputation that lets you know that if you sign on as a patron, I’m going to do everything I can to make sure you’re getting your money’s worth, and I’m not going to charge for anything composed of mere opinions that you could get from a conversation with me on the phone, or from my Facebook feed; nor will I charge twice when essays are redesigned to be sent to other places if I think they’re essentially the same thing you’ve seen before (even though the essay posted to Right On, in particular, is a substantial change of form from anything you’ve seen here before).

The only kinds of posts I’m ever going to charge for are long, in–depth posts that took a lot of time and thought or research to produce, like this essay investigating the relationship between poverty, out–of–wedlock birth, and crime, or this essay responding to the claim that neuroscience has refuted the existence of free will—which, word for word, ended up being exactly as long as the book whose claims inspired it (Sam Harris’ Free Will, at 13,000 words). So if you’d ever like to help compensate me for time spent rewriting essays for other websites to expand my reach, you can do that with a single donation here (or via the “Donate” button on the sidebar to your right); and if you’d ever like to sign on as a long–term patron to support the creation of more research–based essays, you can do that here (or by clicking the fancy double–exposure image of a hand reaching up through trees on the sidebar to your right).

Every single dollar I get from Patreon really will go a long way. Now that I live in the north Georgia mountains, my cost of living is way down, but so are job opportunities. So especially now that I have a child on the way (!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!), every single dollar really will increase my ability to spend time researching subjects that interest me and writing to carry the knowledge I’ve picked up on to others, rather than doing other kinds of work. If you factor in the amount of time I spend reading and learning to build up a repertoire of understanding before I even think about writing on a topic myself (just take a glance at my reading list) as unpaid labor, then the actual hourly pay for this line of work is dismal. I’m doing everything I can—with just over a year of writing behind me now—to turn it into something I can survive and help raise a child on. So if you’ve gained any sort of value from the work I’ve put in here, please consider chipping in whatever you can!

I’ve recently sold all the other pants that I own and purchased a single pair of Vertx Phantom Ops Airflow tactical pants with the money, because (a) every single pair of pants I owned was literally getting ripped to shreds, to the point that even the patches were falling apart, and (b) I wanted to find something durable that would last without needing to be replaced for a really long time. My verdict: these things are incredible. The pockets feel indestructible (and have pockets inside of them), and the mesh handles the Georgia heat incredibly well. So if you’d like to help me out and know what your money is going towards, I’m currently trying to replace my wardrobe of old clothing with durable work clothing or tactical clothing from places like Duluth Trading and 5.11 (I really want these boots) that I know I won’t have to worry about replacing any time soon. There’s a serious chance I’m going to end up hunting snakes for meals to survive out here. I’m not asking for delicacies…

Consciousness (IX) — Freedom is a State of Mind (On Benjamin Libet and Sam Harris)

Bad philosophy can corrupt conclusions that are drawn seemingly straightforwardly out of scientific experiments. “Scientism” is one of those words that functions sort of like “cuck” or “RepubliKKKan” or “Christfag” in that using it often does more to signal allegiance to a group than it does to help progress conversations towards truth. In this essay, I want to give a very clear definition to the word “scientism”, followed by a very clear demonstration of a place where it does exist.

(Note: The theme of “scientism” was recently introduced in “Breaking (Down) Bad (Philosophy of Science)”).

“Scientism” is when someone: (1) conducts a scientific experiment, producing empirical results; (2) strains that empirical data through a lens of interpretation – a philosophical lens that requires philosophical defense or refutation – to produce what is ultimately more a philosophical claim than a purely scientific one; and then (3) pretends that this resulting claim involves no philosophy at all, and thus needs no philosophical defense, but has the full backing weight of the authority of Science Itself behind it, and is thus beyond any further argument. In this way, the fallacy of “scientism” allows philosophical premises to get smuggled in past security, and then pretends that the truth of those premises was thereby proven.

Nowhere do we find a clearer demonstration of this fallacy than in the experiments which are claimed to have “empirically” disproved the possibility of metaphysical free will.

Now, the supposedly ‘scientific’ debunker of free will may have valid philosophical reasons for rejecting the possibility of metaphysical free will’s existence. This possibility I will leave aside for now—the target of my argument is the false notion that his science, in and of itself, proves his philosophical conclusions true. The problem is that this only ever appears to him to be the case because he has interpreted his empirical findings through the filter of some philosophical assumptions, rather than others, in the first place—without owning up to it. 

In other words, what makes someone who commits the fallacy of scientism a fraud is that he first claims to be able to convert water into wine, and then, when asked to demonstrate this magical ability, quietly pours wine instead of water into his water bottles (and is oblivious to the fact that he is doing so). When you pour wine into water bottles, it’s no surprise that, after your demonstration of a magical chant, you end up with water bottles full of wine—and it’s no surprise when you filter a scientific finding through philosophical assumptions that, after your argument is finished, you end up with something that “justifies” those philosophical assumptions’ truth. In neither case has anything actually been “demonstrated.”

In short, the only reason anyone can think that any “scientific” experiments so far have ever “scientifically” debunked the possibility of free will is because they actually have philosophical reasons for believing free will doesn’t exist which they aren’t owning up to honestly. These reasons may or may not be ultimately defensible, but if someone is trying to tell us that a scientific experiment has settled the question, they are simply smuggling their philosophy in past security illegitimately. In truth, the “scientific” experiments that have been conducted supposedly on the question of free will add nothing to the philosophical debate, and they have done more to distract us from the central questions than anything.

 _______ ~.::[༒]::.~ _______

Before continuing, I need to establish what I mean when I talk about “free will.” Specifically, I need to make it clear that despite the protests that may come from some, I am going to talk about the sort of “free will” that says that right up until the moment in which I make a free conscious decision, nothing in the previous physical state of the Universe determines what my choice is going to be; and at the moment in which I make my choice, I determine what that decision will be.

Whether or not this is the sort of “free will” that most of us feel as if we experience is an empirical question. And the question of whether this is how our conscious experiences feel is separate from the question of whether we actually do have this type of freedom.

The term for views which admit that this is the kind of freedom that we feel as if we have is “incompatibilism”. “Compatibilists”, by contrast, argue that the only kind of “freedom” that we either do want, or should want, is the kind of “freedom” involved when I choose to do what I want to do because I want to do it; and not, say, because someone is holding a gun to my head—even if my decision and my desire were absolutely set in stone and determined all the way back at the moment of the Big Bang, like ever so many falling dominoes.

While compatibilism basically names a single homogeneous position on the question of free will, “incompatibilists” are split into two enemy camps: those who believe that we do have this significant kind of freedom (called “libertarians”), and those who believe we do not (called “hard determinists”).

In my view, while hard determinists are at least honest about the fact that their claim has reason to be unsettling to many ordinary people (because we do feel as if we have the power to make determining choices that are not, themselves, determined, and something about how we see what it means to be human will in fact be disturbed if this is all just one big illusion) and are willing to step up to the plate and argue that the consequences are worth it, “compatibilists” are simply hard determinists who try to weasel out of owning up to and defending themselves in light of these consequences by ignorantly denying—against the protests of anyone who claims otherwise—that anyone cares about the kind of freedom that would come from being able to make “metaphysically free” decisions at all.

The very fact that libertarians and hard determinists exist is all it actually takes to prove the compatibilists wrong: how can you claim that nobody really cares about the libertarian sort of free will when both people who agree with your underlying determinism and people who don’t are telling you that, as a matter of fact, they do care about it?

If that straightforward reasoning wasn’t enough, empirical investigations seem to have settled the question of whether this is how people feel once and for all. In the 2010 study Is Belief in Free Will a Cultural Universal?, Sarkissian and colleagues examined ordinary people’s “intuitions about free will and moral responsibility in subjects from the United States, Hong Kong, India and Colombia.” Their results proved conclusively that outside of the isolated halls of philosophy departments, the “compatibilist” take that no one cares whether their choices are determined or not is not the norm: “The results revealed a striking degree of cross–cultural convergence. In all four cultural groups, the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism….” Sarkissian concludes that this research reveals “fundamental truth(s) about the way people think about human freedom.”

Again, a hard determinist can describe the way that our conscious experience of decision–making feels just as clearly, accurately, and honestly as any libertarian, even as he turns around to deny that we actually have the kind of freedom we feel as if we have. In Free Will and Consciousness: A Determinist Account of the Illusion of Free Will, Gregg D. Caruso writes: “[C]ompatibilists cannot simply neglect or dismiss the nature of agentive experience. … [O]ur phenomenology is rather definitive. From a first–person point of view, we feel as though we are self–determining agents who are capable of acting counter–causally. … (W)e all experience, as Galen Strawson puts it, a sense of “radical, absolute, buckstopping up–to–me–ness in choice and actions”. … When I perform a voluntary act, like reaching out to pick up my coffee mug, I feel as though it is I, myself, that causes the motion. We feel as though we are self–moving beings that are causally undetermined by antecedent events.”

So why does Caruso conclude that things cannot be as they seem? Quoting from a review, the problem with belief in free will is that it is “committed to a dualist picture of the self. … [And it, therefore,] involves a violation of physical causal closure (pp. 29-42).”

In other words, the argument that free will is impossible rests on the claim that defending a dualistic view of consciousness in general is impossible. Notice that this is ultimately a philosophical argument, and not one that is supposed to be proven as the direct conclusion of a scientific study. In fact, Caruso begins addressing these considerations as early as page 15, while he doesn’t begin to mention the scientific studies which are supposed to have addressed the subject until somewhere past page 100. Caruso’s account is one in which someone cannot believe in free will “without embarrassment” because believing in it would require “giving up … atomistic physicalism”.

As usual, the advocates of “atomistic physicalism” make no attempt to shoulder the burden of demonstrating that the hypothesis that human conscious experience is composed of nothing other than blind atoms which themselves lack conscious experience and act blindly only as a passive response to inert causes could even conceivably be capable of allowing human conscious experience to be what it is—to put it in my terms, their claim is the equivalent of claiming one can draw a three–dimensional figure on a two–dimensional board. Instead, they are content to just demand that one can’t possibly deny that hypothesis “without embarrassment” and then chop off anything about the nature of our experiences which that hypothesis isn’t capable of explaining—no matter how debased and absurd the resulting picture of what it means to be a human being becomes.

Yet, as we’ve seen, the things we would have to chop off to make that hypothesis work end up including everything—because conscious experience quite simply couldn’t exist in the way that it irrefutably does if the “atomistic physicalist” were correct that the Universe at its root is made out of blind particles and forces, and nothing else, in exactly the same way that three–dimensional objects couldn’t exist if the world were a two–dimensional sheet.

My contention is that the only sane position one can hold is that consciousness itself is one of the things that the Universe is composed of “at its root” as well, and that we are free to posit that consciousness simply possesses properties like experientiality and intentionality as basic elements of what consciousness is, in exactly the same way that we are free to posit that electrons simply possess properties like spin and charge as basic elements of what an electron is—with no need of further explanation. All supposed ‘explanations’, after all, must stop somewhere. On the contrary, it is the “atomistic physicalist” who should be embarrassed to put forward the claim that one could even conceivably get qualitative subjective experiences, or intentionality, out of blind building blocks wholly lacking in either quality.

The existence of free will, unlike these, can at least coherently be denied in theory. But the arguments for throwing out the possibility of free will are identical to the arguments for throwing out intentionality, or subjective experience—and the existence of these features of consciously experienced reality can’t be denied without blatant incoherency. Thus, the arguments used to deny the possibility of free will fail in general, even if they would not otherwise fail in the specific case of free will itself—and there remains no absolutist reason to deny the possibility that metaphysical free will could exist after all. The only remaining question, then, is whether further considerations happen to rule out the existence of human free will specifically.

 _______ ~.::[༒]::.~ _______

The story of supposedly “scientific” refutation of the possibility of free will begins in the 1980s with a series of studies conducted by Benjamin Libet. Though now more than three decades old, these experiments still constitute the bulk of the “scientific” analysis of the implausibility of free will.

In Sam Harris’ 2012 book Free Will, he writes:

“The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move. Another lab extended this work using functional magnetic resonance imaging (fMRI): Subjects were asked to press one of two buttons while watching a “clock” composed of a random sequence of letters appearing on the screen. They reported which letter was visible at the moment they decided to press one button or the other. . . . One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You then become conscious of this “decision” and believe that you are in the process of making it.”

Daniel Wegner was one of the most prominent social psychologists known for continuing the line of experiments aiming to prove this general sort of idea. In his discussion of Libet’s experiments in his 2002 book The Illusion of Conscious Will, he explains the picture of the conscious mind’s role in reality that he believed the Libet experiments were able to prove:

“Does the compass steer the ship? … [not] in any physical sense. The needle is just gliding around in the compass housing, doing no actual steering at all. It is thus tempting to relegate the little magnetic pointer to the class of epiphenomena — things that don’t really matter in determining where the ship will go. Conscious will is the mind’s compass.”

In other words, determinists who agree with Harris and Wegner believe that preceding unconscious brain events are the cause of both our future behaviors and our later, illusory feeling of “choosing” those behaviors. It isn’t just that our experiences of choice are determined; it’s that they’re completely superfluous to the chain of events that leads to the actual execution of action—to them, the brain activity that can be spotted 300ms before you “decide” to flick your wrist in Libet’s experiment would cause you to flick your wrist even if it didn’t cause you to feel like you were “deciding” to flick your wrist as an incidental step along the path towards that destination. To them, it isn’t just that our will isn’t “free” when it causes our actions—it’s that our will doesn’t cause our actions at all.

_______ ~.::[༒]::.~ _______

If anyone should allow himself to really sink down into reinterpreting his moment–by–moment experiences in light of this idea, he will soon realize that it is an excellent recipe for producing the pathological state known as depersonalization. Indeed, according to these people, what a dysfunctional person in the depersonalized state experiences is actually a far closer reflection of reality than what the rest of us experience all the rest of the time. I think we should keep it very clearly in mind that what is at stake here is whether or not science has proven that a pathological state which tends to come comorbid with other pathologies like major depression and schizophrenia reveals fundamental truths about the reality of human consciousness that the rest of us live in illusory denial of.

To repeat the explanation in my words, the Libet–type experiments first have a subject sit down in front of a clock, while hooked up to an EEG (or fMRI). Then, they explicitly instruct that subject to perform some simple motor activity at random. Absolutely nothing is at stake in the decision; there is no goal to achieve, there are no values or variables to weigh or choose between, and no number of button presses or wrist–flicks is too high or too low. There is no way to “win,” there is no way to “fail,” and there are no alternative outcomes in the experiment for the subject to pick between. With absolutely no goals or constraints, subjects in these experiments are told to sit back and perform a perfectly purposeless motion at random for which they have absolutely no reason in principle to choose one moment over another.

Stop right there.

Keep this fact very clearly in mind: we’re using this study to evaluate free will.

Now, ask yourself: does this sort of scenario even seem relevant at all to free will?

Let’s get back into the first–person position on these experiments.

If you agree to join in Libet’s experiment, what are you going to feel?

Imagine I have just told you to repeat Libet’s experiment—that I’ve just said to you: “I want you to sit back, and whenever you feel like it, I want you to flip your wrist over. Then, I want you to do it again. And keep doing it until I tell you to stop.”

What is that going to feel like?

It is immediately obvious that this does not even feel like an exercise of free will.

In fact, it may have felt like an exercise of free will to decide whether or not to join Libet’s experiment at all, or else spend my day doing something else instead. But once I’ve sat down and consented to follow Libet’s instructions, what does my mental activity consist of?

It consists, primarily, of waiting. For what? An urge to move my hand.

To do what? To appear.

In other words, when I sit down and consent to follow Libet’s instructions, I have already made the conscious decision to place myself into a specific, and very peculiar, state of consciousness. I have cleared my mind. I am focusing all of my conscious attention onto my hand. And it is as if I’ve consciously chosen to initiate an automated “program” which orders my subconscious to generate the sensation of an urge to move—at random—while simultaneously holding the intention to act on that sensation, after it appears. I have made the decision to set myself into this state of consciousness, and I am actively holding myself in it for the purposes of this experiment.

Is it not precisely part of my very experience itself that in a case like this, a sensation that feels like a spontaneous “urge” does in fact appear before I make the decision to move?

Of course it is.

So is it any surprise at all to find that brain activity of some sort can be found flickering prior to the time at which I consciously register making the decision to flip my wrist? I don’t think it is. In fact, I think generalizing from a case like this to the conclusion that our decisions in general are determined by subconscious processes before we ever feel as if we’re deciding to make them is downright goddamn idiotic. Sheer introspection alone leads us to expect that we would see brain activity appear prior to our decision to flip our wrists over, because participating in Libet’s experiment would feel exactly like placing myself in the conscious state of waiting for a particular kind of sensation to surface into my conscious awareness before acting.

Libet’s experiment would feel like that. Ordinary exercises of what we feel to be our free will to decide do not. So the simplest conceptual analysis of what would happen in an experiment like the one Libet designed is already enough to establish that these experiments quite simply have no bearing on the matter of free will at all.

So here is the crux: when the Libet study’s interpreters decide to label the preceding brain activity as “the subject’s soon–to–be ‘consciously willed’ decision in a deterministic process of turning into a ‘decision’ under the surface, outside of the subject’s conscious mind” rather than “the urge the subject has consciously ordered his subconscious to randomly generate, appearing exactly as cued”, that is not science. It is, in fact, philosophical, in that it makes a call about how to bridge the subjective aspects of our first–person experience with the outward results of third–person observation—a gap that cannot be crossed by empirical investigation unaided.

And not only is it a philosophical call—it’s a bad one. 

But the fallacy of scientism goes so unchallenged by the modern mind that, for the most part, few people commenting on the Libet experiments have noticed even something this simple, basic, and rudimentary—something that should have been obvious a hell of a long time ago.

 _______ ~.::[༒]::.~ _______

There are plenty of other disqualifying technical problems with Libet’s experiment, besides. For example, Libet was able to determine that the “readiness potential” preceded the decision to act because he programmed a computer to record the preceding few seconds of brain activity in response to a subject’s muscle activity. In other words, brain activity was only recorded in retrospect, when the subject actually moved, and at no other time. What Libet did not do was keep a continuous record of brain activity that could prove a “readiness potential” always produced movement—so from the very first moment, he never had a damn clue how often “readiness potentials” appeared without triggering any muscle movement at all.

Further studies have made it clear that this was, in fact, a significant problem for Libet’s conclusions: in 2015, a team led by Prof. Dr. John-Dylan Haynes created a video game that would have a subject face off against a computer enemy which was programmed to react in advance to the intention to move as indicated by the human player’s “readiness potentials” (Point of no return in vetoing self-initiated movements). If “readiness potentials” were deterministic, the computer would always be able to predict the human player’s movements in advance and would therefore always win. If they weren’t, then the human player would be able to adapt to the computer’s pre–emptive response by changing his plan mid–course.

And the latter, in fact, is exactly what the team found.

“A person’s decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement,” says Prof. Haynes. “Previously people have used the preparatory brain signals to argue against free will. Our study now shows that the freedom is much less limited than previously thought.”
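To spell out the logic of the design, here’s a toy simulation of the ‘duel’—my own sketch with assumed timing values, not the study’s code:

```python
# Toy version of the Haynes "duel" (timing values are assumptions, not the study's):
# the computer fires as soon as it detects a readiness potential (RP). If RPs
# strictly determined movement, detection would always mean the movement follows
# and the computer would always win. The study instead found players could cancel
# any time up to a "point of no return" roughly 200ms before movement.
import random

def duel_round(rp_lead_ms=500, point_of_no_return_ms=200):
    # How much time the player has between seeing the computer react and the
    # moment their own prepared movement would complete (varies round to round).
    warning_ms = random.uniform(0, rp_lead_ms)
    if warning_ms > point_of_no_return_ms:
        return "player vetoes"        # an RP appeared, but no movement followed
    return "movement completes"       # too late to cancel; the computer wins

print([duel_round() for _ in range(5)])
```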

Here’s another problem: in the Libet experiments, the “readiness potential” appeared 550ms (just over half a second) before muscle movement. But here’s what happens if you tell someone to perform a physical action in reaction to a sound: per Haggard and Magno 1999, it only takes 230ms for someone to decide to perform an action in response to a cue. We therefore know that conscious decisions can be made in less than a quarter of a second. And if conscious decisions can be made in less than a quarter of a second, should we really assume that something happening a whole half–second before a decision in some other cases is the neurological determinant of the decision itself?

We shouldn’t.

But what’s interesting about these problems is that they would all have been entirely unnecessary to explore in the first place if anyone had simply paid closer attention to analyzing Libet’s study design conceptually—a simple moment of clarification of some of the most basic philosophical issues at play in an experiment designed like this could have saved us a lot of wasted time. It would have been clear from the outset what was probably going on.

 _______ ~.::[༒]::.~ _______

In the decades since Libet’s original work, has better evidence come along to support his conclusions? Sam Harris immediately followed his statement about Libet with a description of “another lab [that] extended this work using functional magnetic resonance imaging (fMRI)….” The lab he refers to is Chun Siong Soon’s[1], and the summary of the 2008 study published in Nature Neuroscience can be seen here.

While the activity measured in this study was still, as before, purposeless, with no goals or constraints, it did change one substantial thing. According to the way Soon (et al.) summarized their own research—in a summary paper titled “Unconscious Determinants of Free Decisions in the Brain”—

“There has been a long controversy as to whether subjectively ‘free’ decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness.”

The actual point this new study was supposed to add to the already–existent debate was that it was supposed to establish the capacity of these scientific measurements to predict not just the general timing of a single choice, but now in fact which of two—count them, two!—equally meaningless options the subject would pick. And the conclusions we are supposed to draw from this are, again, wide–reaching—returning to the summary from Harris:

“One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You (only) then become conscious of this “decision” and believe (falsely) that you (“you”) are in the process of making it.”

What do the particular new facts drawn by this study really add to the picture?

There is one thing that neither Harris’ reference to this study, nor Soon (et al.)’s own summary of it in Nature Neuroscience, will clearly tell you—quoting Alfred Mele:

“ … the predictions are accurate only 60 percent of the time. Using a coin, I can predict with 50–percent accuracy which button a participant will press next. And if the person agrees not to press a button for a minute (or an hour), I can make my predictions a minute (or an hour) in advance. I come out 10 points worse in accuracy, but I win big in terms of time. So what is indicated by the neural activity that Soon and colleagues measured? My money is on a slight unconscious bias toward a particular button—a bias that may give the participant about a 60–percent chance of pressing that button next.”

Notably, this 60–percent figure is a drop from a predictive value of 80–90% in cases where what is being predicted is the moment chosen to commit a single predefined action, like Libet’s wrist–rotating. Even with the increased understanding of neurophysiology developed over the past handful of decades, and even with refined neuroimaging techniques, the predictive power of the “readiness potential” in this study still immediately drops by 20%—down to little over chance*—with even a slight shift in the design of the experiment towards something that comes just ever so marginally closer to resembling the kinds of decisions in which we actually deliberate—and feel as if we deliberate freely—over a choice. (*Remember, you’d have about 50% accuracy if you were just guessing, so 60% is even less impressive than it sounds at a glance, because you should be comparing that 60% accuracy to a baseline of 50%.)
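To put that figure in perspective, here’s a quick sketch (my own illustration, not from the study) of how modest a 60% hit rate on a two–button task is against the coin–flip baseline:

```python
# How much better than chance is 60% accuracy on a binary choice? (my own sketch)
accuracy = 0.60   # reported predictive accuracy for the two-button task
chance = 0.50     # baseline: guessing between two equally likely buttons

absolute_edge = accuracy - chance       # 10 percentage points above chance
relative_lift = absolute_edge / chance  # only 20% better than flipping a coin
print(f"edge over chance: {absolute_edge:.0%}; lift over guessing: {relative_lift:.0%}")
```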

But yet again, even if the predictive value of the “readiness potential” in these expanded cases were 100%, why should even that have concerned me? When I go into Soon’s laboratory, I walk in deliberately setting the conscious intention, in advance, to sit back and think about nothing other than letting myself push either one or the other button at random. Absolutely nothing weighs on the decision; I am by definition putting myself in the peculiar conscious state of waiting to act on a random urge which I have no reason to care about. Even with this meaningless “choice” between two absolutely meaningless options added to the scenario, it doesn’t even feel like the kind of deliberation in which I seem to possess the power to do otherwise. In the case of Soon’s experiment, just like Libet’s, participating would feel exactly like waiting for some sensation to rise up into conscious awareness out of my subconscious, at which point I have already set the intention to act on it when—meaning after—it appears.

So even a study design like Soon’s would have nothing to say about free will even if it found that it could predict my decision 100% of the time (because perhaps all the brain scans are identifying is the appearance of the impulse–sensation that I’ve walked into Soon’s lab agreeing to sit and wait for). But the meager results of these studies turn out to be even less impressive than that. By far.

 _______ ~.::[༒]::.~ _______

As I said in the opening chapter of this series,

At these stages of argument, it should not be mistaken that I am ever arguing that the reason we should reject a physicalist account is just because it dehumanizes us (in the sense of “making us feel dehumanized,” or at least being something which arguably should). Rather, if a physicalist account should be rejected, it should be rejected first and foremost because it either explicitly denies, or else (by failing to be able to account for them) implicitly denies, some parts of what we really, truly, in fact and in reality, actually are. However, an intrinsically connected component of this picture is that if an account does explicitly or implicitly deny some aspect of what we really are, then believing an objectively impoverished account of the world may lend itself to a subjectively impoverished internal or relational life.

Believing in the claim of solipsism, for example (namely, that my subjective experience is the only one that truly exists in the world, whereas everyone else is something like a figment of my imagination, lacking actual internal experiences completely, so that life is quite like a computer game in which everyone else is artificially computer–generated while I am the only actual player) would—first and foremost—be a philosophical mistake. However, we would be justified in opposing that mistake both because of the objective, abstract errors that it commits and because of the internal, emotional, and social consequences that would likely result from someone’s believing it: the two are, in other words, not necessarily separable—solipsism would have these consequences because of its mistakes, and those mistakes are important because of the consequences. Where arguments for the socially or psychologically detrimental consequences of physicalist accounts are made, they should not be mistaken for emotional appeals to consequences which simply argue that we must believe these accounts are false because we shouldn’t want them to be true; we have (so I will claim) all the demonstrable reasons for believing them false that we should need. But if accounts of the world and the self are factually impoverished, they will arguably lead to an impoverished relationship to the world, to the self, and to others in consequence, and we can oppose them for both reasons at the same time.

The point extends into our present discussion of free will.

Not only is it the case, as previously noted, that the majority of respondents from the United States to India to Colombia believe that “moral responsibility is not compatible with determinism”; it has also been recorded repeatedly that altering someone’s belief in free will impacts their moral behavior.

In 2008, Kathleen D. Vohs and Jonathan W. Schooler found that prompting participants with a passage from The Astonishing Hypothesis (in which the researcher Francis Crick writes, “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.”) made them significantly more likely to cheat on a math test.

In their first experiment, “cheating” involved failure to press the space bar on a keyboard at an appropriate time—so in order to rule out the possibility that disbelief in free will simply made participants more passive in general, they conducted a second experiment in which “cheating” would involve active behavior (namely, overpaying themselves for providing correct answers to a multiple choice test). Going even further, the second experiment also tested the impacts of increasing participants’ belief in free will. And once again, those whose belief in free will was strengthened cheated less, while those whose belief in free will was undermined cheated more.

In 2009, Roy F. Baumeister and colleagues expanded this line of research further. In a first experiment, participants were presented with hypothetical scenarios and asked how they felt about helping individuals described as being in need—and those who were prompted with disbelief in free will were significantly less likely to want to help. The second experiment offered participants a description of a fellow student whose parents had just been killed in a car accident, and then presented them with an actual opportunity to volunteer to help—and those who were prompted with disbelief in free will were still significantly less likely to volunteer, even when the situation became real.

Finally, participants in the third experiment were told they were helping the experimenter prepare a taste test to be consumed by an anonymous stranger, and were given a list of foods the stranger liked and disliked. This list explained that the stranger hated hot foods most of all—and participants, after being sorted into groups prompted with various beliefs about free will, were judged according to how much hot sauce they poured onto the stranger’s crackers. Participants who were told that free will doesn’t exist before the experiment gave the taste–tester twice as much hot sauce as those who read passages supporting the ideas of free choice and moral responsibility.

Jonathan Schooler, writing in Free Will and Consciousness: How Might They Work?, explains:

“One possibility is that reflecting on the notion that free will does not exist is a depressing activity, and that the results are simply the consequence of increased negative affect. However, both Vohs and Schooler and Baumeister et al. assessed mood and found no impact of the anti–free will statements on mood, and no relationship between mood and prosocial behavior. … Baumeister et al. argue that the absence of an impact of anti–free will sentiments on participants’ reported accountability and personal agency argues against a role of either of these constructs in mediating the relationship between endorsing anti–free will statements and prosocial behavior. … [But] just as priming achievement–oriented goals can influence participants’ tacit sense of achievement without them explicitly realizing it (Bargh, 2005), so too might discouraging a belief in free will tacitly minimize individuals’ sense of accountability or agency, without people explicitly realizing this change.”

And so, as an empirical matter of fact, this is what happens when you give people an ideological license to loosen their senses of accountability and agency: they find excuses to be assholes.

“ … We are always ready to take refuge in a belief in determinism if [our] freedom weighs upon us
or if we need an excuse.” — Jean–Paul Sartre

 _______ ~.::[༒]::.~ _______

A bizarre series of intellectual double standards underlies the equivalent attempt to defend the value of spreading belief in determinism. Determinists have long rested on the supposed immorality of retribution to stake their claim that spreading belief in determinism should help create a more “ethical” world. As the story goes, we only want to see someone who commits a moral offense suffer for the sake of suffering because we believe that they “freely chose” to act as they did. Suppose someone commits a public act of violent rape: if we assume that he was beyond all capacity to control his impulses, we’ll want to help him not do that again instead of punishing him. Thus, many liberals hope that spreading belief in determinism would help create a public consensus for shifting the motivations on which the criminal justice system is centered away from retribution and towards rehabilitation instead.

But why should that follow? If the violent criminal is without any deep moral form of guilt for his act because he has no deep moral responsibility for anything at all, then I too am without any deep moral form of guilt when I desire to see him violently punished for it—I hold no deep moral responsibilities for my actions or desires either, after all, so why shouldn’t I “excuse” myself for wanting to see him severely punished in just exactly the way that I “excuse” him for his act of rape? The determinist can give no reason—or at least not one that actually requires belief in metaphysical determinism.

In The Atheist’s Guide to Reality, Alex Rosenberg argues that “the denial of free will is bound to make the consistent thinker sympathetic to a left–wing, egalitarian agenda about the treatment of criminals and of billionaires.” But why should it do that? Rosenberg naively reasons that if we conclude that criminals do not deserve to suffer and that billionaires do not deserve to reap the benefits of wealth (since there is no such thing as “deserving” in the moral sense, there being no such thing as free will), then it follows that we will want to be nice to criminals and redistribute the wealth of billionaires.

What’s overlooked in this is that if there is no such thing as “deserving”, then criminals do not “deserve” to remain free in the society they’ve committed harms against any more than they “deserve” to be punished by it. It’s not as if the fact that they don’t “deserve” to be punished entails that they do “deserve” not to be: when we eliminate the entire concept of “deserving” by eliminating free will, we aren’t objecting to one isolated claim that someone in a particular circumstance deserves a particular thing; we’re eliminating all such claims. Likewise, if determinism is true, then billionaires may not “deserve” their wealth; but they also do not “deserve” to have their wealth taken away from them, and the general public does not “deserve” to have the wealth that billionaires have created given to them either. Only if free will does exist—and individuals hold more or less responsibility for some things, in differing degrees in different cases—can we reasonably talk about who “deserves” what at all.

Finally, Sam Harris makes the rather utopian claim that promoting belief in determinism should allow us to rid the world of hatred entirely. And in response to those who “say that if cutting through the illusion of free will undermines hatred, it must undermine love as well”, he responds:

“Seeing through the illusion of free will does not undercut the reality of love … loving other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of caring about them as people and enjoying their company. We want those we love to be happy, and we want to feel the way we feel in their presence.”

But hatred, he says, in contrast,

“is powerfully governed by the illusion that those we hate could (and should) behave differently. We don’t hate storms, avalanches, mosquitoes, or flu. We might use the term “hatred” to describe our aversion to the suffering these things cause us—but we are prone to hate other human beings in a very different sense. True hatred requires that we view our enemy as the ultimate author of his thoughts and actions. Love demands only that we care about our friends and find happiness in their company.”

Wait a second.

Couldn’t everything Harris just said to justify his claim about hatred apply to love, too?

In fact, we could reverse everything that Harris just said about both love and hatred, and his statements would seem exactly as “rational” as they did before. Consider how it would sound:

“Hating other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of not caring about them as people and not enjoying their company. We want those we hate to be unhappy if we can’t avoid their loathsome presence.

“But love? Love is powerfully governed by the feeling that those we love choose to be who they are. We don’t love ice cream, video games, mosquitoes, or getting over a flu. We might use the term “love” to describe our attraction to the pleasure these things cause us—but true personal love goes deeper in a very significant way. True love requires that we view those we love as the ultimate author of their thoughts and actions. Hatred demands only that we feel the fleeting desire to cause someone unhappiness.”

I think it is clear that the half of Harris’ argument that should be granted is that belief in free will is necessary in order to “truly hate.” However, just as Harris’ distinction between true hatred and hyperbolic ‘hatred’ holds, so does a distinction between true love and hyperbolic ‘love.’ And just as Harris’ determinism only allows room for hyperbolic ‘hatred’ but not the “real” kind, so it only allows room for hyperbolic ‘love’—where the sense in which I “love” my wife is no different in kind from the sense in which I “love” owning a new pair of pants or buying a new iPod. And as Dan Jones writes, the same necessarily goes for principles like forgiveness and gratitude:

“Harris believes that true hatred — the kind we direct towards evildoers, as opposed to mere dislike — implies an untenable view of human behaviour, in that it depends on an incoherent concept of free will. The same must go for forgiveness. It would be daft to talk of forgiving a mountain for an avalanche, but for Harris it must be equally daft to talk of true forgiveness among humans — for what is there to forgive in a deterministic system, whether a mountain or human?

The same goes for gratitude. You might be thankful that a mountain provided good slopes for skiing one day, but that’s not the true gratitude you show to your friend for teaching you how to ski in the first place. This true gratitude must too fall beneath Harris’s deterministic sword: what is there to thank in a deterministic system, mountain or human?”

 _______ ~.::[༒]::.~ _______

However, there is an even more fundamental issue left to discuss.

The physicalist’s claim that we should accept the social value of spreading belief in determinism is actually destroyed at an even deeper level by the fact that, if physicalism were true, it would be incoherent to say that our beliefs ever impact our behavior at all. The only paradigm that can even accommodate the notion that beliefs, as such, could possibly hold their own independent impact on our behavior is one that gives consciousness itself an independent causal role in behavior.

This is because, on physicalism, there are precisely three possible answers (or pseudo–answers) for explaining the relationship between my consciously held “belief” and whatever physical properties of my brain most closely correlate with changes in my consciously held “beliefs”: identity theory, epiphenomenalism, and eliminativism.

Eliminativism would say that there are, in fact, no such things as “beliefs” at all—there are only physical systems linked up in such a way that when this one part moves this way, it causes that part to move that way, in a sheer physical series of causative events. Recall the statement from Alex Rosenberg whose implications we explored in the entry on intentionality:

Suppose someone asks you, “What is the capital of France?” Into consciousness comes the thought that Paris is the capital of France. Consciousness tells you in no uncertain terms what the content of your thought is, what your thought is about. It’s about the statement that Paris is the capital of France. That’s the thought you are thinking. It just can’t be denied. You can’t be wrong about the content of your thought. You may be wrong about whether Paris is really the capital of France.

The French assembly could have moved the capital to Bordeaux this morning (they did it one morning in June 1940). You might even be wrong about whether you are thinking about Paris, confusing it momentarily with London. What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all. The brain can’t have thoughts about Paris, or about France, or about capitals, or about anything else for that matter. When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.

Don’t misunderstand, no one denies that the brain receives, stores, and transmits information. But it can’t do these things in anything remotely like the way introspection tells us it does—by having thoughts about things. The way the brain deals with information is totally different from the way introspection tells us it does. Seeing why and understanding how the brain does the work that consciousness gets so wrong is the key to answering all the rest of the questions that keep us awake at night worrying over the mind, the self, the soul, the person.

We believe that Paris is the capital of France. So, somewhere in our brain is stored the proposition, the statement, the sentence, idea, notion, thought, or whatever, that Paris is the capital of France. It has to be inscribed, represented, recorded, registered, somehow encoded in neural connections, right? Somewhere in my brain there have to be dozens or hundreds or thousands or millions of neurons wired together to store the thought that Paris is the capital of France. Let’s call this wired-up network of neurons inside my head the “Paris neurons,” since they are about Paris, among other things. They are also about France, about being a capital city, and about the fact that Paris is the capital of France. But for simplicity’s sake let’s just focus on the fact that the thought is about Paris.

Now, here is the question we’ll try to answer: What makes the Paris neurons a set of neurons that is about Paris; what make them refer to Paris, to denote, name, point to, pick out Paris? To make it really clear what question is being asked here, let’s lay it out with mind-numbing explicitness: I am thinking about Paris right now, and I am in Sydney, Australia. So there are some neurons located at latitude 33.87 degrees south and longitude 151.21 degrees east (Sydney’s coordinates), and they are about a city on the other side of the globe, located at latitude 48.50 degrees north and 2.20 degrees east (Paris’s coordinates).

Let’s put it even more plainly: Here in Sydney there is a chunk or a clump of organic matter—a bit of wet stuff, gray porridge, brain cells, neurons wired together inside my skull. And there is another much bigger chunk of stuff 10,533 miles, or 16,951 kilometers, away from the first chunk of matter. This second chunk of stuff includes the Eiffel Tower, the Arc de Triomphe, Notre Dame, the Louvre Museum, and all the streets, parks, buildings, sewers, and metros around them. The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris. How can the first clump—the Paris neurons in my brain—be about, denote, refer to, name, represent, or otherwise point to the second clump—the agglomeration of Paris? A more general version of this question is this: How can one clump of stuff anywhere in the universe be “about” some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

But whether Rosenberg can incorporate it into his theory or not, that our thoughts are “about” concepts and ideas is the one thing we can’t deny. If the notion that the world is nothing but “chunks of matter” is a notion that can’t account for the fact that this is so, then it is that notion, and not our belief that we have thoughts “about” things, that must go. (Again, I elaborate on this further in entry V).

The next approach a physicalist might attempt is identity theory. For us to be able to differentiate an identity theory about beliefs from an eliminativist perspective, this perspective would have to grant that our thoughts and mental states are “about” things—but that they are, nonetheless, also identical to certain chunks of matter.

The first problem with that style of approach is this: everything Rosenberg just said is true—he has correctly reasoned from his opening premises. If everything is just “chunks of matter”, then it is incoherent that one “chunk of matter” could be “about” some other “chunk of matter” in some other part of the universe. And as we also saw in the entry on intentionality, the project of “building” the intentionality of the conscious human mind out of any sort of proto–intentionality just fails; there’s no way, even in principle, to do it. You can’t cross that bridge by steps, any more than you can get from drawing on a two–dimensional canvas to creating a three–dimensional figure by any series of lines drawn on that canvas—and you don’t have to spend eternity testing every possible pattern of lines to figure this out; if you pay close enough attention, you should be able to see that it is impossible in principle. But it can help to draw out a few case studies of what some of the attempts have looked like in order to gain a closer intuitive grasp on where the bridge is that can’t be crossed—as, again, we saw in entry V.

The second problem, which is ultimately just the first approached from the opposite side of the same gap, is one we can see with a thought experiment originally presented by Laurence BonJour. As he wrote:

Suppose then that on a particular occasion I am thinking about a certain species of animal, say dogs — not some specific dog, just dogs in general (but I mean domestic dogs, specifically, not dogs in the generic sense that includes wolves and coyotes). The Martian scientist is present and has his usual complete knowledge of my neurophysiological state. Can he tell on that basis alone what I am thinking about? Can he tell that I am thinking about dogs rather than about cats or radishes or typewriters or free will or nothing at all? It is surely far from obvious how he might do this. My suggestion is that he cannot, that no knowledge of the complexities of my neurophysiological state will enable him to pick out that specific content in the logically tight way required, and hence that physicalism is once again clearly shown to be false.

[. . .]

Suppose then, as seems undeniable, that when I am thinking about dogs, my state of mind has a definite internal or intrinsic albeit somewhat indeterminate content, perhaps roughly the idea of a medium-sized hairy animal of a distinctive shape, behaving in characteristic ways. Is there any plausible way in which, contrary to my earlier suggestion, the Martian scientist might come to know this content on the basis of his neurophysiological knowledge of me? As with the earlier instance of the argument, we may set aside issues that are here irrelevant (though they may well have an independent significance of their own) by supposing that the Martian scientist has an independent grasp of a conception of dogs that is essentially the same as mine, so that he is able to formulate to himself, as one possibility among many, that I am thinking about dogs, thus conceived. We may also suppose that he has isolated the particular neurophysiological state that either is or is correlated with my thought about dogs. Is there any way that he can get further than this?

The problem is essentially the same as before. The Martian will know a lot of structural facts about the state in question, together with causal and structural facts about its relations to other such states. But it is clear that the various ingredients of my conception of dogs (such as the ideas of hairiness, of barking, and so on) will not be explicitly present in the neurophysiological account, and extremely implausible to think that they will be definable on the basis of neurophysiological concepts. Thus, it would seem, there is no way that the neurophysiological account can logically compel the conclusion that I am thinking about dogs to the exclusion of other alternatives.

[. . .]

Thus the idea that the Martian scientist would be able to determine the intrinsic or internal contents of my thought on the basis of the structural relations between my neurophysiological states is extremely implausible, and I can think of no other approach to this issue that does any better. The indicated conclusion, once again, is that the physical account leaves out a fundamental aspect of our mental lives, and hence that physicalism is false.

As Bill Vallicella summarizes the argument,

BonJour is thinking about dogs. He needn’t be thinking about any particular dog; he might just be thinking about getting a dog, which of course does not entail that there is some particular dog, Kramer say, that he is thinking about getting. Indeed, one can think about getting a dog that is distinct from every dog presently in existence! How? By thinking about having a dog breeder do his thing. If a woman tells her husband that she wants a baby, more likely than not, she is not telling him that she wants to kidnap or adopt some existing baby, but that she wants the two of them to engage in the sorts of conjugal activities that can be expected to cause a baby to exist.

BonJour’s thinking has intentional content. It exhibits that aboutness or of-ness that recent posts have been hammering away at. The question is whether the Martian scientist can determine what that content is by monitoring BonJour’s neural states during the period of time he is thinking about dogs. The content before BonJour’s mind has various subcontents: hairy critter, mammal, barking animal, man’s best friend . . . . But none of this content will be discernible to the neuroscientist on the basis of complete knowledge of the neural states, their relations to each other and to sensory input and behavioral output. Therefore, there is more to the mind than what can be known by even a completed neuroscience.

So whatever the relationship between ‘beliefs’ as I consciously experience them and the physical state of my brain might be—however close that relationship might be—it is just flatly incoherent to claim that the two things are “identical” (for even more on that, see here). We can see this whether we conceptually analyze what it means for something to be a belief and then reason backwards to see whether something with those attributes could be built out of something possessing only the kinds of attributes that blind physical forces do (this is how Rosenberg arrives at the, er, belief that beliefs do not exist), or whether we approach the divide from the opposite direction and imagine ourselves looking into the physical dimensions of the activity of the brain in the attempt to find an ‘idea’.

And that leaves just one final option remaining for the physicalist: epiphenomenalism. But epiphenomenalism about beliefs fails for exactly the same reasons that epiphenomenalism about qualia does: namely, that if it were true, we would necessarily be utterly incapable in principle of forming the concept of epiphenomenalism in the first place. Recall our earlier description of why epiphenomenalism about qualia fails:

One of the easiest ways to explain an epiphenomenalist relationship is by example. If you stand in front of a mirror and jump up and down, your reflection is an epiphenomenon of your actual body. What this means is that your body’s jump is what causes your reflection to appear to jump—your body’s jump is what causes your real body to fall—and your body’s fall is what causes your reflection to appear to fall. It may seem to be the case that your reflection’s apparent jump is what causes your reflection to apparently fall, but this is purely an illusion: your reflection doesn’t cause anything in this story; not even its own future states. If we represent physical states with capital letters, states of experience with lower–case letters, and causality with arrows, then a diagram would look something like this:

… P1 ⇒ P2 ⇒ P3 …
    ⇣     ⇣     ⇣
   e1    e2    e3

Thomas Huxley, not the first to espouse the view but the first to give it a name, described it by saying that consciousness is like the steam–whistle sound blowing off of a train, contributing nothing to the continued motion of the train itself. We shouldn’t fail to realize how extreme the dehumanization of this view still is, despite the fact that it acknowledges conscious experiences as real: if this is true, then nobody ever chooses a partner because they are experiencing love; nobody ever fights someone because they are experiencing anger; nobody ever even winces because they are experiencing pain. Rather, a blind, inert physical state moves by causal necessity from one state to the next; and it is the meaningless motion of these blind, inert forces by causal necessity that explains everything—conscious experiences just happen to squirt out incidentally over the top of these motions as a byproduct, and you are, in effect, a prisoner locked inside the movie in your head with your arms and legs removed and absolutely no influence or control whatsoever over what does or does not happen inside of it. In the words of Charles Bonnet, writing in 1755, “the soul is a mere spectator of the movements of its body.”

I would ask you to contemplate the severity of what might result if someone were to actually take this proposal seriously and really, honestly begin to look at life and their own conscious existence in this horrific and dehumanized way, but according to the claim of epiphenomenalism, believing that epiphenomenalism is true never has any causal effect on anyone’s physical behavior—nor on any of their future mental states—in the first place either. A series of blind, inert physical events leads to their brain responding physically to the input of symbols and lines (and it is only a mere epiphenomenon of this that they have any experience of “understanding their meaning”; any “ideas” contained therein, as such, would in principle have no ability to play any causal role in anything further whatsoever, whether in the individual’s future conscious beliefs or in their future physical behavior); and from here a purely physical sequence of physical causation leads to further physical states (which then happen to give off more epiphenomena in turn). On this view, the fact that pain even “feels painful” is a mere coincidence; for it is not because we feel pain and dislike it that we ever recoil away from a painful stimulus: one physical brain event produces another, and it is only a mere unexplained coincidence that what the first physical brain event happens to give off, like so much irrelevant steam, is a feeling that just so happens to be painful in particular.

It could literally just as well have been the case that slicing into our skin with a knife would produce the sensation that we currently know as “the taste of strawberries,” and the physical world (according to epiphenomenalism) would proceed in just exactly the same way as it does now. This would be true because: (1) epiphenomenalism admits that conscious experiences are something over and above physical events, and we do not know why particular conscious experiences are linked with particular physical events (since the former are not logically predictable from the latter, and epiphenomenalism by definition acknowledges that claims that they simply “emerge” fail); and (2) none of those experiences play any causal role in anything anyway. Our conscious lives could have consisted of one long feeling of orgasm, or one long miserable experience of pain, or one long sounding “C” note combined with the taste of blueberries and a feeling of slight melancholy, and again, everything in the physical universe would have proceeded in exactly the same way it does now. It is only a coincidence of whatever extra rule specifies which particular conscious experiences superfluously ‘squirt out’ and dissipate into the cosmic aether like steam that our world happens to be otherwise.

Unfortunately, while most people—including philosophers—are content to stop here and reject the view for sheer counter–intuitiveness alone, philosophy of mind has been somewhat lazy at producing actual logical objections to it. Actual refutations of epiphenomenalism aren’t very well known, but there is one that is absolute and undeniable, and that refutes once and for all even the possibility that anything like epiphenomenalism could be true. That is: if epiphenomenalism were true, no one would ever be able to write about it. In fact, no one would ever be able to write—nor think—about consciousness in general. No one, ever once in the history of the universe, would have had a single thought about a single one of the questions posed by philosophy of mind. Not a single philosophical position on the nature of consciousness, epiphenomenalist or otherwise, would ever have been defined, believed, or defended by anyone. No one would even be able to think about the fact that conscious experiences exist.

And the reason for that, in retrospect, is quite plain to see: on epiphenomenalism, our thoughts are produced by our physical brains. But our physical brains, in and of themselves, are just machines—our conscious experiences exist, in effect, within another realm, where they are blocked off from having any causal influence on anything whatsoever (including even the other mental states existing within their realm, because it is some physical state which determines every single one of those). But this means that our conscious experiences can never make any sort of causal contact with the brains which produce all our conscious thoughts in the first place. And thus, our brains would have absolutely no capacity to formulate any conception whatsoever of the existence of those experiences—and since all conscious thoughts are created by brains, we would never experience any conscious thoughts about consciousness. For another diagram: if we represent causality with arrows, causal closure with parentheses, physical events with the letter P, and experiences with the letter e, the world would look something like this:

… e1 ⇠ (((P⇆P))) ⇢ e2 …

Everything that happens within the physical world—illustrated by (((P⇆P)))—would be wholly contained within the physical world, where conscious experiences as such do not reside; the physical world is Thomas Huxley’s train, which moves whether the whistle on top blows steam or not. And e1 and e2 float off of the physical world—for whatever reason—and then merely dissipate into nothingness like steam, with no capacity in principle for making any causal inroads back into the physical dimension of reality whatsoever. This follows straightforwardly as an inescapable conclusion from the very premises by which epiphenomenalism defines itself. But the very brains which produce all our experienced thoughts are contained within (((P⇆P))); so in order for us to have any experienced thought about conscious experience itself, such thoughts would (per epiphenomenalism) have to be the epiphenomenal byproducts of a brain state that is somehow reflective or indicative of conscious experience. And brain states, again because per epiphenomenalism they belong to the self–contained world inside (((P⇆P))) where no experiences as such exist, are absolutely incapable in principle of doing this.

To refer back to our original analogy whereby epiphenomenalism was described by the illustration of a person jumping up and down in front of a mirror, then: it would be as if the mirror our brains were jumping up and down in front of were shielded inside of a black hole in a hidden dimension we couldn’t see. Our real bodies [by analogy, our physical brains] would never be able to see anything happening inside that mirror. And therefore, they would never be able to think about it or talk about it. And therefore, we would never see our reflections [by analogy, our consciously experienced minds] thinking or talking about the existence of reflections, because our reflections could only do that if our real bodies were doing that, and there would be absolutely no way in principle that our real bodies ever could.

The fact that we do this, then—the fact that we do think about consciousness as such, the fact that we write volumes and volumes and volumes and volumes philosophizing about it, and the very fact that we produce theories (including epiphenomenalism itself) about its relation to the physical world in the first place—proves that, whatever the mechanism may be, conscious experiences most certainly do in fact have causal influence over the world. What we have here is a rare example of a refutation that proceeds solely from the premises of the position itself, and demonstrates an internal inconsistency.

But Jaegwon Kim has already identified all the possible options for us! Either experiences and physical events are just literally identical (which even Kim himself rejects, for good reasons we have outlined here); or else epiphenomenalism is true (which Kim accepts, but which the simple argument outlined just now renders completely inadmissible); or else the postulate of the causal closure of the physical domain is false—and conscious experience is both irreducible to and incapable of being explained in terms of blind physical mechanisms, and possesses unique causal efficacy over reality in its own right.

What goes for the failure of epiphenomenalism about qualia goes just the same for epiphenomenalism about beliefs. It’s not just that epiphenomenalism would necessarily have to remove any causal role for beliefs as such from the picture; it’s that in any world that worked that way, it would be impossible in principle for any of its inhabitants to ever form the very belief that their consciously held beliefs are outside of the causal nexus of the physical world—because all of the causally potent material brain events that squirt out these causally impotent consciously experienced “beliefs” would be happening inside of the causal nexus that consciously held beliefs, per se, can never in principle causally interact with, being locked in principle outside of that nexus. Thus, we could never have any consciously experienced beliefs about our consciously experienced beliefs (or about their relationship to the rest of reality) at all. But the very concept of epiphenomenalism is exactly just such a belief—which proves that our beliefs do have causal impacts on reality.

But since the physicalist approach of denying that beliefs exist utterly fails, and since the physicalist approach of calling them “identical to” the blind causal dispositions of some assembly of neurons also fails, there is no option left which is (1) internally consistent, (2) able to account for all of the facts that any valid theory must account for, and (3) still “physicalist” in any meaningful sense. The only way the physicalist can give causal efficacy to our consciously experienced beliefs is to say that they literally just are a certain set of brain events. But, as physicalists themselves (like Rosenberg) acknowledge, this would mean we have to eliminate from the picture everything that makes our thoughts and experiences what they actually are. And that is why some physicalists end up desperate enough to turn to a theory as blind and idiotic as eliminativism: eliminativism is, in fact, the end conclusion of the physicalist premises.

But it is also blatantly absurd. And not absurd like “Hey, did you know the ground beneath you is actually spinning through space really fast even though it feels solid and motionless and stable?”

Absurd like “Hey, did you know that colorless green ideas sleep furiously? This is not a sentence. You are not reading this. In fact, nobody ever reads anything at all.”

Hence, the very fact that our beliefs about free will and determinism—no matter what they are—have the capacity to impact our behavior actually turns out to be an inescapable refutation of the very physicalism which underlies the claim that determinism is the only option because free will isn’t possible within a physicalist universe (as, indeed, it wouldn’t be, if physicalism were true). And that leaves us with all the weight of direct subjective experience itself in favor of human possession of free will on the one side, and nothing on the other.

 _______ ~.::[༒]::.~ _______

My concluding comments will require a little more allowance of liberty from the reader than usual, as I will turn now from making logical arguments to explaining something about my own personal view—and so the standard to which my reasoning should be held from here is no longer “can I prove it?” but “does this internally hold together?”

I have argued elsewhere on this blog for the relevance of biological factors in predicting human behavior (for example, near the end of this essay on the relationship between poverty, race, out–of–wedlock birth, and crime). Doesn’t that leave me with some explaining to do? How can there be both free will and proof of genetic influence?

Actually, my view is the only one that can account for the meaningfulness of an idea like the insanity defense. Why is it that “insanity” should reduce a person’s punishment for a crime? What possible rationale is there for that?

In his own attempt to defend this principle, Sam Harris writes:

What does it really mean to take responsibility for an action? For instance, yesterday I went to the market; as it turns out, I was fully clothed, did not steal anything, and did not buy anchovies. To say that I was responsible for my behavior is simply to say that what I did was sufficiently in keeping with my thoughts, intentions, beliefs, and desires to be considered an extension of them. If, on the other hand, I had found myself standing in the market naked, intent upon stealing as many tins of anchovies as I could carry, this behavior would be totally out of character; I would feel that I was not in my right mind, or that I was otherwise not responsible for my actions.

I think most people would say that Harris is just plain wrong about whether the mere fact that behavior is “out of character” means that we do, or even should, judge that a person is therefore “not responsible for (their) actions.” The first time anyone commits a violent act of rape or murder, for example, their behavior is by definition “out of character”. Yet, this fact alone most certainly does not cause us to morally excuse all first–time offenders—nor should it.

The implicit idea behind the insanity defense is that there are some conditions in which a person has less control over their impulses than others, and is therefore less morally culpable for their actions. But if determinism were true, then the insanity defense would make no sense, because none of us would ever have any “control” over any of our impulses. Thus, all of us would qualify in the relevant sense as “insane”, all of the time—and the concept would never add any particular new meaning to any particular case; nothing could ever make it any more true in some peculiar circumstance, because it would already be as true as it can ever be, for everyone, all of the time. Hence, only if free will does exist can we contemplate situations in which it could be overridden, or reduced by varying degrees. “My brain made me do it” cannot be an exculpatory claim for the determinist—but it can for the believer in free will (if and when other facts support it).

In any case, my own view of free will in the relationship between the mind and the brain—simplified—goes something like this:

• (A) The conscious mind has the metaphysical capacity to choose between, and to inhibit, brain–based impulses (but exercising this capacity requires expenditure of a certain kind of probably limited “energy”).

• (B) Most of the time, the conscious mind is “in the driver’s seat”—but there are probably some unique circumstances in which it actually can get thrown out of that seat, thus rendering the driver proportionally less morally responsible for where the car ends up going in such unique cases.

• (C) Our biology essentially determines the impulses which we experience, and then possess the capacity to choose between, in the first place.

• (D) Empirical science has revealed that genetics plays a substantial role, far larger than most environmental inputs, in hardwiring the biology which in turn determines those impulses.

• (E) As a contingent fact, it is true that people usually decide to act on their impulses. But those impulses do not absolutely determine their ensuing actions most of the time.

The picture we get is one where the conscious mind is highly analogous to the “driver” of a vehicle, yes—but the vehicle is more like a boat than a car, and the fact that someone is holding the wheel doesn’t mean he possesses the power to drive the boat absolutely anywhere, at any time, without external constraints. On the contrary, whether the driver or the waves of the ocean are more influential in determining where the boat will go at any given point in time depends on various weather conditions and other circumstances which, themselves, are outside of the driver’s absolute control.

But barring more severe kinds of circumstances, someone who drives the boat well could thereby navigate to a part of the ocean where the waves will exert relatively less influence, and his driving skills therefore relatively more influence, over where he goes next.

And it has been increasingly validated by empirical science that belief in free will can help us to drive better—to the point that implicitly prompting someone to disbelieve in free will is even known to slow their reaction times. On the assumption that determinism is true, how is the determinist supposed to explain this? The proponent of free will can explain it easily: reminding someone that they have free will can prompt them to use it more, in just the same way that someone who has given up trying to drive a boat they can’t seem to maintain control of can benefit from a motivational speech reminding them that they can still get out of the storm they’re in if they grab back onto the wheel and keep focusing their attention—because there is in fact a “driver” there who either may exercise that capacity, or may not.

And this is true even if, at other times, the implication that their driving was solely responsible for getting them into the storm in the first place can be further frustrating to them, when that implication happens to be false. But the problem in those cases is that it wasn’t the case—not that it couldn’t have been, or never is at all. Indeed, a neuroscientist who happens to be a dualist has had more success treating OCD than anyone so far operating under a materialist paradigm, through methods that ask patients to practice focusing their subjective mental attention as a means of ultimately rewiring the impulses which they experience—and while the materialist will of course simply hand–wave this away, because changes in subjective conscious attention are to him just “chunks of matter” being rearranged anyway, it remains the case that, were that so, it would be impossible in principle for consciously experienced events as such to have any sort of independent causal potency over physical brain events at all.

In sum: the scientific studies from Benjamin Libet and those who followed in his footsteps do nothing to refute the possibility of metaphysical free will. If the determinist wants to argue that determinism has any sort of social or psychological benefit, he’s going to have to deal with the problem that no version of physicalism seems able to account for the possibility that beliefs, as such, could have independent causal efficacy of their own over the physical states of our brains in the first place (without running into other, absolutely insurmountable problems that have been detailed elsewhere throughout this series). But it turns out that research is coming to establish that belief in free will has far more benefits than belief in determinism anyway—and the idea that we should tell people that free will is impossible, or false, while telling them that they should believe in it anyway is an obvious dead end. It may “only” be the evidence of direct subjective experience that stands in favor of the existence of free will—but nothing solid stands against it.

 _______ ~.::[༒]::.~ _______

[1] In the Harris excerpt I read, a mention of the Soon studies followed the break after this paragraph. He may have been referring to the studies of Haggard and Eimer in this part which preceded the break, but in any case, Soon’s is one of the most recent modern “replications” of this kind of finding.


Calling for a Nazi / Social Justice Warrior Alliance

Imagine a world where the following paragraph was true:

White people are just 2% of the population of South Africa.

And yet, a whopping 31% of South African media companies are owned by white people; 38% were founded by white people; 45% of their presidents are white; and 47% of their chairmen are white. 26% of all the reporters, editors, and executives of the major print and broadcast media are white. 75% of the senior administrators of the best South African colleges are white, and from 11 to 27 percent of students admitted to those colleges are white. 139 of the 400 richest people in South Africa are white. Of the top 100 political campaign funders, at least 42 are white. 15 out of 30 executives at the major think tanks that determine policy are white. To top it all, 8 of 11 senior advisers to President Zuma are white.

The corollary of these statements is that Blacks are around 98% of the population, and yet make up only 69% of media company owners, 62% of their founders, and 55% of their presidents … Only 25% of senior administrators at the best colleges are black; and only 3 of 11 Presidential advisers are black.

What would leftists’ response to this situation be?

The answer to that question is beyond doubt: they’d be outraged.

And it wouldn’t matter in the slightest that whites were a minority of the South African population—that would just make their domination of the country’s most important offices worse.

In the United States, we have a group calling itself the ‘Reflective Democracy Campaign’ which finds that white men are 31% of the population—but 66% of those who run for political office, and 65% of those elected. Once these figures are produced, no further investigation is required before leftists start asking why it is that “in the year 2015, there are roughly double the number of white men in elected office as there ought to be[?]”  Another campaign strives to draw awareness to the fact that white men make up 79% of elected prosecutors.

Or to give another example, when Spike Lee thought black winners at the Oscars were underrepresented compared to white winners, he called for a boycott. It turns out he was wrong: a USC study found that blacks, who are about 13% of the U.S. population, comprise 12.5% of actors in the top 100 films from 2007; 23 of 192 Oscar nominations (12%); and 9 out of 68 Academy Awards since 2000 (13.2%)—close to perfect statistical representation. But the mere idea that whites might be overrepresented at the Oscars compared to blacks was all it took to set off a loud and persistent conversation, with many people instantly prepared to believe that whites are overrepresented and that this is a problem in need of urgent address.

So in the case of the Oscars, the over–representation of whites compared to blacks was exactly zero. And in the case of the Reflective Democracy Campaign’s argument, whites are overrepresented amongst political candidates at just 1.4 times their population rate (whites are 63% of the population, and a combined 89% of Republican and Democratic candidates), and amongst elected prosecutors at 1.25 times their population rate.

So we can absolutely rest assured that if our opening paragraphs were true, liberals would be outraged to find whites overrepresented at 5–36 times their share of the population, rather than at a mere 1.25 or 1.4.
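For concreteness, the “representation ratio” arithmetic used throughout this section is nothing more than a group’s share of some set of positions divided by its share of the population. A minimal sketch, using only the figures already quoted above:

```python
# Representation ratio: share of positions held, divided by share of population.
def representation_ratio(share_of_positions: float, share_of_population: float) -> float:
    return share_of_positions / share_of_population

# Reflective Democracy Campaign figures (whites as 63% of the U.S. population):
print(representation_ratio(0.89, 0.63))    # ~1.41 -- major-party candidates
print(representation_ratio(0.79, 0.63))    # ~1.25 -- elected prosecutors

# The hypothetical South Africa paragraph (whites as 2% of the population):
print(representation_ratio(0.11, 0.02))    # 5.5  -- the low end (college admissions)
print(representation_ratio(8 / 11, 0.02))  # ~36  -- the high end (presidential advisers)
```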

So what makes liberals different from white supremacists—besides their target?

Everything stated in the opening paragraph of this post is, in fact, true about Jews.

Jews are just 2% of the United States population. And yet, they make up 18 out of 24 senior administrators of Ivy League colleges (75%), 8 out of 11 senior advisors to President Obama (72%), 8 out of 20 Senate Committee chairmen (40%), 33 out of 51 senior executives of the major Wall Street banks, trade exchanges, and regulatory agencies (64%), 23 out of 40 senior executives of the major Wall Street mutual funds, private equity funds, hedge funds, and brokerages (57%), 41 out of 65 senior executives of the major newspapers and news magazines (63%), 43 out of 67 senior executives of the major television and radio news networks (64%), 15 out of 30 senior executives of the major think tanks (50%).[1]

New students admitted to Harvard University? 25% Jewish. Yale? 27% Jewish. Cornell? 23% Jewish.

And when Jewish organizations reflect on Jewish representation in Ivy League colleges, they do so not to worry about whether Jews are pushing non–Jews out through their own overrepresentation, but to analyze the puzzle that “Thirteen percent of Princeton’s undergraduate student body is Jewish, the lowest percentage of any Ivy League university besides Dartmouth, which comes in at 11 percent.” Yet both of these figures are still more than five times the Jewish percentage of the population.

The media? If we’re looking at the CEOs of media companies, then they’re 31% of the total. If we’re looking at founders, then they’re 38%. If we’re looking at presidents, then they’re 45%. If we’re looking at chairmen, then they’re 47%. If we’re talking about the directors and writers, then Jews represent “26 percent of the reporters, editors, and executives of the major print and broadcast media, 59 percent of the directors, writers, and producers of the 50 top-grossing motion pictures from 1965 to 1982, and 58 percent of directors, writers, and producers in two or more primetime television series”.

These numbers range from over 12 to over 22 times the Jewish percentage of the population.

Banking? Of the five Federal Reserve board governors (Daniel K. Tarullo, Jerome H. Powell, Lael Brainard*, Stanley Fischer*, Janet L. Yellen*), three are Jewish. Of the nine executive officers of Goldman Sachs (Edith W. Cooper, Gregory K. Palm, John F. W. Rogers, Alan M. Cohen*, Harvey M. Schwartz*, Mark Schwartz*, Gary D. Cohn*, Lloyd C. Blankfein*, Michael S. Sherwood*), six are Jewish. Of the ten operating committee members of JP Morgan Chase (John L. Donnelly, Gordon A. Smith, Jamie Dimon, Mary Callahan Erdoes, Matthew E. Zames*, Daniel E. Pinto*, Douglas B. Petno*, Marianne Lake*, Stacey Friedman*, Ashley Bacon*), six are Jewish. (Asterisks mark the individuals counted as Jewish.) Combining just these three major banks, 62% are Jewish—almost 30 times the Jewish population rate.
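A quick check of that combined figure, a sketch using nothing but the counts just quoted:

```python
# Combined Jewish share across the three lists above, per the text's own counts.
jewish = 3 + 6 + 6   # Fed governors, Goldman Sachs executives, JPMorgan operating committee
total = 5 + 9 + 10
share = jewish / total
print(share)         # 0.625 -> the "62%" quoted above
print(share / 0.02)  # ~31 against a 2% base (the text's "almost 30 times")
```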

“ … the Jews run everything? Well, we do. The Jews run all the banks? Well, we do. The Jews run the media? Well, we do … It’s a fact; this is not in debate. It’s a statistical fact … Jews run most of the banks; Jews completely dominate the media; Jews are vastly disproportionately represented in all of these professions. That’s just a fact. It’s not anti-Semitic to point out statistics … It’s not anti-Semitic to point out that these things are true.” — Milo Yiannopoulos, The Rubin Report, March 2016

So how can leftists, who immediately take any statistical over–representation of whites in anything at all as a major social problem that needs to be changed—even at just 1.1 or 1.4 times the white population rate—condemn white supremacists for being worried about statistical over–representations several times larger than that? Indeed, how are the racialist left and white supremacists anything but two different sides of the same coin?

Amusingly enough, a large percentage of my audience will probably suspect me of having gone full Nazi simply because I went to the effort of pinpointing exactly how overrepresented Jews are. Now, that suspicion may be fair—but if so, why isn’t going to the same effort to pinpoint how overrepresented whites are in various fields or professions seen as bigotry in exactly the same way?

As a matter of fact, the ‘Reflective Democracy Campaign’ itself has apparently failed to notice that it is not “whites” who are overrepresented within the legal profession—it’s Jews, who in fact make up 26% of the nation’s law professors, and 30% of Supreme Court law clerks. In Jews and the New American Scene, Seymour Lipset and Earl Raab point out that Jews make up “40 percent of partners in the leading law firms in New York and Washington.” So Jews are overrepresented in the legal profession at 13 or more times their population rate.

And if you subtract the 26% of lawyers who are Jewish from the 79% of prosecutors the RDC calls “white”, that leaves only 53% of prosecutors who are non–Jewish whites, compared to about 61% of the U.S. population that is non–Jewish white. So it turns out that ‘whites’ are not overrepresented at all—they’re under–represented at about 0.86 times the population rate. But what would happen to the RDC’s left–wing credentials if it were to openly admit this and call explicitly for a reduction of the Jewish percentage of elected prosecutors?

Indeed, what would happen to their public image in general once this was known?

Suddenly, they’d go from being a respectable campaign calling attention to a real social issue to being classed with Nazis and white supremacists—the lowest of the low—just because the demographic their numbers targeted happened to turn out to be Jews instead of whites. But why is it that this kind of campaign is valid just so long as it targets whites, and racist bigotry the moment it hits any other demographic?
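As a quick sanity check on the prosecutor arithmetic above, here is a minimal sketch in Python (the percentages are simply the ones quoted earlier; the helper function and its name are my own, not anything from the RDC report):

```python
# Representation ratio: a group's share of a field divided by its share
# of the population (1.0 means exactly proportional representation).
def representation_ratio(share_in_field, share_in_population):
    return share_in_field / share_in_population

# Figures quoted above: 79% "white" prosecutors minus the 26% Jewish share,
# compared against the ~61% of the U.S. population that is non-Jewish white.
non_jewish_white_prosecutors = 79 - 26  # = 53
ratio = representation_ratio(non_jewish_white_prosecutors, 61)
print(round(ratio, 2))  # ~0.87, i.e. under-represented (the post's ~0.86)
```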

Why are Jews statistically overrepresented? There are essentially two possibilities:

  1. Jews could be acquiring positions of power and then using them to grant favors to other Jews—say, Jews could take over the senior administrative positions in Ivy League colleges (where they indeed compose about 75% of the total), and then they could favor admitting Jews as new students over others.
  2. Perhaps Jews are simply more intelligent, or industrious, or intellectual, or otherwise have temperaments more conducive to these arenas—and so they acquire their status in these positions through legitimate success.

The first of these options is the white supremacist answer: Jews aren’t any more intelligent than the rest of us; they’re just more nepotistic, networking with other Jews to take over the world. In order to avoid sounding like bigots, then, we’re supposed to give the second answer: Jews are simply more intelligent or more industrious or more intellectual, or simply have temperaments more conducive to these arenas.

But if we’re talking about whites instead of Jews, then suddenly the first option is exactly what social justice warriors demand that we say: ‘whites aren’t any more intelligent than anyone else; they’re just more nepotistic’! Meanwhile, the second option is suddenly the one that is now inexcusably, irredeemably racist: if you claim that whites are simply more intelligent or more industrious or more intellectual, you’re a bigot.

What the ‘politically correct’ view requires us to say about Jews is exactly what it calls bigotry if we say it about whites. And what it requires us to say about whites is exactly what it calls bigotry if we say it about Jews. The disproportionate success of whites is purely the result of unjust ‘privilege’, and you’re a bigot if you think it has anything to do with greater merit. But the disproportionate success of Jews is the result of greater merit, and you’re a bigot if you try to diminish that by attributing it to ‘privilege’, much less want it to change!

The egregiousness of the naked double standard here is overwhelming. As far as resolving it, it would seem we have exactly two possible options: either we grant the argument in both cases, and encourage the social justice warriors and white supremacists to join forces against their new common foe—or else we deny it in both cases.

So which is it?

The ‘Poverty’ of Sociology

It’s obvious that there is, in general, a geographical correlation between poverty and crime. What I mean by that is that if we look at a map of the United States (or the world—but this post will focus on the United States) at any given point in time, in places where we see lots of poverty, we will also see lots of crime. 

This much is beyond serious question.

What is under–appreciated, however, is just how complicated it is to actually explain why. The obviousness of the geographical correlation between poverty and crime has led many to assume that it must be just as obvious that poverty “causes” crime. On the other hand, many social conservatives have argued that poverty and crime correlate with each other only because divorce and out–of–wedlock birth produce them both: according to this argument, single–parent families produce poverty because they earn less income than two–parent families do; and they produce crime because boys raised by single mothers have no models of masculinity to learn from and emulate, and therefore become more likely to attempt to express their masculinity through violence and affiliation with gangs.

Disentangling cause and effect in these relationships is more difficult than the proponents of either theory often assume—for even correlations that seem obvious at first glance can turn out to have causes that no one even considered. As we will see, both the “poverty causes crime” advocates and the “single parenthood causes both poverty and crime” advocates are, for the most part, wrong (though each is also about 1% correct).

A cautionary tale

By the mid–1990’s, hormone replacement therapy had become one of the most widely prescribed medications for women in North America. Books were published touting the benefits of synthetic hormone injections, with titles like “Feminine Forever!” Several large studies (Stampfer 1991) found that even after controlling for other risk factors like age, “estrogen use is associated with a reduction in the incidence of coronary heart disease as well as in mortality from cardiovascular disease”. Another meta–analysis (Grady 1992) found a 35% reduction in heart disease amongst those using synthetic estrogen and concluded that “hormone therapy should probably be recommended for women … with coronary heart disease or at high risk for coronary heart disease.”

Yet, by the late 1990’s and early 2000’s, this consensus had fallen apart completely. Not only did it turn out not to be the case that hormone replacement therapy was beneficial for women with or at risk of heart disease (Rossouw et al. 2002), in many cases it actually turned out to increase the risk of heart disease (Hulley et al. 1998).

What happened?

Was the earlier research falsified? No.

The correlation between use of estrogen and lower heart disease risk found by earlier research did, in fact, exist.

It just didn’t get there because the use of estrogen causes a reduction in heart disease risk. It simply turned out that, on average, the women who were trying hormone replacement therapy were women of higher socioeconomic status, who also tended to keep healthier diet, lifestyle, and exercise habits. In fact, the use of estrogen had been increasing the risk of heart disease all along, even though it remained true that women trying hormone replacement therapy had lower heart disease rates on average than those who weren’t.

What we have here is an excellent example of a “hidden variable” explanation for a correlation. The original assumption behind the correlation between hormone replacement therapy (HRT) and lowered heart disease risk (–CHD risk) was that HRT caused the –CHD risk. And this false assumption likely contributed to some uncertain number of unnecessary deaths. The real answer turned out to be that some other, previously unidentified factor (socioeconomic status increasing the likelihood of both continued use of HRT and better lifestyle habits) was causing both.
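To make the pattern concrete, here is a minimal toy simulation of exactly this kind of confounding (all effect sizes are invented for illustration; “ses” stands in for the socioeconomic factor described above):

```python
# A confounder (ses) raises both the chance of being on the treatment and
# the chance of a good outcome, so a mildly harmful treatment still looks
# "protective" in a raw group comparison.
import random

random.seed(0)
treated_risk, untreated_risk = [], []

for _ in range(100_000):
    ses = random.random()                # confounder, uniform in [0, 1)
    treated = random.random() < ses      # higher ses -> more likely treated
    base_risk = 0.30 - 0.20 * ses        # higher ses -> lower baseline risk
    risk = base_risk + (0.02 if treated else 0.0)  # treatment slightly harmful
    (treated_risk if treated else untreated_risk).append(risk)

print(f"mean risk, treated:   {sum(treated_risk) / len(treated_risk):.3f}")
print(f"mean risk, untreated: {sum(untreated_risk) / len(untreated_risk):.3f}")
# The treated group shows LOWER average risk despite the +0.02 harm,
# because ses confounds the comparison.
```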


To use a more commonplace example of a faulty inference of causation from correlation, it is obviously true that we find fire burning only in the presence of oxygen. Wherever we see fire, then, we are bound to find oxygen. But this doesn’t make oxygen “the cause” of fire—indeed, since there are so many places where we can find oxygen but no fire, it is obvious that something else must be “the cause”.

Similarly, we are bound to find more human trafficking in places where there are more women who are vulnerable to being captured and exploited. But if anyone were to suggest that contact with a vulnerable woman is the literal “cause” of a man’s decision to kidnap and traffic her into sex slavery, the very same liberals who ask us to excuse crime while addressing its “root causes” would condemn this as “victim blaming” of the most horrendous and disgusting form. Yet, this seems like an arbitrary attempt to have one’s cake and eat it too—for the sake of consistency, we must either accept that human beings have (at least some degree of) free will, or else we must deny that and grant that all human behavior is the deterministic result of external circumstances across the board.

When analyzing these kinds of questions, we shouldn’t lose sight of just how banal much violent crime is.

Most aggressive crime just doesn’t look anything like a struggling family pocketing a loaf of bread after spending the rest of the grocery budget to feed their children. It looks like 39–year–old Ronald McNeil murdering a 19–year–old female college freshman because of a fight with another party attendee over the rules of beer pong. It looks like public gang rapes outside of rap concerts. It looks like randomly setting on fire a 19–year–old girl on her way to get dinner. But let’s not spend too much time on anecdotes before we move on to data.

Crime and poverty: does one cause the other?

First of all, it is absolutely, undeniably true that crime does help to cause poverty.

“A high crime rate will drive businesses out of a neighborhood. This eliminates both availability of products and services and a source of jobs. Further, those who do stay find it necessary to charge higher prices to offset losses due to thievery and higher costs of both security measures and insurance premiums—if insurance is available at all.

Property values are driven down by a smaller demand because of the greater difficulty potential purchasers have in obtaining mortgage loans.

The loss of productive activity by those who live by preying on others reduces the output of the area in which they live. Thus, crime injures economically both direct victims and others in the crime-ridden neighborhood.”

A more recent study calculated only the direct losses to victims; the money spent on police, prisons, and lawyers; and the opportunity costs for the perpetrator himself. It found that the average cost of each act of robbery is around $42,000; of each act of assault, more than $100,000; and of each act of murder, almost $9,000,000.

These estimates do not include the damage done to a community’s economy through crime’s impact on third parties other than the perpetrator and his victim (the flight of businesses and thus opportunities away from high–crime areas, the raised price of insurance, the loss of property values, and so forth), and so they undoubtedly underestimate the true amount of damage caused by crime.

What about poverty causing crime? It is true that poverty and crime correlate geographically: in locations where we find more poverty, we are also going to find more crime. But it turns out that poverty and crime do not correlate very well historically: when poverty rises, we do not see concurrent rises in crime.

Both before and during the Great Depression, the relationship between poverty and crime actually appears to have been inverted: “Most evidence suggests that the crime rate rose after World War I and the 1920s and that crime rates dropped as the nation sank into the Depression and continued to decline into the 1940s.” Eli Lehrer adds extra detail: “Crime rates fell about one third between 1934 and 1938 while the nation was struggling to emerge from the Great Depression and weathering another severe economic downturn in 1937 and 1938. Surely, if the economic theory held, crime should have been soaring.”

And as he continues, he explains that this same inverted relationship was also found during several other recessions over the past century, as well: “Crime rates rose every year between 1955 and 1972, even as the U.S. economy surged, with only a brief, mild recession in the early 1960s. By the time criminals took a breather in the early 1970s, crime rates had increased over 140 percent. Murder rates had risen about 70 percent, rapes more than doubled, and auto theft nearly tripled. … Crime rates fell in nearly all categories between 1982 and 1984, even though … wages fell for low-income workers during the same period. Likewise … wages rose for low-income workers between 1988 and 1990, despite being a period of higher crime rates. In fact, some of the worst years for crime increases were in the late 1950s, as hourly wages surged ahead. Between 1957 and 1958, for example, per–capita income increased about 8 percent while crime rose nearly 15 percent.”

Patrick F. Fagan adds: “What is true of the general population is also true of black Americans. For example, between 1950 and 1974 black income in Philadelphia almost doubled, and homicides more than doubled.” Similarly, poverty rates between different ethnic groups fail to explain their different crime rates today: in the 2006 American Community Survey, 21.5% of Hispanics lived in poor households, and 37.2% of Hispanic men age 18–24 had not completed high school in 2005—compared with 25.3% of blacks in poor households and 26.3% of black men without a high school diploma. In other words, 3.8 percentage points fewer Hispanics lived in poor households, but 10.9 points more Hispanic men failed to graduate high school. If poverty were causing violent crime, then we would expect the violent crime rate to be similar amongst blacks and Hispanics. But that isn’t what we find; instead, the Hispanic crime rate is only slightly higher than the white crime rate, both of which are far lower than the black crime rate—even after controlling for age groups to account for the different proportions of young adult males (who commit the vast majority of crime) in each ethnic group.


(Chart from Random Critical Analysis, whose Nov. 2015 post “Racial differences in homicide rates are poorly explained by economics” provides far more detailed charts and graphs on this topic.)


And what holds historically about the association between poverty and crime continues to hold into the present day, with the discovery that the “Great Recession” of 2007–2009 came with a reduction in crime, too.

Writing in the Wall Street Journal, James Q. Wilson explained: “As the national unemployment rate doubled from around 5% to nearly 10%, the property-crime rate, far from spiking, fell significantly. For 2009, the Federal Bureau of Investigation reported an 8% drop in the nationwide robbery rate and a 17% reduction in the auto-theft rate from the previous year. Big-city reports show the same thing. Between 2008 and 2010, New York City experienced a 4% decline in the robbery rate and a 10% fall in the burglary rate. Boston, Chicago and Los Angeles witnessed similar declines. … In 2008, … even as crime was falling, only about half of men aged 16 to 24 (who are disproportionately likely to commit crimes) were in the labor force, down from over two-thirds in 1988, and a comparable decline took place among African-American men (who are also disproportionately likely to commit crimes).”

Heather MacDonald supplies additional data: “[B]y the end of 2009, the purported association between economic hardship and crime was in shambles. According to the FBI’s Uniform Crime Reports, homicide dropped 10% nationwide in the first six months of 2009; violent crime dropped 4.4% and property crime dropped 6.1%. Car thefts are down nearly 19%. The crime plunge is sharpest in many areas that have been hit the hardest by the housing collapse. Unemployment in California is 12.3%, but homicides in Los Angeles County, the Los Angeles Times reported recently, dropped 25% over the course of 2009. Car thefts there are down nearly 20%.”

Okay, so what if all of these hard statistical measures of the economy are too crude to capture what really matters for someone’s likelihood to commit a crime—how they perceive the economy as doing, regardless of the facts? Well, that brings us back to James Q. Wilson: “the University of Michigan’s Consumer Sentiment Index offers another way to assess the link between the economy and crime. This measure rests on thousands of interviews asking people how their financial situations have changed over the last year, how they think the economy will do during the next year, and about their plans for buying durable goods. The index measures the way people feel, rather than the objective conditions they face. It has proved to be a very good predictor of stock-market behavior and, for a while, of the crime rate, which tended to climb when people lost confidence. When the index collapsed in 2009 and 2010, the stock market predictably went down with it—but this time, the crime rate went down, too.”

Steven D. Levitt’s Understanding Why Crime Fell in the 1990s: Four Factors that Explain the Decline and Six that Do Not summarizes the research: “Empirical estimates of the impact of macroeconomic variables on crime have been generally consistent across studies: Freeman (1995) surveys earlier research, and more recent studies include Machin and Meghir (2000), Gould, Weinberg and Mustard (1997), Donohue and Levitt (2001) and Raphael and Winter-Ebmer (2001). Controlling for other factors, almost all of these studies report a statistically significant but substantively small relationship between unemployment rates and property crime. A typical estimate would be that a one percentage point increase in the unemployment rate is associated with a one percent increase in property crime.”

He concludes: “Based on these estimates, the observed 2 percentage point decline in the U.S. unemployment rate between 1991 and 2001 can explain an estimated 2 percent decline in property crime (out of an observed drop of almost 30 percent)….” But yet again, even here, the direction of causation isn’t clear. Levitt misspeaks when he says this evidence warrants the conclusion that the decline in the unemployment rate “can explain” the decline in property crime: the word “explain” invokes causation, and what this data shows us still isn’t causation.

How do we know it’s the unemployment rate that “explains” the decline in property crime? How do we know it isn’t the decline in property crime that explains the decline in the unemployment rate? If someone decides not to commit a home robbery, he obviously has a much better chance of finding a job in the near future than if he does. And in all likelihood, a business in a town with fewer property crimes is making more sales and is therefore able to employ more people; more people are considering starting businesses; and more established businesses are considering moving in. At the very least, this effect must contribute to the correlation; and that means that a one–percentage–point decline in the unemployment rate must cause somewhat less than a one percent decline in the property crime rate.
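To put the decomposition in plain numbers (a back–of–the–envelope sketch using only the figures quoted from Levitt above):

```python
# Levitt's figures, as quoted above: ~1% change in property crime per
# percentage point of unemployment, a 2-point unemployment decline over
# 1991-2001, and an observed ~30% drop in property crime.
elasticity = 1.0             # % change in property crime per pp of unemployment
unemployment_drop_pp = 2.0   # percentage-point decline, 1991-2001
observed_crime_drop = 30.0   # % decline in property crime over the same period

explained = elasticity * unemployment_drop_pp
print(f"unemployment accounts for ~{explained:.0f}% of a "
      f"~{observed_crime_drop:.0f}% drop, i.e. "
      f"{explained / observed_crime_drop:.0%} of the decline")
```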

So even with property crime, fluctuations in the economy don’t explain much at all. But as Levitt continues the summary, he explains: “Violent crime does not vary systematically with the unemployment rate.” What if the unemployment rate isn’t the right measurement of the economy? “Studies that have used other measures of macroeconomic performance like wages of low-income workers come to similar conclusions (Machin and Meghir, 2000; Gould, Weinberg and Mustard, 1997).” Now, more astute readers may wonder why, if the hypothesis in the last paragraph about property crimes causing unemployment were plausible, violent crimes wouldn’t have the same effect. A possible answer is that generally speaking, far more of the kinds of people who would contemplate committing property crimes are potentially employable to begin with, whereas comparatively far more of the kinds of people who would contemplate committing violent rapes or murders already exhibit a demeanor or engage in other behaviors that make them less employable anyway.

Yet another point that demolishes the left–wing narrative: white–collar crime.

Isn’t it the left telling us that it’s the rich who are causing all of the real problems in the world in the first place? Aren’t they the ones telling us that it’s the rich white men running the world who are destroying the environment, lying to the public, committing embezzlement and collusion and fraud, fighting for policies that hurt the poor, and invading foreign countries to kill thousands of innocent people for no reason other than selfish gain?

Doesn’t that, in and of itself, contradict the notion that poverty “causes” crime?

Doesn’t that, in and of itself, prove that even liberals don’t actually believe that raising everyone’s economic welfare is all it takes to put an end to anti–social behavior and make people be nice to each other?

White–collar crime is interesting because of the way that it exposes the contradictory hole in the center of this set of beliefs, but it is also interesting for another reason: it shows, once again, that whatever makes different demographics commit crimes at different rates, poverty isn’t a good explanation—because the disparities in crime that exist on the street actually turn out to exist in corporate offices as well.

Obviously, white people do commit the majority of white–collar crimes, and the harm that can come from these acts shouldn’t be understated. The savings and loan scandal of the 1980s was almost exclusively committed by white people, and cost U.S. taxpayers over $470 billion—more than all the conventional bank robberies in U.S. history combined. White people have “disproportionately” achieved positions of economic power and influence, and the damage that people can do in these positions substantially outweighs what any number of street criminals are capable of. However, what the data reveal is that white people in these positions are nonetheless proportionally underrepresented amongst white–collar criminals—in other words, whites make up a larger percentage of those in corporate positions than their share of the general population, but they commit a smaller percentage of white–collar crimes than their share of that “corporate population” would predict. While ~99% of anti–trust and securities fraud offenses are committed by whites, because they’re effectively the only ones in positions to commit them, non–whites are nonetheless found to be overrepresented in all the other corporate crimes they are in positions to commit.

These findings led Hirschi and Gottfredson to conclude in The Causes of White–Collar Crime that “When opportunity is taken into account, demographic differences in white collar crime are the same as demographic differences in ordinary crime.” But they weren’t, of course, referring solely to race: men are disproportionately likely to commit white–collar crimes relative to women, even once opportunity is taken into account, as well. In fact, men were found to be even more disproportionately overrepresented in white–collar crime than they are in street crime. Likewise, the commission of white–collar crimes peaks around age 20 and falls by half around age 40—and once again, this exactly fits the pattern of all other crimes. Whatever it is that causes men to commit more crimes than women, the young to commit more crimes than the old, and some ethnic groups to commit more crimes than others, it doesn’t look like poverty can be the explanation.

In 2014 came the final nail in the coffin for the “poverty causes crime” thesis: a Swedish study conducted by Amir Sariaslan, which—for the first time—directly tested whether growing up in poverty contributes to crime, or whether other factors about the kinds of families that tend to end up poor also cause them to breed crime. What made Sariaslan’s study uniquely insightful was the decision to take families which rose out of poverty, and compare the lives of children born and raised within those families before their rise from poverty with the lives of children born and raised within those same families after their rise from poverty.

The conclusion his research came to? “There were no associations between childhood family income and subsequent violent criminality and substance misuse once we had adjusted for unobserved familial risk factors.” Sariaslan’s study, in other words, had proven that growing up in poverty is not what creates one’s adult likelihood of committing violent crimes. Children who grow up in previously–poor families have exactly the same likelihood of committing crimes as children who actually grow up poor. The only conclusion we can soundly come to is that something else about poor families other than poverty itself must explain why their children go on to commit crimes.

Many conservatives think the root of social dysfunction is a lack of monogamy. 

Criminologist Anthony Walsh writes in Race and Crime: A Biosocial Analysis, for example:

“If racism were the culprit behind the difference in poverty rates, we would expect black families, regardless of their household composition, to be worse off than white families, regardless of their household composition. But this is not what we observe. The U.S. Census Bureau’s (McKinnon & Humes, 2000) breakdown of family types by race and income showed that non-Hispanic white single-parent households were more than twice as likely as black two-parent households to have an annual income of less than $25,000 (46% versus 20.8%). To state it in reverse, a black two-parent family is less than half as likely to be poor as a white single parent family. These figures constitute powerful evidence against the thesis that black poverty is the result of white racism, as well as powerful evidence that high rates of single-parenting is a major cause of family poverty for all racial/ethnic groups. The prevalence of single-parent families is so high in the black community that: “[A] majority of black children are now virtually assured of growing up in poverty, in large part because of their family status” (Ellwood & Crane, 1990:81).”

However, a study by Sara McLanahan found that “The dropout risk is 37 percent for those with never-married mothers and 31 percent for those with divorced parents, in contrast with the 13 percent risk of those from families with no disruption. Significantly, the risk for children who lost a parent to death is 15 percent—virtually the same as that for children from intact homes. Clearly, children of a widowed mother enjoy economic and other advantages over their peers from households headed by divorced or never-married parents.”

Emphasis mine. What are these “other” advantages?

The only truly plausible candidate for an answer is genes.

Commenting on these findings, Razib Khan (graduate student in genomics at UC Davis) writes:

“The null hypothesis which the media and the public intellectual complex sell us is that destabilized households lead to late life destabilization in individuals. What this misses is that destabilized individuals lead to destabilized households, and destabilized individuals also produce other destabilized individuals. In other words, one reason that kids whose parents didn’t stay together are messed up is because they have the same crappy dispositions as their parents. They share genes with their parents.

This isn’t to deny that all things equal being in an intact nuclear family is preferable to being raised by a single parent. Ask anyone who grew up in a situation where they lost one of their parents to cancer or some such thing. But naive assumptions that simply increasing the marriage rate will reverse social dysfunction are going to be dashed against the reality that putting together explosive impulsive people under the same roof is not going to turn them into Ward and June Cleaver.”

If behavioral genetics or the idea of heritability is new to you, one of the best introductions to the basics can be found in Brian Boutwell’s article at Quillette, “Why parenting may not matter and why most social science research is probably wrong”; as well as the follow–up, “Heritability, and Why Parents (But Not Parenting) Matter”. The twin studies, adoption studies, and family studies that these conclusions are based on have been challenged for years, and they have stood up to all of these challenges remarkably well. One problem with any attempt to critique their validity is the odd fact that they all tend to converge on the same exact estimates of how heritable various traits are: if all of them are flawed in different ways, how is it that they all consistently land on the same results? It’s like when young earth creationists critique the validity of carbon dating—do you really think it’s just sheer coincidence that carbon dating and helioseismic dating converge on exactly the same estimates for the Earth’s age? I’ll be addressing more general background on twin, adoption, and family studies as well as the critiques that have been made of them in the future. For now, I’m going to take their validity for granted and simply discuss what the research has shown.

In men, studies find that anywhere from 40% to 60% of the likelihood of divorce is the result of “genetic factors affecting personality.” More generally, a person’s “sociosexual orientation” is clearly found to be very highly heritable. Individuals are classified on this scale as either “sociosexually restricted” or “sociosexually unrestricted”. An ordinary person might simply call them “chaste” or “promiscuous”: so–called “unrestricted” individuals are more likely to engage in sex earlier in relationships, engage in sex with more than one partner at a time, seek sex for its own sake, and engage in it in relationships involving less love, dependency, and commitment.

Twin, adoption, and family studies are able to separate the role of heredity, “shared environment” (which essentially means “parenting”), and “non–shared environment” (which essentially means everything else) in the development of various behavioral and personality traits. The conservative argument about monogamy is severely damaged not just by the fact that divorce and sociosexuality have such a large genetic component, but by the fact that all indications so far reveal almost zero effect on these traits from one’s parenting, even once the influence of genes is taken out of the picture: what’s left over after genes are accounted for falls almost entirely into “non–shared environment”—a category which roughly means “we don’t know what it is, but it isn’t genes or parenting.”

As one of the studies quoted in the last paragraph states in its conclusion, “Consistent with genetic theory, familial resemblance [in sociosexuality] appeared primarily due to additive genetic rather than shared environmental factors.”

Shared environmental factors: that means parenting.
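For readers who want to see the mechanics, here is a minimal sketch of the classic twin–study decomposition into genes (A), shared environment (C), and non–shared environment (E), using Falconer’s formulas; the two correlations below are hypothetical placeholders, not figures from any study cited in this post:

```python
# Falconer's formulas: because identical (MZ) twins share ~100% of their genes
# and fraternal (DZ) twins ~50%, the gap between the two correlations
# estimates the additive genetic contribution.
r_mz = 0.50  # hypothetical identical-twin correlation for some trait
r_dz = 0.30  # hypothetical fraternal-twin correlation

A = 2 * (r_mz - r_dz)  # additive genetic variance ("heritability")
C = r_mz - A           # shared environment (roughly: parenting/household)
E = 1 - r_mz           # non-shared environment (everything else, plus noise)

print(f"A (genes) = {A:.2f}, C (shared env.) = {C:.2f}, E (non-shared) = {E:.2f}")
```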

Another study compared children who experience their biological parents’ divorce with children who experience their adoptive parents’ divorce, and found that “adopted children who experienced their (adoptive) parents’ divorces exhibited elevated levels of behavioral problems and substance use compared with adoptees whose parents did not separate, but there were no differences on achievement and social competence.” While some behavioral problems (but not others) do result from experiencing one’s adoptive parents’ divorce, it isn’t the experience of divorce (or growing up in a single–parent family) that molds a child’s core personality. The illusion that it is happens because, in most cases, a child both undergoes the experience of divorce and inherits his genes from the divorcing parents. But this illusion becomes untangled when adoptive children experience their adoptive parents’ divorce—some short–term behavioral problems result, but not others; and most importantly, these behavioral changes do not appear to last the way they do in ordinary cases where a child undergoes his biological parents’ divorce.

Yet another study found that once the criminal behavior of single parents was actually controlled for, the association between single parent families and crime disappeared entirely. So the offspring of single parents are more criminal because their parents tend to be criminal.  And clearly, if being raised by one criminal parent produces poor outcomes for children, then being raised by two of them can’t be much better.

So the evidence suggests that the correlation between poverty and crime is accounted for by “unobserved familial risk factors”—and it also establishes very clearly that, in general, the individuals within families are similar in the ways that they are in large part because of their shared genes, and very specifically not because of their shared upbringing. It shows this even in the specific cases of divorce and sociosexuality. Thus, poverty and crime can’t correlate with each other because each is the causal result of broken homes. Poverty, crime, and out–of–wedlock birth must therefore correlate with each other to the extent that they do because all three are the result of other things that tend to cause all three. But the only causes consistently found so far are genetic—and most of what isn’t genetic, as far as we’re able to tell, is simply random (again, for more, see Jayman’s Blog).

Of course, the correlation between single–parent families and crime actually has become weak, though it may have appeared stronger when the theory first originated. While it’s true that both crime and single parenthood rose together from around 1960 to 1990, this relationship decoupled during the massive crime decline of the 1990s—when crime fell tremendously even as single parenthood continued its decades–long gradual rise.



Now, many liberal commentators (like biological anthropologist Greg Laden) were too quick to pick up on the above chart as proof that there is no relationship between single parenthood and violent crime. To see why more evidence is needed before we can reach that conclusion, picture a chart with an x–axis titled “how long my broken faucet has been running” that starts at 12:00pm and ends at 1:00pm, and a y–axis titled “how much water is spraying out towards my floor”—measured by quantifying the amount of water actually landing on my floor. My faucet stays on for the full hour, but around 12:30pm the relationship stops being linear because, suddenly, the amount of water on my floor decreases. Does this chart refute the notion that, all else equal, keeping my broken faucet running increases the amount of water spraying towards my floor?

Of course not. If the relationship decouples, we can’t immediately assume that keeping the faucet on was never increasing the amount of water spraying towards my floor. Maybe what happened is that around 12:30pm, I became more diligent at stopping the water on its way towards my floor before it actually got there—say, because I put down buckets and mopped up the floor with towels. If changes took place during the ’90s in how we tackle violent crime once it already exists, then perhaps we just became more efficient at fighting the crime that single parenthood was helping to create. And in fact, something like this did happen: in 1972, only 158 out of 100,000 people were in prison or jail; by 1991, that had roughly doubled to about 311 out of 100,000.
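The faucet analogy is easy to simulate. Here is a minimal sketch (all numbers invented) of how a constant causal input can look “decoupled” from its measured effect once suppression ramps up midway through:

```python
# A constant cause (water sprayed toward the floor) whose measured effect
# (water actually reaching the floor) drops once mitigation kicks in.
inflow = 10.0  # units of water sprayed toward the floor per minute

for minute in range(0, 60, 10):
    # before minute 30, nothing is caught; afterwards, buckets/towels catch 70%
    caught_fraction = 0.0 if minute < 30 else 0.7
    on_floor = inflow * (1 - caught_fraction)
    print(f"minute {minute:2d}: {on_floor:.1f} units of water reach the floor")
# The cause never changed, yet the measured outcome falls halfway through.
```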

So perhaps what this chart proves is only that the criminal justice system, by becoming more aggressive, also became much more effective at reducing the crime that was being produced by, amongst other things, single–parent families. Liberal commentators like Greg Laden are being profoundly dishonest when they wag their fingers at the other side of the aisle without first considering this possibility. Unlike the relationship between poverty and crime, there is at least a long stretch of time during which the two variables rise together. And unlike the relationship between poverty and crime, we don’t repeatedly see shifts in which single parenthood goes down (or up) and yet crime goes up (or down).

However, the best–controlled analysis shows that, just like the relationship between poverty and crime, there is only a tiny relationship left over once other factors have been controlled for. A 2009 meta–analysis of previous meta–analyses which looked at individuals found that less than 1% of the population’s variation in criminality could be explained by family structure (although studies which looked at different world regions did find much higher geographic correlations between single parenthood and crime—in other words, in places where we find lots of single parents, we’ll also find lots of crime. The fact that we find a strong ‘geographic’ correlation combined with a poor ‘historical’ correlation supports the claim that whatever correlation does exist exists only because of some other “hidden variables” that tend to come together, but don’t come together necessarily).
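For context on what “less than 1% of variation” means (assuming the meta–analysis is reporting variance explained, i.e. r²): the share of variance a factor explains is the square of its correlation with the outcome, so under 1% of variance corresponds to a correlation below about 0.1:

```python
# Variance explained is the square of the correlation coefficient.
for r in (0.05, 0.10, 0.30):
    print(f"correlation r = {r:.2f} -> variance explained = {r * r:.1%}")
```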

Single–parenthood is proposed as an explanation of criminality, first and foremost, because rates of both single–parenthood and criminal behavior are higher in black populations. Yet, it is clear now that changes in single–parenthood rates do not actually correlate well with changes in rates of crime. So it turns out that high rates of single parenthood in black communities can’t explain why crime rates tend to be higher in these areas, either.

Surprisingly, of the poorest ten counties in the United States, none contain black majorities. Most are either Indian reservations (like Ziebach County, South Dakota, which is ~72% Native American with ~62% of the population in poverty) or Appalachian counties with large white majorities (like Owsley County, Kentucky, ~99% white with an annual median household income under $22,000). Despite the poverty rates across this second group of poor counties, however: “There’s a great deal of drug use, welfare fraud, and the like, but the overall crime rate throughout Appalachia is about two-thirds the national average, and the rate of violent crime is half the national average, according to the National Criminal Justice Reference Service.”

However, the population density of Owsley County is just 24 people per square mile.

In contrast, Chicago has a population density of over 11,000 people per square mile.

In his “Reflections on the Politics of Crime”, Tim Wise emphasizes a few citations which suggest that “concentrated poverty” (high population density in poor neighborhoods) is the real key to the link between poverty and violence: most poor whites live in places that are less poor overall than the places most poor blacks live (in other words, they live closer to wealthier people), while most poor blacks live close to many other poor blacks.

But why should living closer to other poor people increase the likelihood of a poor person committing a violent crime? On the face of it, this seems like a rather ad hoc attempt at explanation: had we found that living in richer areas increases violent crime amongst the poor, it would have seemed just as natural to suppose that living in proximity to richer people both increases the relative indignity of being poor while surrounded by wealth, and increases the opportunities those poor persons have to commit crimes ‘worth’ committing. 

In Crime & Human Nature, Wilson and Herrnstein discuss a community that had high poverty and high population density and faced large amounts of racial discrimination, without concurrent high crime rates: “During the 1960s, one neighborhood in San Francisco had the lowest income, the highest unemployment rate, the highest proportion of families with incomes under $4,000 per year, the least educational attainment, the highest tuberculosis rate, and the highest proportion of substandard housing of any area of the city. That neighborhood was called Chinatown. Yet in 1965, there were only five persons of Chinese ancestry committed to prison in the entire state of California.

The low rates of crime among Orientals living in the United States was once a frequent topic of social science investigation. The theme of many of the reports that emerged was that crime rates were low not in spite of ghetto life but because of it. Though Orientals were the object of racist opinion and legislation, they were thought to have low crime rates because they lived in cohesive, isolated communities. The Chinese were for many years denied access to the public schools of California, not allowed to testify against whites in trials, and made the object of discriminatory taxation. The Japanese faced not only these barriers but in addition were “relocated” from their homes during World War II and sent to camps in the desert on the suspicion that some of them might have become spies or saboteurs.

There was crime enough in the nineteenth– and early–twentieth–century Oriental communities of California, but not in proportion to the Oriental fraction of the whole population. The arrest rate of Chinese and Japanese was higher in San Francisco than in any other California city during the 1920s, but even so Orientals were underrepresented by a factor of two, the Japanese more so than the Chinese. … What is striking is that the argument used by social scientists to explain low crime rates among Orientals—namely, being separate from the larger society—has been the same as the argument used to explain high rates among blacks. The experience of the Chinese and Japanese suggests that social isolation, substandard living conditions, and general poverty are not invariably associated with high rates of crime among racially distinct groups.”

So Tim Wise’s explanation really is deeply ad hoc and therefore fails, as well. Why would concentrated poverty lead to higher crime rates amongst blacks, but not amongst Asians? The answer must lie somewhere else.

In any case, there’s a detour worth taking here. One of Wise’s key citations is an essay by Johnson and Chanhatasilpa in Darnell Hawkins’ 2003 anthology, Violent Crime: Assessing Race and Ethnic Differences, and the mechanism by which they propose that concentrated poverty leads to crime is interesting. Pay close attention.

They open with a summary of previous research: “A community that shows collective and reciprocal willingness to combat crime and disorder (“you watch my back and I’ll watch yours”) will be far less likely than its spatial counterparts to experience crime …. social networks are the foundation of informal controls because they facilitate collective action through networks of friendship and kinship ties….” They introduce and define the term “community control” as “the capacity of communities to wield social control”, and they state the hypothesis that “structural disadvantages [such as concentrated poverty] increase homicide rates in communities through their deleterious impact on community control….” So how do they measure “community control”? They create their measurement out of three different things: “(1) the percentage of owner occupied housing units; (2) the rate of residential stability; and (3) the percentage of children living in husband–wife households.” (p.96)
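As an aside on method: a composite index like this is typically built by standardizing each component and then averaging them. A generic sketch (the variable names follow the three components listed above; all values are invented for illustration):

```python
# Building a composite index from three standardized components.
from statistics import mean, pstdev

def zscores(xs):
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical values for four neighborhoods:
owner_occupied = [0.40, 0.55, 0.30, 0.70]        # share of owner-occupied units
residential_stability = [0.60, 0.75, 0.50, 0.80]
two_parent_households = [0.45, 0.65, 0.35, 0.75]

components = [zscores(c) for c in (owner_occupied, residential_stability,
                                   two_parent_households)]
community_control = [mean(vals) for vals in zip(*components)]
print([round(v, 2) for v in community_control])  # one index value per neighborhood
```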

Hold on just a second. The irony here is actually hilarious.

Tim Wise is on the record as attacking the notion that out–of–wedlock births play any part in social dysfunction in the black community because, as he explains, the actual birth rate amongst unmarried black women has fallen—it’s just fallen faster amongst married black women. And that means the percentage of births out–of–wedlock has risen, even though the actual number of births out–of–wedlock hasn’t. He’s right, but here is why that is still actually an idiotic objection: if the black community is becoming increasingly dysfunctional, what that means is that there are a greater percentage of dysfunctional individuals within the black community than there were before. And if it were true that single–parent families produced dysfunction, then a higher percentage of births to single parents absolutely would explain why the black community today is more dysfunctional, whether the absolute numbers fell or not. A smaller but more dysfunctional black community would still be a more dysfunctional black community.

The claim that out–of–wedlock birth is responsible for crime is, as we’ve seen, generally (though not completely) false. Tim Wise may object to it on the basis of an absurd fallacy that can be dispensed with in a single paragraph; but he does object to it—and yet his key citation for the claim that “concentrated poverty” is the real cause of crime actually argues that it does so, in large part … by increasing the percentage of out–of–wedlock births.  Did he not read far enough to notice that, or did he decide not to mention it to his audience on purpose?

Well, that strikes down one of the three measurements Johnson and Chanhatasilpa used in their essay—and Tim Wise would presumably even agree with me that the correlation between out–of–wedlock birth and crime is insufficient to prove that the former is the cause of the latter. Further, we have overwhelmingly good reasons from elsewhere (twin studies, adoption studies, comparison of the children of divorced parents with children whose adoptive parents divorce) to conclude that it isn’t. It should be clear enough that a correlation between residential stability or home ownership and crime raises exactly the same kinds of issues. Criminals are likely to be bad residents, and not only are bad residents far more likely to get themselves thrown out of their apartments, but non–criminals are likely to want to move away from them as well. Both of these effects would contribute to low rates of “residential stability.” What evidence can they provide that the effect of residential instability causing crime is stronger than the effect of crime causing residential instability? So far as I can tell, they have none.

And that brings us back to the research of Amir Sariaslan. The 2014 study previously mentioned controlled for familial confounding in the association between childhood income levels and adult criminality and substance abuse, and found that the association disappeared completely. A 2013 study conducted by Sariaslan and a similar team did the same thing for neighborhood deprivation and young adult criminality and substance abuse. Once again, the team found that when they “adjusted for unobserved familial confounders, the effect was no longer present…. Similar results were observed for substance misuse. … the adverse effect of neighbourhood deprivation on adolescent violent criminality and substance misuse in Sweden was not consistent with a causal inference. Instead, our findings highlight the need to control for familial confounding in multilevel studies of criminality and substance misuse.”

In other words, criminal behavior runs in families. And the association between poverty in childhood or in neighborhoods and crime disappears completely once this is controlled for. The vast majority of research on these questions in social science has simply ignored this and failed to control for familial confounding entirely.

So is the problem with criminal families genes or parenting? The real answer, at last.

Just as we described earlier that twin studies, adoption studies, and family studies all support the idea that the risk of divorce, and promiscuity in general, are heavily influenced by genes but influenced almost not at all by parenting, so the same thing goes for criminality. Biological children of criminals adopted into non–criminal adoptive homes have approximately the same risk of becoming criminals as children born to criminal parents in general do, rather than the risk of becoming criminals that children raised by non–criminal parents in general do. And when we calculate how much more likely an identical twin is to have a criminal status similar to their twin’s, and compare that to the likelihood that a fraternal twin will have a criminal status similar to their twin’s, not only do we get numbers that line up with exactly what we would expect if there were a genetic component at play, we get estimates of the heritability of criminal tendencies that line up exactly with what was already being found by the adoption studies. And so on.

The truly important point here is this: we know that violent and criminal behavior are heritable, regardless of how extensive our knowledge is of what the particular genes are or how they make their contribution to criminality. We know this by the same means we know that everything else we know to be heritable is heritable: by studying whether adopted children become more like their adoptive or biological parents as they grow into adults, by measuring how much more similar identical twins are on given traits than fraternal twins, by measuring how similar identical twins who were raised apart are compared to random members of the population, and so on.

The twin studies find that: “Genetic factors, but not the common environment, significantly influenced whether subjects were ever arrested after age 15, whether subjects were arrested more than once after age 15, and later criminal behaviour. The common environment, but not genetic factors, significantly influenced early criminal behaviour. The environment shared by the twins has an important influence on criminality while the twins are in that environment, but the shared environmental influence does not persist after the individual has left that environment.” What this means is that while being raised by criminal parents might make a child more likely to commit a criminal action as a very young teen, it has zero impact on a child’s likelihood of becoming a criminal as a (young) adult. Meanwhile, exemplary adoption studies find that adoptive children with criminal biological mothers have a 50% chance of later criminal behavior, compared to just 5% for the adopted children of non–criminals. Again, some of the best introductions can be found at Quillette: How Criminologists Who Study Biology Are Shunned By Their Field, and Criminology’s Wonderland: Why (Almost) Everything You Know About Crime is Wrong.

It’s not necessarily clear just what is being inherited when criminal tendencies are passed on, and we’re far from any complete knowledge of the range of genes involved. However, science is increasingly closing in on the answers.

We have identified a variety of genes that influence biological features which we know to play a role in criminal behavior. We also know that some of these genes are present in different ethnic groups in almost exactly the proportions at which these populations are represented in violent crime (higher in blacks, and lower in Asians).

In 1993, we discovered a condition now known as Brunner syndrome. Brunner syndrome was first identified in a single Dutch family, all of whom were found to react to perceived provocation with extreme aggression; 5 were arsonists, and 5 had been convicted of rape and/or murder. It turned out that all 14 males originally studied had a mutation that caused the complete eradication of an enzyme called MAOA, which is responsible for breaking down neurotransmitters inside the brain, including dopamine and adrenaline. Other research soon confirmed that you could even knock this same gene out in mice and produce similar kinds of aggression.

While Brunner syndrome is incredibly rare, with just three families across the world now known to contain victims of the disease, the rest of the human population has genes coding for either low, medium, or high levels of MAOA activity (the 2–repeat, 3–repeat, or 4–repeat alleles, respectively). [Note: the established convention is to use the term “MAOA–L” to refer to either the 2–repeat or the 3–repeat genes, but by grouping the “low” and “medium” activities together, this convention obscures just how significant the difference between all three really is.]

Early research found that people with low–activity MAOA genes were more violent if they had difficult upbringings—but as the research continued, it confirmed that people with low–activity MAOA genes were indeed significantly more violent regardless of their childhood experiences. Other research, in fact, continued linking the same gene to things like credit card debt and even obesity—all behaviors which revolve around impulsiveness.

The 2–repeat version of the gene was found to double the risk of violent delinquency in young adulthood compared to the other two variants. And guess what? The 2R allele is found in “5.5% of Black men, 0.1% of Caucasian men, and 0.00067% of Asian men”—which just so happens to correspond eerily to ethnic rates of violent crime. And lest anyone worry that low–activity MAOA genes merely correlate with violence because black Americans are more violent and also just coincidentally happen to have more of them, other research has looked at black Americans with and without low–activity genes and still found substantially more violence in 2R carriers.

(For rebuttal of common criticisms of MAOA studies, see the archives of The Unsilenced Science).

Similarly, the potential “triggers” for someone with low–activity MAOA genes turning violent (particularly carriers of the 3–repeat, which is somewhat less associated with violence on its own) expanded to include testosterone—and testosterone levels differ by race as well. A 1986 study found that the “twofold difference in prostate cancer risk” between black and white men could be explained by the “15% higher testosterone level” found in Black men.

But circulating levels of testosterone are not the only variable of interest. Many other factors, including enzyme activity and hormone exposure in utero, influence the impact of circulating hormones as well—and on these measures, too, we find generally consistent patterns in which Black subjects have the most androgenic hormone profile while East Asian subjects have the least, with White subjects somewhere in between. A 1992 study found that “white and black men had significantly higher values of 3 alpha, 17 beta androstanediol glucuronide (31% and 25% higher, respectively) and androsterone glucuronide (50% and 41% higher, respectively) than Japanese subjects”—these being markers of the enzyme activity that converts testosterone into the more physiologically active hormone DHT.

Even further, It Is Not Just About Testosterone tells us that: “Vasopressin synthesis and the aromatization into estradiol both serve to facilitate testosterone’s effects.” So, guess what? “Vasopressin secretion in normotensive black and white men and women on normal and low sodium diets” found that “24-h urinary excretion of vasopressin was significantly (P<0·05) higher in men than in women and higher (P<0·05) in black than in white subjects.” And other studies confirm that Black children are exposed to higher hormone levels in utero—this one found “higher testosterone [and] ratio of testosterone to SHBG … in African–American compared to white female neonates”.

This last study is very significant.

We know that hormone exposure in the womb has drastic impacts on future behavior: Girls with congenital adrenal hyperplasia, a condition that only briefly spikes the level of hormones a developing girl is exposed to, have significantly more masculine behavioral traits despite the fact that there is no evidence that parents treat them any differently, or that there is anything different about them or the way they are “socialized” other than excess prenatal male hormone exposure. As found in a 2003 study of “Prenatal androgens and gender-typed behavior”, girls with CAH “were more interested in masculine toys and less interested in feminine toys and were more likely to report having male playmates and to wish for masculine careers. Parents of girls with CAH rated their daughters’ behaviors as more boylike than did parents of unaffected girls. A relation was found between disease severity and behavior indicating that more severely affected CAH girls were more interested in masculine toys and careers. No parental influence could be demonstrated on play behavior, nor did the comparison of parents’ ratings of wished for behavior versus perceived behavior in their daughters indicate an effect of parental expectations. The results are interpreted as supporting a biological contribution to differences in play behavior between girls with and without CAH.”

There is no reason to think that if out–of–wedlock birth and violent crime correlate due to genes, this must be because both behaviors are influenced by the same genes. It could be that distinct genes contribute to each behavior and simply tend to travel together, with people who carry the first set often carrying the second. However, it is at least plausible that the factors briefly identified here (testosterone and MAOA) could play a common role in producing both violent behavior and out–of–wedlock birth.

To my knowledge, outside the finding that persons with Brunner’s syndrome can be prone to hypersexuality, MAOA has never been studied in relation to sociosexuality directly. However, it seems fairly safe to infer that the kind of impulsivity which would lead a person to rack up credit card debt, or eat their way to obesity, or commit impulsive violent crimes, would also leave them prone to impregnate someone they haven’t married or to end up divorced. As for testosterone, the studies are clear: “people’s orientations toward sexual relationships, in combination with their relationship status, are associated with individual differences in testosterone.” More specifically, in “chaste” individuals with a restricted sociosexual orientation, testosterone rises when single but falls after acquiring a partner. This doesn’t happen for those with an “unrestricted” orientation: as this study describes it, “partnered men who reported greater desire for uncommitted sexual activity had testosterone levels that were comparable to those of single men; partnered women who reported more frequent uncommitted sexual behavior had testosterone levels that were comparable to those of single women.”

Beyond that, we know that psychopathy both has a biological basis (psychopaths have a lower physiological response to their environments; in other words, it takes more to stimulate them) and is heritable, and we know that psychopaths “are twenty to twenty-five times more likely than non-psychopaths to be in prison, four to eight times more likely to violently recidivate compared to non-psychopaths, and are resistant to most forms of treatment” with “93% of adult male psychopaths in the United States in prison, jail, parole, or probation.”

We also know psychopaths are more likely to seek casual sex and avoid relationships, which inevitably produces greater out–of–wedlock birth rates. The evidence, then, that unstable childhood environments produce criminals is weak: while unstable environments may raise the risk of criminal behavior during childhood, that effect barely lasts into adulthood, if at all. In contrast, the evidence is very strong that there are genes which predispose a person to commit violent crimes, produce children out of wedlock, and divorce, and that these genes are passed on to the children produced by these relationships regardless of their upbringing. The facts aren’t particularly favorable to religious social conservatives, liberals, or men’s rights activists: poverty (the liberals’ favored culprit) doesn’t seem to be the primary cause of crime, but neither is single–parenthood (the religious social conservatives’) or a lack of fathers (the MRAs’).

Could the rate of psychopathy differ by race as well? I don’t know, but I was able to find some small indication that it might: judgment and the ability to discern smells are both localized to the frontal lobes of the brain, and research has linked poor sense of smell to psychopathy and aggression. Meanwhile, other research finds that men on average have a worse sense of smell than women—and blacks on average have a worse sense of smell than whites. (Update: See Razib Khan’s discussion of Lynn’s 2002 paper ‘Racial and ethnic differences in psychopathic personality’ and Skeem’s 2004 critical meta–analysis ‘Are there ethnic differences in levels of psychopathy?’).

Another study, this time in Finns, found that in addition to MAOA–L, a mutation of another gene known as CDH13 was heavily linked to extreme violent crime, and the effect of combining the two genes was more than additive. Meanwhile, a study in white and Hispanic Americans linked CDH13 to “a younger age of sexual debut”.

To reiterate: we don’t fully understand what is being inherited when criminality is inherited. But the converging results of decades of twin, adoption, and family studies all tell us that criminality is highly heritable, however well or poorly we understand the mechanisms of that heritability. The truth of this knowledge does not depend on the relevance of MAOA, testosterone, or psychopathy genes in particular, although I happen to think that very strong cases can be made for all of them. Likewise, we don’t fully understand what is being inherited when promiscuous tendencies are inherited, but we know that promiscuity is highly heritable all the same. And even the cursory evidence behavioral genetics has produced so far suggests several known mechanisms that might not only be the culprits, but might even explain why some behaviors (like out–of–wedlock birth) tend to correlate with others (like violent criminality).
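For readers wondering how a heritability estimate can exist at all before any mechanism is identified, it may help to see the classical arithmetic. Below is a minimal sketch of Falconer’s formula, which estimates heritability purely by comparing how strongly identical (MZ) versus fraternal (DZ) twins correlate on a trait; the input correlations are invented for illustration and are not taken from any study cited here:

```python
# A minimal sketch of Falconer's formula: the classical way twin studies
# turn MZ/DZ correlations into a heritability estimate. The correlations
# used below are illustrative placeholders only.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Partition trait variance into additive genetics (A), shared
    environment (C), and non-shared environment (E)."""
    a = 2 * (r_mz - r_dz)  # MZ twins share roughly twice the genes DZ twins do
    c = r_mz - a           # shared (family) environment
    e = 1 - r_mz           # non-shared environment plus measurement error
    return {"A": round(a, 3), "C": round(c, 3), "E": round(e, 3)}

# Hypothetical example: MZ twins correlate 0.60 on some antisocial-behavior
# measure and DZ twins correlate 0.35.
print(falconer(0.60, 0.35))  # {'A': 0.5, 'C': 0.1, 'E': 0.4}
```

The point of the sketch is that the estimate falls out of a comparison of correlations; no particular gene needs to be identified for it to hold.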

Divorce and out–of–wedlock birth may produce behavioral problems, but for the most part sociosexual behavior in parents and children correlates because of genes, not experiences; and the behavioral problems that result from divorce and out–of–wedlock birth per se appear not to last beyond childhood. There is perhaps a tiny impact of poverty on property crime, but none on violent crime. Genes are not deterministic, but of all the measurable influences on behavior, genetic heredity is by far the strongest that can be verified.

A few disclaimers would, in an ideal world, go without saying: the vast majority of men are neither violent criminals nor psychopaths; likewise, the vast majority of black people are neither violent criminals nor psychopaths. Nowhere in any of this reasoning should license be taken for blanket prejudice against all men, or against all blacks. The baseline rate of risk matters. Even if a man (or black) is 10x more likely to murder you on the street than a woman (or white), if your actual risk of being murdered on the street by a man (or black) is 0.0001% and your actual risk of being murdered on the street by a woman (or white) is 0.00001%, then this hardly justifies viewing all men (or blacks) with suspicion and giving all women (or whites) a free pass. It is a minority of all people who are violence–prone. It is a minority of all men who are violence–prone, and a minority of all blacks. But the minority of men who are violence–prone appears to be larger than the minority of women; the minority of blacks who are violence–prone appears to be larger than the minority of whites; and that minority of whites in turn appears to be larger than the minority of Asians. And though no one explanation reveals everything, the strongest of all the explanations we do have is hereditarian. If stating these facts makes me racist, then it apparently also makes me twice as sexist (against myself), because the gap between men and women in violent crime is even larger than the gap between blacks and whites.
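To make the base–rate point concrete, here is the arithmetic of the hypothetical figures above, written out (these are the essay’s illustrative numbers, not measured rates):

```python
# The base-rate arithmetic from the paragraph above, using the essay's own
# hypothetical figures (illustrative numbers, not measured rates).

relative_risk = 10                    # one group is 10x more likely to offend
risk_low = 0.00001 / 100              # 0.00001% absolute risk, lower-risk group
risk_high = risk_low * relative_risk  # 0.0001% absolute risk, higher-risk group

print(f"Higher-risk group: {risk_high:.5%} absolute risk")  # 0.00010%
print(f"Lower-risk group:  {risk_low:.5%} absolute risk")   # 0.00001%

# A 10x relative difference between two vanishingly small absolute risks
# leaves both risks vanishingly small: it justifies blanket suspicion of
# no one.
```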

Is there anything that we can do with this sort of information? Much of the resistance that forms against explanations of undesirable social phenomena which give genes a larger role than environment comes, I believe, from the impression that if genes are responsible, then there’s nothing we can do about it; it seems to be a recipe for resignation. And even if eugenics could, in theory, improve human outcomes and behavior, few of us would want to come anywhere near trusting the State with the amount of power it would take to attempt it (I certainly wouldn’t).

Fortunately, it isn’t true. In the future, I’ll discuss the various ways in which the ordinary policies we already contemplate can be evaluated for their “eugenic” and “dysgenic” impacts, and how the policies that turn out to be most beneficial in light of this analysis fit neatly into neither “conservative” nor “liberal” boxes. For example, if IQ and conscientiousness are heritable traits, then establishing maternity leave, a stereotypically “liberal” ideal, may encourage women with higher IQ and conscientiousness to have more children rather than forgo them for the sake of their careers, thus helping to raise the IQ and conscientiousness of the general population, with no specter of violent Nazi concentration camps to be feared. Some warrior–gene researchers have suggested another idea they think their evidence warrants: preventing former violent criminals from purchasing alcohol, because the association between MAOA–L and violence is often mediated by alcohol.

Whether or not that proposal would actually be effective, it’s an excellent example of the kind of idea we can begin to think about if the case laid out here is true. For another example, we could use our knowledge of the relationship between criminal behavior and genes to base sentencing lengths on verifiable statistics about the risk of re–offense (a toy sketch of what that might mean mechanically follows below). If anyone is afraid that an idea like this could be prone to abuse, they should remember that the current system is already rife with abuse: in one well–known study of parole judges, favorable rulings ran at around 65% immediately after each of the judges’ two daily food breaks and fell to nearly 0% just before the next break. Alternatives to a system that condemns a person based on how recently a judge ate lunch can hardly be much worse.
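As a purely illustrative sketch of what “sentencing anchored to re–offense statistics” could look like mechanically, consider the toy score below. Every factor and weight is invented for illustration; building and validating a real actuarial instrument is a separate and serious research problem:

```python
# A deliberately toy actuarial risk score. All factors and weights are
# invented for illustration; they are not drawn from any real instrument.

from dataclasses import dataclass

@dataclass
class Offender:
    prior_violent_convictions: int
    age_at_release: int

def toy_reoffense_score(o: Offender) -> float:
    """Return a 0-1 'risk score' from two predictors that recidivism
    research commonly examines: prior record and age at release."""
    score = 0.15 * min(o.prior_violent_convictions, 5)  # capped prior-record term
    if o.age_at_release < 30:                           # recidivism declines with age
        score += 0.25
    return min(score, 1.0)

# Hypothetical use: map score bands to sentence length or supervision
# intensity, instead of leaving outcomes to how recently a judge has eaten.
print(toy_reoffense_score(Offender(prior_violent_convictions=3, age_at_release=24)))
```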

Just as identifying the environment as the cause of some phenomenon allows us to start designing interventions that reduce the impact of that environmental cause, so identifying genes as the cause of some phenomenon allows us to start targeting interventions there, too. When it comes right down to it, the fear that acknowledging biological roots to human behavior must end in violent dystopia is simply bizarre, as soon as we consider that so many of the worst massacres of the 20th century were committed by blank slatists who believed exactly the contrary: that human nature could be reformed, through social control, to their will.

As Christopher Szabo at Intellectual Takeout asks, “Why are we so understanding towards the crimes of communism?” Including the death toll from famines, many of which were in fact engineered intentionally, the death rate under Maoist communism was roughly 2 million killed per year (a total of around 77,000,000 across 38 years). In contrast, the death rate under Hitler’s Reich was about 1.75 million killed per year (around 21 million across 12 years). And as Mao literally wrote, “In class society there is only human nature of a class character; there is no human nature above classes.” So if acknowledging a biological basis to human behavior is supposed to be discredited because it evokes the massacres of the Nazis, why shouldn’t denying it be discredited because it evokes the massacres of the Communists? I don’t seriously believe that sociologists who promote social constructionism should be tarred by association with Communist genocides, but unless you want to admit that the whole line of reasoning is bullshit, turnabout is fair play.




A Footnote to “Is the War on Drugs Racist?”

In that essay, I quoted Frank Zimring’s position on the impact of the war on drugs on violent crime as follows: he argues (pp. 90–99), correlating hospitalizations and overdose deaths with changes in the known street price, that overall use of cocaine appears to have remained relatively constant across the period in which New York City’s crime drop took place. Yet he notes (pp. 91–92) that “The peak rates of drug–involved homicide occurred in 1987 and 1988” (the same years in which 70% of arrestees tested positive for cocaine) “and the drop in the volume of such killings is steady and steep from 1993 to 2005. … The volume of drug–involved homicides in 2005 is only 5% of the number in 1990.” Meanwhile, whereas 70% of arrestees in the late 1980s tested positive for cocaine, by 1991 (see table 2 on page 14) that number had fallen to 62%, and by 1998 it had fallen all the way to 47.1%. By 2012 (see figure 3.7 on page 45) it had fallen even further, to 25%.

What happened here? Why would drug use amongst arrestees fall if drug use as a whole remained constant? Zimring has an important answer: “If I’m a drug seller in a public drug market and you’re a drug seller in a public market, we’re both going to want to go to the corner where most of the customers are. But that means that we are going to have conflict about who gets the corner. And when you have conflict and you’re in the drug business, you’re generally armed and violence happens. … Policing … [helped drive] drug trade from public to private space. … [this] reduced the risk of conflict and violence associated with contests over drug turf. The preventive impact [of these policies] on lethal violence seems substantially greater than its impact on drug use. … [And] once the police had eliminated public drug markets in the late 1990s, the manpower devoted to a special narcotics unit [whose funding had increased by 137% between 1990 and 1999] dropped quite substantially [and yet the policies’ impacts on homicide rates remained].”

However, Zimring is clearly incorrect that the drug war reduced drug–involved homicides without reducing drug use as a whole—the drug war reduced drug use, too.

Quoting James Q. Wilson in the Wall Street Journal in 2011: “Another shift that has probably helped to bring down crime is the decrease in heavy cocaine use in many states. … Between 1992 and 2009, the number of admissions for cocaine or crack use fell by nearly two-thirds. In 1999, 9.8% of 12th-grade students said that they had tried cocaine; by 2010, that figure had fallen to 5.5%.

What we really need to know, though, is not how many people tried coke but how many are heavy users. Casual users who regard coke as a party drug are probably less likely to commit serious crimes than heavy users who may resort to theft and violence to feed their craving. But a study by Jonathan Caulkins at Carnegie Mellon University found that the total demand for cocaine dropped between 1988 and 2010, with a sharp decline among both light and heavy users. … Drug use among blacks has changed even more dramatically than it has among the population as a whole. As Mr. Latzer points out—and his argument is confirmed by a study by Bruce D. Johnson, Andrew Golub and Eloise Dunlap—among 13,000 people arrested in Manhattan between 1987 and 1997, a disproportionate number of whom were black, those born between 1948 and 1969 were heavily involved with crack cocaine, but those born after 1969 used very little crack and instead smoked marijuana.

The reason was simple: The younger African-Americans had known many people who used crack and other hard drugs and wound up in prisons, hospitals and morgues. The risks of using marijuana were far less serious. This shift in drug use, if the New York City experience is borne out in other locations, can help to explain the fall in black inner-city crime rates after the early 1990s.”

Thus, because “drug use among blacks has changed even more dramatically than it has among the population as a whole”, if the black:white ratio of those in prison for drug offenses is larger than the black:white ratio of drug users in the general population, this may be because many of the disproportionately black cohort of cocaine users have already been arrested, to the benefit of the black population as a whole.
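Because this ratio comparison is easy to misread, a toy numerical illustration (every number invented) may help show how a prison ratio can exceed a current–user ratio without any differential enforcement at all:

```python
# A toy illustration (all numbers invented) of the stock-versus-flow point:
# prison populations reflect *past* use, while surveys count *current*
# users. If group A's older cohort used heavily and was largely arrested,
# and its younger cohort abandoned the drug, group A can dominate prisons
# while barely registering among current users.

old_users_a, old_users_b = 900, 300   # historical heavy users (A disproportionate)
arrest_rate = 0.5                     # identical arrest rate for both groups
new_users_a, new_users_b = 100, 300   # younger cohorts: group A's use collapsed

prison_a, prison_b = old_users_a * arrest_rate, old_users_b * arrest_rate

print(f"Prison ratio A:B       = {prison_a / prison_b:.2f}")       # 3.00
print(f"Current-user ratio A:B = {new_users_a / new_users_b:.2f}") # 0.33
```

Even with an identical arrest rate applied to both groups, the prison ratio (3:1) dwarfs the current–user ratio (1:3), purely because group A’s heavy users belonged to an earlier cohort.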

Similarly, “In a recent article in the American Sociological Review, my colleagues and I [Gary LaFree] found that a proxy measure of crack cocaine had a greater impact on big city crime than more common measures like unemployment.”

A 1994 study by Eric Baumer found that “… arrestee cocaine use has a positive and significant effect on city robbery rates, net of other predictors. The effect of arrestee cocaine use on homicide is more modest … [but] cocaine use elevates city violent crime rates beyond levels expected on the basis of known sociodemographic determinants.” And a 1997 Justice Department study found that “there was a very strong statistical correlation between changes in crack use in the criminal population and homicide rates … In five of the six study communities, … homicide rates track quite closely with cocaine use levels among the adult male arrestee population.”