
Are Sociologists More Employable Than a Pet Rock? A Case Study on Tim Wise

A 2006 article from Tim Wise, What Kind of Card is Race? The Absurdity (and Consistency) of White Denial, lays out the well–known “anti–racist” author’s central thesis: white people are in a self–serving state of denial about the overwhelming extent to which rampant, systematic racism is still responsible for violently holding non–white members of society down—and, by corollary, artificially propping white people up—because accepting the self–abasing truth that their relative successes are largely the result of racism, rather than any genuine hard work or achievement, would threaten white people’s self–image. A consistent undercurrent of Wise’s rhetoric is that we can’t move forward on these problems so long as white people are too smug to admit that the problems are as severe as Wise tells us they are—and Wise apparently sees it as an important part of his mission to knock this self–confidence down a few pegs in order to pave the way for a more sombre accounting of the piteous state of affairs.

The article mentions a few different studies. But I’ve found it to be a reliable rule of thumb that whenever I see a long list of studies plastered together to support a point, I’m probably going to find something surprising if I spend any significant amount of time digging into any particular one of them. I’ve only done that with one of the studies on this list, so that’s the one I want to talk about now. Wise writes: “That bringing up racism (even with copious documentation) is far from an effective ‘card’ to play in order to garner sympathy, is evidenced by the way in which few people even become aware of the studies confirming its existence. How many Americans do you figure have even heard, for example, (…) that persons with ‘white sounding names,’ according to a massive national study, are fifty percent more likely to be called back for a job interview than those with ‘black sounding names,’ even when all other credentials are the same?”

The study he refers to is Marianne Bertrand and Sendhil Mullainathan’s 2004 “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment in Labor Market Discrimination.” The importance of the study in shaping perceptions can’t be stressed enough: a search on the Web of Science database for the topic “race” in the area of “sociology” and the domain of “social sciences” shows that the rough average number of citations for the top 100 studies is around 600. “Are Emily and Greg More Employable than Lakisha and Jamal?” has been cited a whopping 1,555 times.

What the study claimed to have found is staggering. Quoting the National Bureau of Economic Research’s summary: “In total, the authors responded to more than 1,300 employment ads in the sales, administrative support, clerical, and customer services job categories, sending out nearly 5,000 resumes. The ads covered a large spectrum of job quality, from cashier work at retail establishments and clerical work in a mailroom to office and sales management positions. The results indicate large racial differences in callback rates to a phone line with a voice mailbox attached and a message recorded by someone of the appropriate race and gender.” The study, we are told, “indicates that a white name yields as many more callbacks as an additional eight years of experience.” An article at Salon reporting on the study said that “[We] have a national religion… [and] it’s Denialism. … [R]acism and white privilege dominate American society. … This truth is everywhere. … You can see it in a 2004 MIT study showing that job–seekers with ‘white names receive 50 percent more callbacks for interviews’ than job seekers with comparable résumés and African American–sounding names.”

First, a nearly universal impression (which the authors of the study and the articles summarizing it have apparently done little to correct) is that the study actually kept the resumes identical, except for race. Wise repeats the assumption almost directly when he says that “all other credentials” in this study (besides race) were “the same.” But this simply isn’t true. The study explains in section IIA, on page 994 of the AER (page 5 of the study), that it “begin[s] with resumes posted on two job search Web sites as the basis for [its] artificial resumes. (…) During this process, we classify the resumes within each detailed occupational category into two groups: high and low quality. In judging resume quality, we use criteria such as labor market experience, career profile, existence of gaps in employment, and skills listed. Such a classification is admittedly subjective … [T]o further reinforce the quality gap between the two sets of resumes, we add to each high-quality resume a subset of the following features: summer or while-at-school employment experience, volunteering experience, extra computer skills, certification degrees, foreign language skills, honors, or some military experience.”

Nowhere in the study is there actually any indication that the resumes were kept identical except for the racial connotation of the name of the applicant—and yet this claim was repeated widely throughout the media. Indeed, the fact that each “high quality” resume was given a different subset of the “high quality” features already proves that they were not, in fact, identical. Further, in section IIC, on page 996 of the AER (page 7 of the study), we read: “For each ad, we use the bank of resumes to sample four resumes (two high-quality and two low-quality) that fit the job description as closely as possible. In some cases, we slightly alter the resumes to improve the quality of the match, such as by adding the knowledge of a specific software program.” Yet again, we see that different resumes appear in fact to have had different qualifications. And it continues: “The final resumes are formatted, with fonts, layout, and cover letter style chosen at random.” We will come back to this later—it turns out to be potentially quite relevant, once we identify the study’s other flaws.

First, we have to ask how disparate the callback rates between candidates actually were. When summaries state that the study “responded to more than 1,300 employment ads . . . sending out nearly 5,000 resumes,” this seems to imply that we’re dealing with very large absolute differences in callback numbers. As it turns out, that just isn’t the case. The study explains on p.7 that it uses both male and female names for sales jobs, but uses female names “nearly exclusively” for administrative and clerical jobs in order to ensure higher overall callback rates. In total, of 5000 resumes, 1124 were male. Of this total, 9 racially distinct names per race were then used to create 575 white and 549 black resumes. If we assume these names were divided amongst the total for each race equally, this means about 62 resumes were sent out for each name. The title of the study compares “Greg” to “Jamal,” telling us that the former received a 7.8% callback rate while the latter received a 6.6% callback rate. (I’ll return to the question of the selection of names for the heading of the study momentarily.) If Greg received 7.8% callbacks out of 62 attempts, this means he received 5 actual calls. Meanwhile, if Jamal received 6.6% callbacks out of 62 attempts, this means he received 4 actual calls. The actual difference between them? One call. Yet, in the statistical framing often applied to the study, that one call represents “almost a 20% difference” (I’ll elaborate more on the way percent increase is calculated later as well).

The situation with Emily and Lakisha is only somewhat improved. Again, of 5000 resumes, 3876 were female. If we assume the 9 female names chosen for each race were distributed equally across 1938 white and 1938 black resumes, this means each name was sent out about 215 times. Emily’s 7.9% callback rate would thus translate into 17 actual calls; Lakisha’s 5.5% callback rate into 12 actual calls. Now it may look like we’re finding that the larger our sample size is, the more clearly we find the purported effect—the sample size just happens to be larger for the female applicants. But here is where it becomes relevant to look at the study authors’ choice of which names and associated percentages to use for the title; it turns out that the general findings are simply nowhere near as dramatic as the isolated examples implied by that selective choice of individual names.
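
As a sanity check on the arithmetic in the last two paragraphs, here is a minimal Python sketch. It assumes, as stated above, an equal division of resumes across the 9 names per race; the per–name counts are that assumption, not figures reported in the study:

    # Convert the published callback rates into approximate raw call counts.
    male_per_name = 1124 // 18      # 1124 male resumes / (9 names x 2 races) -> ~62
    female_per_name = 3876 // 18    # 3876 female resumes -> ~215 per name

    reported = {
        "Greg":    (0.078, male_per_name),
        "Jamal":   (0.066, male_per_name),
        "Emily":   (0.079, female_per_name),
        "Lakisha": (0.055, female_per_name),
    }
    for name, (rate, n) in reported.items():
        print(f"{name}: {rate:.1%} of {n} resumes = about {round(rate * n)} calls")
    # Greg ~5 vs. Jamal ~4 (one call apart); Emily ~17 vs. Lakisha ~12.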

In the chart on page 19, the percentage of callbacks acquired by each name is recorded. And if we look closely enough, we notice something startling: if we average five of the white female names, Emily, Anne, Jill, Allison, and Laurie together, we get a callback rate of 8.5%. If we average four of the black female names, Latoya, Kenya, Latonya, and Ebony together, we get a callback rate of 8.95%—a 5.3% larger callback rate for the African–American applicants (even though, again, in absolute terms this is a difference of only about 18 callbacks per white name versus 19 per black name). If we include Laurie and Tanisha, this only drops to 8.7% versus 8.3%—about 19 callbacks per white name versus 18 per black name. Why should this be? Why this overwhelming equality across more than half of the sample? Why should Kristen receive an overwhelming advantage over Anne—13.1% (or 28/215 calls) to Anne’s 8.3% (or 19/215 calls)? And why should Brad receive such an advantage over Todd—15.9% (or 10/63 calls) to Todd’s 5.9% (or 4/63 calls)? The difference in the callback rate within races is as large as most of the between–race differences. Why would employers prefer Jamal to Todd? Why would rampantly racist employers like Emily less than Kenya (which is the name of a black–majority country)?! And why would they discriminate against Aisha but not against Ebony, a name that literally means “black”?!
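
To make the within–race versus between–race comparison concrete, here is another small sketch, using only the callback rates quoted above (in percent):

    # Compare gaps within a race to gaps between races, in percentage points.
    gaps = {
        "Kristen vs. Anne (both white female)":  13.1 - 8.3,
        "Brad vs. Todd (both white male)":       15.9 - 5.9,
        "Emily vs. Lakisha (white vs. black)":    7.9 - 5.5,
        "Greg vs. Jamal (white vs. black)":       7.8 - 6.6,
    }
    for label, gap in gaps.items():
        print(f"{label}: {gap:.1f} percentage points")
    # The within-race gaps (4.8 and 10.0 points) dwarf the between-race
    # gaps (2.4 and 1.2 points) for these pairs.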

Why would Jermaine (9.6) and Leroy (9.4)—who together have an average 9.5% callback rate—beat Todd (5.9), Neil (6.6), Geoffrey (6.8), Brett (6.8), Brendan (7.7), Greg (7.8), and Matthew (9.0)—who together have an average 7.2% callback rate—by an additional 2.3 percentage points? Pay close attention to the often misleading way that percent increases are calculated in studies like these. The actual difference between 9.5% and 7.2% is 2.3 percentage points. But 2.3 is 32% of 7.2; so the percentage increase from 7.2 to 9.5 isn’t 2.3%, even though that is the absolute difference between them—it’s 32%. If the baseline risk of developing skin cancer is 0.005%, and one month of daily tanning bed use increases that risk by 50%, that sounds like a lot—enough to scare most people away from considering it. But what that actually means is that a mere 0.0025% (50% of 0.005) is added to the original, baseline risk of 0.005%, to arrive at a new risk of 0.0075%—or in other words, an extra 2 or 3 people per 100,000 who use a tanning bed daily for a month. In this case, the 32% advantage for the top two “black” names over the bottom seven “white” names simply represents an average of about 6 calls for each of these “black” names and an average of about 4.5 calls for each of these “white” names—or in other words, one additional call for every 42 attempts.
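
Here is a minimal sketch of the two framings of the same gap described above; the callback figures are those quoted from the study, and the tanning–bed numbers are the hypothetical ones from the example:

    # Absolute (percentage-point) vs. relative (percent-change) framing.
    top_black_avg = (9.6 + 9.4) / 2                   # Jermaine, Leroy -> 9.5
    white_avg = round((5.9 + 6.6 + 6.8 + 6.8 + 7.7 + 7.8 + 9.0) / 7, 1)  # -> 7.2

    absolute_gap = round(top_black_avg - white_avg, 1)   # 2.3 percentage points
    relative_gap = absolute_gap / white_avg * 100        # ~32% "increase"
    print(f"absolute: {absolute_gap} points; relative: {relative_gap:.0f}%")

    # The hypothetical tanning-bed example works the same way (rates in %):
    baseline = 0.005
    increased = baseline * 1.5                           # a "50% increase"
    print(f"extra cases per 100,000: {(increased - baseline) * 1000:.1f}")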

Bertrand summarizes the meaning of the study when she writes that: “Applicants with white names need to send about 10 resumes to get one callback. Applicants with black names need to send about 15 resumes to achieve the same result.” In fact, however, what the study actually found was that black applicants named Jermaine or Leroy apparently need to send about 10, while white applicants named Todd need to send about 17. White applicants named Neil, Geoffrey, or Brett need to send about 15. Black applicants named Kenya need to send about 11.5. And so on.
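
The “resumes needed per callback” figure is just the reciprocal of the callback rate. A quick sketch, using the per–name rates quoted above:

    # Resumes needed per callback = 1 / callback rate.
    rates = {"Jermaine": 0.096, "Leroy": 0.094, "Neil": 0.066,
             "Geoffrey": 0.068, "Brett": 0.068, "Todd": 0.059}
    for name, rate in rates.items():
        print(f"{name}: about {1 / rate:.0f} resumes per callback")
    # Jermaine ~10, Leroy ~11, Neil ~15, Geoffrey/Brett ~15, Todd ~17.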

In any case, the authors of the study themselves actually acknowledge that this is a problem for their thesis. On pp. 19–20, they write: “there is significant variation in callback rates by name. Of course, chance alone could produce such variation….” And this finally returns us full circle to the opening point: nowhere do the authors actually state that they did in fact send out identical resumes with only the names changed; and their description lends itself perfectly well to the interpretation that they chose existing resumes, altered them at their discretion, and then applied either a white or black name to each particular resume before tossing it into either a high–quality or low–quality pile (rinse and repeat for all applicants). If so, this would perfectly well explain why Brad would perform a full 10 percentage points better than Todd (nearly triple Todd’s rate), while Jermaine performs a full 6.6 points better than Rasheed, Kristen performs a full 5.2 points better than Emily, and Ebony performs a full 7.4 points better than Aisha: they all had different resumes. Otherwise, what could explain the shameless supremacy of Kristen and Ebony?

Thus, the actual implication of this finding would in fact be exactly the opposite of what it was universally taken to have proved: how an applicant presented themselves—whether by way of fonts, layouts, and cover letter styles, or by way of qualifications—actually had a far more significant impact on their likelihood of being called back in response to a job application than the racial connotations of their name. Only that—or else the findings of the study being no more than the simple result of chance alone—could explain why the variation in callback rates between names within the two racial categories was so much greater than the variation between the two categories taken as a whole.

_______ ~.::[༒]::.~ _______

Some of these problems were explained to Tim Wise by A. R. Ward in a debate that took place between the two of them. The section in which Ward summarized these criticisms was published around February of 2011. In January of 2012, Ward published the entire transcript of the whole debate on his website, writing: “After 5 rounds of back–and–forth I’ve decided to publish the debate for all to read. I’m still waiting for him to respond to my final entry (it’s been 9 months), and I’ll post his response if I get it.” On February 10, Ward updated the posting to link to Wise’s summary and state that Wise’s final reply was supposed to be on the way. Yet, as of May 2015, there is not a trace that Wise has ever returned.

In fact, the summary that Wise did upload on February 9 included only the first two rounds of the debate—leaving out the part in which Wise faced specific criticisms he has never once responded to—and yet Wise accuses Ward of “perhaps needing attention, [since he] decided to go ahead and publish an incredibly partial, truncated excerpt from the debate on his site….” Wise claims that he wants the reader to “see each completed round as it currently stands, rather than just snippets intended to make one debater seem particularly absurd and the other especially bright,” and goes on to promise that “Upon finishing up my final statement, I will post his closing and then mine, for a fully completed debate.” However, Wise is in fact the one truncating the most trenchant criticisms against his own claims out of his summary of the debate—and now more than two years later, there is no indication to be found that he has ever provided the promised response.

And yet as recently as March of 2014, this study was still at the top of just five references Wise chose to employ in one of his major public speeches, where presumably he would want to restrict his choices to only the most powerful pieces of evidence, to pack as much quality as possible into a limited quantity of time. Having been made directly aware of the depths of the problems with this study, Wise has not seen fit to address them in any detail anywhere—despite having pledged to—and yet he still sees fit to quote this study as one of the most compelling pieces of evidence of just how overwhelmingly systematic the influence of racism is in employment in the United States today; and his rhetoric has not shifted so much as to suggest any hint of comprehension that someone might not consider the study so obviously damning after all.

_______ ~.::[༒]::.~ _______

Do we have any other evidence suggesting what the impact of distinctively black names on employment prospects might be? As a matter of fact, we do. The Causes and Consequences of Distinctively Black Names was co-authored by Steven Levitt, the white economist of Freakonomics fame, along with Roland Fryer, a black economist who, after an abusive childhood involving one parent’s abandonment and the other’s physical abuse, became (to quote Freakonomics itself) “[a] full fledged gangster by his teens”—but later, in 1998, graduated magna cum laude from the University of Texas at Arlington while holding down a full–time job. In 2008, at the age of 30, he became the youngest African–American ever to receive tenure at Harvard. He also maintains an office at the W. E. B. Du Bois Institute.

What were the findings of this study? In brief: “(…) We find … no negative relationship between having a distinctively Black name and later life outcomes….” The data set for this study was, without question, overwhelmingly more comprehensive than that conjured by the Mullainathan study: The Causes and Consequences of Distinctively Black Names looked at birth certificate information for every single child born in California since 1961, covering more than 16 million births—that is, births of real living people, not conjured hypothetical ones. Steven Levitt, in an article for Slate, explains: “[H]ow much does your name really matter? Over the years, a series of studies have tried to measure how people perceive different names. Typically, a researcher would send two identical (and fake) résumés, one with a traditionally white name and the other with an immigrant or minority–sounding name, to potential employers. The ‘white’ résumés have always gleaned more job interviews. Such studies are tantalizing but severely limited, since they offer no real–world follow–up or analysis beyond the résumé stunt.

“The California names data, however, afford a more robust opportunity. By subjecting this data to the economist’s favorite magic trick—a statistical wonder known as regression analysis—it’s possible to tease out the effect of any one factor (in this case, a person’s first name) on her future education, income, and health.” And with these advantages, the study found “no relationship between how Black one’s name is and life outcomes….” [1] That is the finding of the single most overwhelmingly large study of the impact of the racial connotation of a person’s name on their chance of being hired, conducted on measurements of real people instead of extrapolation from fictional “résumé stunts.”

Notably, this study has been cited only 265 times, in comparison to the Mullainathan study’s 1,555.

That, for the record, is “an 83% reduction” in the citation rate.

_______ ~.::[༒]::.~ _______

In the time since this article was originally written, a new study has appeared (“Race and gender effects on employer interest in job applicants: new evidence from a resume field experiment”) which addresses the debates raised by these two studies directly. Specifically, its authors performed a “callback” study like Mullainathan’s, but this time corrected for the fact identified by Fryer and Levitt that distinctively black first names like “Precious” and “Tyrone” are correlated with poverty (blacks of the same socioeconomic status, with or without these names, having equivalent life outcomes). They did this by assigning the job applicants in their study distinctively black last names like Washington or Jefferson (up to 90% of people with these last names in the United States are black), and ambiguous first names used by both black and white Americans alike (Chloe, Ryan).

They found “little evidence of systematic employer preferences for applicants from particular race and gender groups.” They write: “Fryer and Levitt (2004) show that after taking into account the socioeconomic correlates of distinctively African-American sounding names, the large effect of these names on employer responses attenuates. Our findings provide evidence consistent with this point using newer, experimental data.” In other words, even if employers discriminate against “Precious Henderson” or “Tyrone Williams” because of the socioeconomic and cultural background their name indicates, it appears that they do not discriminate against “Morgan Jackson” or “Jordan Jenkins”.

_______ ~.::[༒]::.~ _______

[1] Yes; this study does leave open the possibility that race could be a significant variable even once other correlates are subtracted from it, since the study simply didn’t address this. But my purpose here is purely to address the Mullainathan study—and the Levitt & Fryer study most certainly does that. This post is not supposed to be an analysis of anything other than what I have clearly said it is supposed to be an analysis of: the soundness of that one study, Wise’s integrity in handling criticism pertaining specifically to it, and the ease with which such feeble data as the study actually contained can be unquestioningly transformed throughout the media into so much more than it actually is, without anyone stopping to notice the obvious.

The Levitt & Fryer study does, however, strongly suggest one thing that changes the analysis of the question of the impact of race on employment prospects when all else is held equal: to whatever extent employers in the real world discriminate against black applicants, they must discriminate about exactly as much against black applicants named “James” as they do against black applicants named “Jermaine”—or else about exactly as little. That can offer us a meaningful empirical basis for asking whether it’s more likely that employers discriminate against “James” significantly more, or “Jermaine” significantly less, on the basis of race (as opposed to any number of other factors that might correlate with race as well as with one’s name) than we would have expected. In fact, since the Koedel et al. study referred to above investigated discrimination against black applicants with names like “Chloe Washington” (a simple Google search reveals several images of women named Chloe Washington, and all of them are black—the same goes for Ryan Washington, with the exception of a white reporter named Ryan at the Washington Post), we already have the answer to that question.

Consciousness (XIII) — The Epistemology of Death

_______ ~.::[༒]::.~ _______

Part 1.

The first fact about the near death experience worth considering is that the very existence of the experience is already incredibly unlikely on the assumption that the physical structure of the brain “is,” or “produces,” the subjective experiences of the mind—in other words, the very existence of the near death experience provides evidence against the very assumption used to rule out the possibility that near death experiences could represent something “real.” Forget the fact that these are “near death” experiences—the most basic and fundamental reason to find the near death experience intriguing is quite simply that it should be surprising to the materialist that it happens at all. The sheer fact that experiences of this type are even capable of happening at the time at which they occur, period, itself provides reasonable probabilistic evidence against the hypothesis that first–person subjective, qualitative experience is produced by the otherwise blind motion of inert physical structures (in a brain or otherwise): a hypothesis which, throughout this series, I have adamantly contended is (1) a philosophical hypothesis to begin with, not a scientific one; (2) not clearly rendered more probably true by any particular scientific facts; and (3) opposed by entirely plausible, strong philosophical arguments, while strikingly lacking support from any particular strong philosophical arguments in its defense, given the fervency with which belief in it is so often held.

People who undergo Near Death Experiences describe them as feeling “more real than real.” And experiments confirm that memories of Near Death Experiences are indeed more vivid than memories of truly experienced events, and that recall of them looks nothing like recall of imagined memories when the two are compared in brain scans. Quite plainly, if subjective conscious experiences are without exception either the product of, or identical to, physical brain activity, then we should expect the subjective intensity of experience to correlate directly with the objective intensity of brain activity. Yet, in the Near Death Experience, this is categorically the opposite of what we actually see. “Cooper and Ring noted that [in ordinary waking life] a hallucination is accompanied by heightened brain activity. But their studies produced data showing that NDEs happened more often when neurophysiological activity was reduced, not increased. Sabom also found that NDEs were more likely when the person was unconscious for longer than 30 minutes; Ring found that the closer people were to physical death, the more extensive the NDE.” [1] And other research continues to confirm that NDEs tend to be deeper—even with more reports of “enhanced cognitive powers” (such as the “enhanced powers” of memory recall during the “life review”), no less—the closer the subject is to death.

As Sam Parnia and Peter Fenwick write, “The occurrence of lucid, well–structured thought processes together with reasoning, attention and memory recall of specific events during cardiac arrest (NDE) raise a number of interesting and perplexing questions regarding how such experiences could arise. These experiences appear to be occurring at a time when cerebral function can be described at best as severely impaired, and at worst absent.” Bruce Greyson concurs: “The paradoxical occurrence of heightened, lucid awareness and logical thought processes during a period of impaired cerebral perfusion raises particularly perplexing questions for our current understanding of consciousness and its relation to brain function. A clear sensorium and complex perceptual processes during a period of apparent clinical death challenge the concept that consciousness is localized exclusively in the brain.” That even the skeptics recognize this is supported by the observation that one of the most common skeptical approaches is to argue that the near death experience doesn’t actually happen during clinical death, but is reconstructed at some other time (however implausible this suggestion may be—for reasons we will see, as well as one we have already: recall of memories of the near death experience looks nothing like recall of imagined memories, and these memories consistently contain more details than memories of either real or imagined events).

_______ ~.::[༒]::.~ _______

1.1

The usual skeptical approach to the near death experience is to outline physical factors that can produce experiences with vague similarities to certain aspects of the near death experience. The effects of a sufficient dose of DMT can be similar to the typical NDE, for example—so perhaps the brain releases DMT as it approaches death. Depriving the brain of oxygen can loosely replicate some features as well, as can electrical stimulation applied to the temporal lobe.

The problem, however, is twofold: first, no particular one of these features comes anywhere close to capturing more than a vague resemblance to a small handful of the core characteristics of the near death experience; and second, near death experiences seem to be capable of happening in an extremely wide range of physical circumstances, so that any particular physical element which might be proposed to play a role in producing the experience appears necessarily to be entirely absent in some significant percentage of cases.

Most fundamentally, any attempt to explain the near death experience through physiological features will be undermined by the fact that NDEs can occur simply because death appears to be imminent, without the subject’s being physically near death at all—and when NDEs occur in these circumstances, they carry all the prototypical features of NDEs that occur when a subject is actually physically close to death. Meanwhile, researcher P. M. H. Atwater, pediatrician Melvin Morse, and ICU nurse Penny Sartori, amongst many others, have all documented the fact that NDEs happen in children under the age of five, and even in children as young as six months old—and in all cases, they carry all the same basic features as they do when they occur to adults. (For more on reports from children’s near death experiences, see: Bush, 1983; Gabbard & Twemlow, 1984; Herzog & Herrin, 1985; M. Morse, 1983, 1994; M. Morse, Conner, & Tyler, 1985; M. Morse et al., 1986; Serdahely, 1990.)

Dr. Jeffrey Long says of his own studies on near death experiences in young children: “ … their average age was 3–1/2 years old. These are children so young that to them, death is an abstraction. They don’t understand it. They can’t conceptualize it. They’ve almost never heard about near–death experiences; have no preconceived notions about that. They certainly have far less cultural influence, both in terms of religion or anything else that could even potentially modify the near-death experience at that tender young age. And yet looking at these same 33 elements of near–death experience that I did in other parts of this study, I found absolutely no statistical difference in their percentage of occurrence in very young children as compared to older children and adults.” (On a related note, NDEs also occur in people who are struck by death too quickly to have any concept of what is happening—say, through sudden cardiac arrest, or being hit by a vehicle they hadn’t realized was approaching.) Facts like these would seem to render a physiological account a more plausible way of dismissively explaining the NDE than a psychological account. And yet, once again, any physiological feature which might bear some relation to the NDE will be missing from many accounts—and some accounts will lack all of them. (Similarly, while positive experiences might theoretically be explained by things like wish–fulfillment, there are both “hellish” experiences and people who simply experience the ordinary phenomenology of the near death experience as hellishly terrifying in its own right.)

Capacity to experience the NDE is not limited by personality type. In The Handbook of Near Death Experiences (2009), Bruce Greyson and Janice Holden conclude a survey of the evidence for personality–type factors in NDEs: “[R]esearch has not yet revealed a [personality] characteristic that either guarantees or prohibits the occurrence, incidence, nature or after–effects of a near death experience.” People who have had NDEs do not differ from those who have not in terms of “sociodemographic variables, social support, quality of life, acceptance of their illness, [or] cognitive function (as assessed using a standard instrument, the Mini–Mental State Exam)….” And there is no correlation with prior religion or religiosity, even though “a significant correlation was found between the depth of the NDE and a subsequent increase both in the importance of religion and in religious activity.” Any psychological explanation of the NDE must face the fact that the core structure of the near death experience is as consistent as it is despite its occurrence not being related, in any way so far identified, to the subject’s prior expectations.

_______ ~.::[༒]::.~ _______

1.2

Blood and cerebral levels of oxygen and other gases like carbon dioxide play a major role in skeptical counter–explanations of the NDE. But reduction of oxygen levels to the brain produces confusion and impairs memory formation—yet, as already mentioned, near death experiences are almost always experienced vividly and remembered with striking clarity. Dr. Sam Parnia notes that people whose oxygen levels fall “become agitated and acutely confused … [and] develop ‘clouding of consciousness’ together with highly confused thought processes with little or no memory recall. … those who have NDEs have an excellent memory of the experience, which often stays with them for decades. … [they experience] the complete opposite of an acute confusional state.” Furthermore, “patients with low oxygen levels don’t report seeing a light, a tunnel, or any of the typical features of an NDE … this experience has never been reported by any other doctor or scientific study as a feature of a lack of oxygen.” Blood levels of both oxygen and carbon dioxide have been measured in NDE patients, and sometimes maintained by heart–lung machines—so we have good reason to believe NDEs have occurred in patients without abnormal levels. Blood levels of carbon dioxide may not accurately reflect levels present in the brain, so it is possible that this hasn’t ruled out a role for carbon dioxide; still, “raised carbon dioxide was an extremely common problem in clinical practice, [but] we hardly ever saw anyone have an NDE–type event. Also, there [have] been many studies … on the effects of increased carbon dioxide and these [have] not shown that it [leads] to NDE–like states.”

More importantly, the authors of a review in Frontiers in Human Neuroscience write: “In a sudden severe acute brain damage event such as cardiac arrest, there is no time for an experience of tunnel vision from retinal dysfunction, given that the brain is notably much more sensitive to anoxia and ischemia than peripheral organs … Fainting due to arterial hypotension—a common event—does not seem to be associated with the tunnel visions described in NDEs. … NDEs are not reported by patients using opioids for severe pain, while their cerebral adverse effects display an entirely different phenomenology in comparison to NDEs (Mercadante et al., 2004; Vella-Brincat and Macleod, 2007). Morse also found that NDE occurrence in children is independent from drug administration, including opioids (Morse et al., 1986). … Evidence against simple mechanistic interpretations comes also from a well-known prospective study by van Lommel et al. (2001), which showed no influence of given medication even in patients who were in coma for weeks. Factors such as duration of cardiac arrest (the degree of anoxia), duration of unconsciousness, intubation, induced cardiac arrest, and the administered medication were found to be irrelevant in the occurrence of NDEs. Also, psychological factors did not affect the occurrence of the phenomenon: for instance, fear of death, prior knowledge of NDE, and religion were all found to be irrelevant.”

Quoting from page 376 of Irreducible Mind: “Experiences often differ sharply from the individual’s prior religious or personal beliefs and expectations about death (Abramovitch, 1988; Ring, 1984). People who had no prior knowledge about NDEs describe the same kinds of experiences and features as do people more familiar with the phenomenon (Greyson, 1991; Greyson & Stevenson, 1980; Ring, 1980; Sabom, 1982). … If NDEs are significantly shaped by cultural expectations, we might expect that experiences occurring after 1975, when Moody’s first book made NDEs such a well–known phenomenon, would conform more closely to Moody’s “model” than those that occurred before that date. This does not appear to be the case (Long & Long, 2003). Similarly, a study of 24 experiences in our collection that not only occurred but were reported before 1975 found no significant differences in the features reported, when compared to a matched sample of cases occurring after 1984, except that fewer “tunnel” experiences were reported in the pre–1975 group (Athappilly, Greyson, & Stevenson, 2006).”

However, despite the fact that fear of death and religion play no predictive role in whether or not someone will have an NDE, clear differences remain for years after the brush with death between those who have had them and those who have not. Writing in the 2011 book Neuroscience, Consciousness, and Spirituality, Pim van Lommel says that: “ … the infrequently noted fear of death does not affect the occurrence of a NDE either, … whether or not patients had heard or read anything about NDE in the past made no difference … [And] any kind of religious belief, or indeed its absence in non–religious people or atheists, was irrelevant ….” Yet, “Among the 74 patients who consented to be interviewed after 2 years, 13 of the total of 34 factors listed in the questionnaire turned out to be significantly different for people with or without an NDE. The second interviews showed that in people with NDE fear of death in particular had significantly decreased while belief in an afterlife had significantly increased. … [And] after 8 years … clear differences remained between people with and without NDE, … In particular, they were [still] less afraid of death and had a stronger belief in an afterlife.”

Temporal lobe seizures have been proposed to play a role on the basis that temporal lobe epileptic episodes sometimes have some superficial similarities with the NDE, but once again—temporal lobe seizures are associated with dramatic memory loss. Automatisms don’t occur in association with near death experiences, either. As neuroscientist Mario Beauregard writes, “Review of the literature on epilepsy … indicates that the classical features of NDEs are not associated with epileptic seizures located in the temporal lobes … [and] the experiences reported by participants in Persinger’s [transcranial magnetic stimulation] studies bear little resemblance with the typical features of NDEs.” The authors of Irreducible Mind: Towards a Psychology for the 21st Century write (p. 396): “[The] neurosurgeon Wilder Penfield … is widely reported as having produced … NDE–like phenomena in the course of stimulating various points in the exposed brains of awake epileptic patients being prepared for surgery. Only two out of his 1132 patients, however, reported anything that might be said to resemble an OBE. One patient said: ‘Oh God! I am leaving my body.’ Another patient said only: ‘I have a queer sensation as if I am not here… As though I were half here and half there.’ In later studies at the Montréal Neurological Institute…, only one of 29 patients with temporal lobe epilepsy reported ‘a “floating sensation” which the patient likened at one time to the excitement felt when watching a football game and at another time to a startle’ (Gloor et al., 1982, pages 131–132). Such experiences hardly qualify as phenomenologically equivalent to OBE.”

The authors of the earlier Frontiers review conclude: “Anesthesia can suppress consciousness by simply interrupting binding and integration between local brain areas without the need for suppressing EEG activity (Alkire and Miller, 2005; Alkire et al., 2008). This is the reason why, in clinical practice, general anesthesia can be associated with almost normal EEG with peak activity in the alpha band (Facco et al., 1992), while in deep, irreversible coma, consciousness can be lost even with a preserved alpha pattern activity (Facco, 1999; Kaplan et al., 1999). In short, loss of consciousness can occur with preserved EEG activity, while, in the case of a flat EEG, neither cortical activity nor binding can occur; furthermore, short latency somatosensory–evoked potentials, which explore the conduction through brain stem up to the sensory cortex and are more resistant to ischemia than EEG, have been reported to disappear during cardiac arrest (Yang et al., 1997). The whole of these data clearly disproves any speculation about residual undetected brain activity as a cause for some conscious experience during cardiac arrest.”

Bruce Greyson concurs: “In our collection at the University of Virginia, 22% of our NDE cases occurred under anesthesia, and they include the same features as other NDEs, … functional imaging studies that have looked at blood flow, glucose metabolism, or other indicators of cerebral activity under general anesthesia (Alkire, 1998; Alkire et al., 2000; Shulman et al., 2003; White & Alkire, 2003) … [confirm that] brain areas essential to the global workspace are consistently greatly reduced in activity individually and may be decoupled functionally, thereby providing considerable evidence against the possibility that the anesthetized brain could produce clear thinking, perception, or memory. … [And] the situation is even more dramatic with regard to NDEs occurring during cardiac arrest … In cardiac arrest, even neuronal action–potentials, the ultimate physical basis for coordination of neural activity between widely separated brain regions, are rapidly abolished (Kelly et al., 2007). Moreover, cells in the hippocampus, the region thought to be essential for memory formation, are especially vulnerable to the effects of anoxia (Vriens et al., 1996). In short, it is not credible to suppose that NDEs occurring under conditions of general anesthesia, let alone cardiac arrest, can be accounted for in terms of some hypothetical residual capacity of the brain to process and store complex information under those conditions.”

Finally, Van Lommel (in Neuroscience, Consciousness, and Spirituality): “Through many studies with induced cardiac arrest in both human and animal models cerebral function has been shown to be severely compromised during cardiac arrest, with complete cessation of cerebral blood flow (Gopalan et al. 1999), causing sudden loss of consciousness and of all body reflexes, but also with the abolition of brain–stem activity with the loss of the gag reflex and of the corneal reflex, and fixed and dilated pupils are clinical findings in those patients. And also the function of the respiratory centre, located close to the brainstem, fails, resulting in apnoea (no breathing). The electrical activity in the cerebral cortex (but also in the deeper structures of the brain in animal studies) has been shown to be absent after 10–20 s (a flat-line EEG) (De Vries et al. 1998; Clute and Levy 1990; Losasso et al. 1992; Parnia and Fenwick 2002). … Moreover, although measurable EEG–activity in the brain can be recorded during deep sleep (no–REM phase) or during general anesthesia, no consciousness is experienced because there is no integration of information and no communication between the different neural networks (Massimini et al. 2005; Alkire and Miller 2005; Alkire et al. 2008). So even in circumstances where brain activity can be measured sometimes no consciousness is experienced. A functioning system for communication between neural networks with integration of information is essential for experiencing consciousness, and this does not occur during deep sleep or general anesthesia, let alone during cardiac arrest.”

_______ ~.::[༒]::.~ _______

1.3

A 2013 study on death from cardiac arrest in rats was interpreted by some skeptics as casting doubt on this when it found that EEG measurements recorded gamma waves (the highest–frequency band) in the brains of rats dying of induced cardiac arrest. This seemed particularly compelling because, since the late 1980s, it has been proposed that the synchronized firing of neurons in the gamma range could be responsible for how subjective experience becomes “bound”—that is, how experience unifies multiple modes of sensory input in one unitary stream of experience, despite the fact that these processes are spread out in the brain without ever meeting together at any central point that might theoretically represent ‘the place’ in the brain from which we ‘see’ all of these inputs ‘together’. However, more recent studies confirm that gamma waves are not, in fact, direct correlates of conscious perception—“most [previous] studies manipulated conscious perception by altering the amount of sensory evidence, [so] it is possible that they reflect prerequisites or consequences of consciousness rather than the actual [neural correlate of it]. Here we directly address this issue … [and results contradict] the proposal that local gamma band responses in the higher–order visual cortex reflect conscious perception.” Other research shows that gamma waves measured by EEG can represent nothing more than “miniature saccades [eye motions] instead of cognitive or neuronal processes.”

Sam Parnia notes that “After blood flow to the brain is stopped, there is an influx of calcium inside brain cells that eventually leads to cell damage and death … That would lead to measurable electroencephalography (EEG) activity, which could be what is being measured.” Other previous research already existed to confirm his suspicion, noting that EEG waves after decapitation, for example, can be “caused by membrane potential oscillations that occur after the cessation of activity of the sodium–potassium pumps has led to an excess of extracellular potassium. … this sudden depolarization leads to a wave in the EEG.” Another review explains: “The term spreading depolarization describes a wave in the gray matter of the central nervous system characterized by swelling of neurons, distortion of dendritic spines, a large change of the slow electrical potential and silencing of brain electrical activity (spreading depression) … Spreading depolarization is induced experimentally by various noxious conditions including chemicals such as potassium….” And the rats were, in fact, killed by an “intracardiac injection of potassium chloride.” Converging lines of evidence suggest that it is entirely probable that no subjective experiences were associated with these EEG waves at all; and in any case, such gamma waves have never been measured in any human subjects (much less in any who weren’t injected with potassium chloride) in relation to any near death experience. This was yet another case of unfounded media hype, where anything that even remotely seems to support the reductionist case gets easy publicity (to be fair, poorly reasoned points that can be sensationalized tend to get easy publicity in general—but only in the case of claims interpreted as supporting reductionism do so many otherwise intelligent people get so easily suckered in).

As neuropsychiatrist Peter Fenwick and his wife Elizabeth write in a book reviewing more than 300 near death experiences, “While you may be able to find [skeptical explanations] for bits of the Near-Death Experience, I can’t find any explanation which covers the whole thing. You have to account for it as a package and skeptics … simply don’t do that. … They vastly underestimate the extent to which Near–Death Experiences are not just a set of random things happening, but a highly organized and detailed affair.” In short, for every single proposed physiological basis for the near death experience, it remains extremely speculative to suppose that it actually plays any definite role. Substantial problems and difficulties face each individual suggestion; the skeptic skirts this by supposing we can simply combine any number of such factors ad lib to arrive at the NDE’s phenomenology. Of course, the skeptic can always say that there is no special burden to provide a specific justified explanation of the NDE, that any number of variables in any combination could conceivably be triggering the NDE in different circumstances, and that an explanation of this sort should stand as the epistemic default unless it can be categorically disproven by the realist. The playing field, on this approach, isn’t equal (and it renders the skeptical counter–hypotheses unfalsifiable for the foreseeable future): the fact that we can’t positively rule out x is supposed to make it unreasonable to believe y; but if we can’t positively rule out y, this isn’t supposed to make it unreasonable to believe x. But why should this be the case?

This could only be asserted on the assumption that presuming subjective conscious experience to be nothing more than the epiphenomenal byproduct of physical brain activity is the epistemic default, due to “parsimony,” in the first place—yet it is exactly this position which I have argued is not just epistemically unjustified, given that nobody has a damned clue how blind physical processes could possibly “produce” subjective first–person experience (and such mechanisms, whatever they are, may hardly be “parsimonious”); it is falsified by the fact that it would entail that we could neither think nor talk about consciousness–per–se (and despite first appearances, panpsychism doesn’t solve the problem, either). Thus, my interest is not in the question of whether a “realist” interpretation of the NDE can be definitively demonstrated to be indubitably true on purely neutral philosophical grounds.

No skeptical counter–hypothesis can be definitively demonstrated to be anywhere near indubitably true, either; and the skeptic hardly proceeds from purely neutral philosophical grounds. Indeed, that he does not do so is probably the single most important point to take away from all this: skeptical hypotheses towards the NDE are not believed because of how compelling the independent evidence is in their favor; these hypotheses are believed because of the insistence, born of an a priori conviction in the truth of materialism, that some explanation of the sort simply must be true because materialism in general is. And yet, if anything could possibly count as evidence against materialism, it would be evidence like this—which is dismissed by the materialist because it isn’t compatible with materialism.

My own interest is in what one can reasonably believe. And having argued in detail that one can more than reasonably believe that consciousness is not reducible in principle to physical mechanism (but is, instead, a “bedrock” phenomenon in the world all in its own right), my conclusion extends to entail that one can reasonably believe that the near death experience could very well be just what it appears to be: an experience of the separation of consciousness from the body and brain. To the extent that there is simply no compelling justification (beyond prejudice) for confidence in the philosophical idea that qualitative, subjective experience is wholly and completely reducible to physical mechanism in the first place, there is no compelling justification (beyond prejudice) for confidence that any particular reductionist explanation of the near death experience is especially likely to be true. Any insistence otherwise plainly rests not on the independent plausibility of these reductionist explanations, but on the a priori conviction, borne solely of philosophical prejudice, that some reductionist explanation must be true. With this a priori conviction in place, the fact that it is conceivable that the patient near death has some residual brain activity we can’t currently measure, or that it can’t be definitively refuted that some complex combination of factors, none of which independently comes anywhere near explaining the whole experience, and each of which seems entirely lacking in at least some large number of cases, could combine in any number of ways (and no matter how combined still produce the archetypical NDE), is—for the skeptic—enough. But for those of us who reject the claim that there is sufficient justification for such confidence in this a priori conviction in the first place, it isn’t.

Of course, much of this discussion of underlying neurophysiological correlates of the near death experience rather misses the point—for interpreting them as evidence against the reality of the experience itself merely presupposes the philosophical position on which subjective experience is solely ‘produced by’ the physical activity of the brain. What precisely do we think we’re disproving if we identify the causes of onset of a near death experience? It simply wouldn’t follow from the fact that the trigger of the event is physical that the entire experience is purely physical, any more than it follows from the fact that the trigger of a note sounding out of a piano is the motion of a hand against a key that the entire experience of sound is composed of nothing but hands and keys. A balloon is a separate ‘thing’ from the string tying it to the ground, but the balloon still can’t float away unless the string which holds it is cut—and it doesn’t follow from this that the event of a balloon floating into the air is just nothing other than an event produced by strings whenever, in general, they are cut. Nor would correlations between how high in the air the balloon has risen and how far towards the ground the string has fallen in the milliseconds following the cut prove that the state of the former was a direct function of the latter—even though such correlations will always be found.

Supposing the near death experience did involve perception of something real as a result of consciousness dissociating from the body, surely the mind–body connection is such that it is in response to actual death that consciousness dissociates from the dying brain—and surely there should be some combination of physical events which can be identified as the most proximate correlates of “death.” Hence, in order to sufficiently “debunk” the reality of the near death experience, the skeptic cannot just identify what physiological event corresponds with “death”, the point at which the experience occurs. Not even this goal has actually been empirically met—yet even if it ever should be, more would still be needed to establish that this was in fact anything more than the identification of the trigger which causes consciousness to separate from the brain and undergo the near death experience. Any confident dismissal of the reality of the near death experience based on less than this is, once again, simply unjustified philosophical prejudice—unless and until some compelling general proof of materialism as a whole is put forward.

_______ ~.::[༒]::.~ _______

1.4

I’ve argued already that the very existence of a near death experience is surprising on the assumption that subjective conscious experience is either identical to, or a secondary, epiphenomenal byproduct “produced” by, the objectively measurable physical activity of the brain—but the nature of the experience itself is remarkable, too. Consider the effects of psychoactive drugs, delirium, and other “hallucinatory”–type states: the subjective effects of psychedelic drugs like DMT and Ketamine vary tremendously between experiences. Some DMT or Ketamine experiences can resemble the near death experience in certain features, but there is remarkably little consistency between any two or three experiences with one of these drugs. DMT users encounter everything from “self–transforming machine elves” to “a multi–eyed, multi–serpent” to “an alien wasp” to “dolls in 1890s outfits, life–sized … women in corsets … red circles painted on their cheeks … big breasts and big butts and teeny skinny waists … all whirling around me on tiptoes. The men had top hats, riding on two–seater bicycles.” On Ketamine, John Lilly encountered “[the aliens] who manage Earth Coincidence Control, your local branch of Cosmic Coincidence Control.” Others watch “every other entity within this realm begin to connect to one another, to become one…” or see “one face … that seemed very large and its features were constantly distorting themselves … [it] screamed, at such a volume that is not possible for any earthly speaker….”

There are a handful of variations across cultures in how the details of various stages of the near death experience are ‘filled in’: in the West, NDErs are usually “sent back” while being told that they must return to life to finish carrying out their ‘purpose,’ whereas a number of Indian accounts apparently involve the subject being told there was a bureaucratic mistake and that they aren’t the person whose death was expected. But this is as dramatic as the variation between near death experiences gets—other than that, the core features are remarkably consistent across different times and places (and even here, they still fit the form of the subject being “sent back” within the vision prior to the experience of actually returning to their bodies). Why, if nothing produces the NDE besides a coincidence of converging chemicals, do they not become as varied as experiences with drugs like DMT or Ketamine? Why does no one ever find themselves at a circus watching dancing marionettes, talking to “multi–eyed serpents” or “alien wasps” or “self–transforming machine elves,” or getting screamed at by enormous distorting faces? This comparison is hardly irrelevant, given that Ketamine (or a hypothetical Ketamine–like endogenous substance as yet unidentified) and DMT (which is in fact produced in some amounts endogenously within the brain) have both been seriously proposed by skeptics to play a direct role in producing near death experiences.

In light of facts like these, the striking similarities between near death experiences deserve explanation just as much as any dissimilarities do. Dr. Jeffrey Long notes that “The percentage of time that people encounter deceased relatives is extremely high. It was actually 96% in the NDERF study … [and] that’s actually corroborated by another major scholarly study … The important thing is that any other experience of altered consciousness that we experience on earth, dreams, hallucinations, drug experiences, you name it; all of these other types of experiences of altered consciousness, … You’re going to remember the banker that you did business with that day or your family member you said hi to as you were walking into the house. This is what’s in the forefront of consciousness.” It is intriguing, in this vein, to note that the “dreamlets” studied in 1997 by Dr. James Whinnery, produced in fighter pilots during periods of unconsciousness induced by loss of cerebral oxygen through rapid acceleration in a centrifuge, “frequently included living people, but never deceased people….” Would none of the people rendered unconscious by rapid acceleration ever believe, in the heat of the confusion, that they had died? Would the same “expectations” proposed to explain the near death experience (despite the fact that fear of death, religion, and degree of religiosity have been found to have no predictive power over who will have an NDE) not show up here? (For that matter, it is striking that even amongst the incredibly intense variety of experiences reported by users of DMT, I have never heard a single one which paralleled the stages of the “real” near death experience directly. For all the interaction with ‘alien intelligences,’ for one thing, I’ve not once heard a single report of anyone apparently encountering a deceased relative.)

Once again, there is a compelling convergence of evidence: “[P]eople close to death are more likely to perceive deceased persons than do healthy people, who, when they have waking hallucinations, are more likely to perceive living persons (Osis & Haraldsson, 1977/1997). NDErs whose medical records show that they really were close to death also were more likely to perceive deceased persons than experiencers who were ill but not close to death, even though many of the latter thought they were dying (E. W. Kelly, 2001). … in one–third of the cases the deceased person was either someone with whom the experiencer had a distant or even poor relationship or someone whom the experiencer had never met, such as a relative who died long before the experiencer’s birth (E. W. Kelly, 2001).”

If the near death experience simply results from the lucky, surprising convergence of simultaneous chemical coincidences, then correlations like these—and the consistency of the form of the experience in general—are an absolutely astounding, unbelievable coincidence. Not only are we expected to believe that the experiencing subject enters a state of profoundly heightened awareness precisely when his brain activity becomes the most suppressed, and that the consistency of the form of the near death experience is always produced by this complex cocktail of factors despite the fact that it can occur in the apparent total absence of any of them and still retain the essence of exactly the same form—with no one ever reporting the disorganized or chaotic imagery of meeting DMT “machine elves” or the President of the United States or giant, distorted screaming faces or an environment like Blade Runner or the alien managers of “Earth Coincidence Control” or the planet Gallifrey after some particular factor changes—but we are also expected to believe that correlations between the depth of the near death experience—even down to details such as how likely deceased persons were “encountered”—and the actual proximity to death exist by sheer coincidence. At some point, it just isn’t clear anymore whether the reductionist explanation really would even be more “parsimonious,” supposing we could somehow start out with perfectly neutral philosophical presuppositions. The skeptic is left in the position of having to defend an increasingly wide range of utterly ad hoc theoretical factors which are supposed to mix and match ad lib to produce the experience and yet, no matter how they vary or even lack some of these factors entirely, still produce almost exactly the same core experience every time (at least so long as drugs are not involved). This is quite simply a very far cry from anyone actually having anything like a justified reductionist account of the NDE.

Admitting the possibility that the NDE could be just what it appears to those who experience it to be—that consciousness simply can have experiences while separate from the brain—is not less “parsimonious” than any possible materialist explanation of the experience, even if we were approaching the question from theoretically neutral grounds. “Parsimony” is a relevant consideration against admitting the existence of something ‘new’ when all else is equal; but the more mechanisms one has to add and the more ad lib combinations of them one has to defend in order to avoid admitting that that something ‘new’ is just what it appears to be, the less “equal” things actually are and the less force considerations of “parsimony” have. All else equal, admitting the existence of a new species is not “parsimonious.” Indeed, when the platypus was first discovered, early investigators believed it was a hoax: “It was plausible, [Dr. George] Shaw thought, that some punk had collected the bill of a duck and an otter or mole’s body, then shipped it off from Australia as a joke.” But the more ad hoc hypotheses these investigators had to add to the ‘hoax’ hypothesis to avoid the conclusion that the platypus was nothing other than just precisely what it seemed to be, the less plausible—and “parsimonious”—the ‘hoax’ hypothesis became. To be clear, I don’t claim that the near death experience is exactly like this; but I do claim that it is somewhere much closer to this than it is to, say, the claim that there are “fairies at the bottom of my garden” which have never been observed.

I am reminded of David Chalmers’ statement about interpreting quantum physics: “[P]hilosophers reject interactionism on largely physical grounds (it is incompatible with physical theory), while physicists reject an interactionist interpretation of quantum mechanics on largely philosophical grounds (it is dualistic).” Likewise here: skeptics reject the realist interpretation of the near death experience—a “scientifically” observed event—simply because it is dualistic; and yet they reject dualism because it is “unscientific.” Yet it is apparent that “science” in this sentence does not mean “direct scientific observation,” but rather—and much differently—“how we prefer to interpret our scientific observations.” But exactly what are these preferences supposed to be justified by?

When circularity runs this deep, it is clear that something other than the points of the circle is doing all the work of actually holding the circle up. I recall, once again, John Searle’s admission (which I quoted here): “Acceptance of the current [physicalist] views [in philosophy of mind] is motivated not so much by an independent conviction of their truth as by a terror of what are apparently the only alternatives. That is, the choice we are tacitly presented with is between a “scientific” approach, as represented by one or another of the current versions of “materialism,” and an “unscientific” approach, as represented by Cartesianism or some other traditional religious conception of the mind.”

_______ ~.::[༒]::.~ _______

Part 2.

Suppose I know that my niece is in the hospital with a non–life–threatening condition, but I know that she is tied up with tubes that prevent her from leaving the hospital bed. My niece, Jane, has lots of friends; and I am aware as part of my background knowledge of my relationship with her that I don’t know who all of her friends are. Now, suppose that she gives me some piece of information about the hospital that she couldn’t have gotten herself, given that she has been strapped in place without moving: say, that there is a shoe sitting on a ledge outside the window on a different floor of the hospital. And suppose Jane tells me that she found this out because one of her friends, Joy, came by and told her about it.

No one has a direct record of Joy entering the hospital—but she might have simply made her way in without signing her name. I don’t know who Joy is, so I can’t independently verify (as yet) that she was in fact at the hospital that day—but I already realize I simply don’t know who all of Jane’s friends are in the first place, so I clearly can’t use this as grounds for ruling out her existence. Aren’t I justified in believing her? Unless (and until) I can independently prove the truth of some alternative means by which Jane actually came by this information, I think it is obvious that the answer is a clear “yes; of course.” Any ordinary individual would, without any hesitation, come to accept that a friend named Joy must have stopped by the hospital.

Suppose that rather than it being I who visited Jane, it was my brother Joe who visited her in the hospital and then relayed this story to me second–hand. Even then, don’t I still have adequate reason to accept on the basis of this information itself that Jane must have a friend named Joy, and that Joy must have come by the hospital and mentioned this random detail to my niece, whether I can independently verify these claims or not—unless I can independently refute them, or I find overwhelmingly good reason to conclude that my brother positively must be lying? Once again, I think it is obvious that the answer is a clear “yes; of course.” Most people would consider it flagrantly absurd if I were to insist that everyone involved must absolutely and positively disprove even the bare possibility that a worker at the hospital, or one of Jane’s friends whose names I already know, could even conceivably have relayed this information to Jane instead before I would simply accept that there must be a friend named Joy I haven’t met yet. Such strict standards would lead me to deny the existence of friends Jane actually does in fact have, and the occurrence of events which actually did in fact take place, on a regular basis.

The key factors in my evaluation of the truth of what I am told in this story all clearly relate to my background knowledge about the factors involved in the situation. Relevant background knowledge here includes my belief that Jane likely has a number of friends I don’t know about, my belief that Jane and Joe are generally honest people who have no reason to lie to me, and the belief that it is possible sometimes for people to visit a hospital without necessarily leaving an official record of their visit. Or that people sometimes go by nicknames that are not related to their legal names (so that “Joy’s” visit might have in fact been recorded, but under a different name—perhaps her real name is “Matilda,” and she goes by something different quite simply because she hates the name).

_______ ~.::[༒]::.~ _______

2.1

But now, suppose that rather than telling me that a friend named Joy came by the hospital and told her about the shoe sitting on a ledge on a different floor of the hospital, Jane tells me that during her operation she had a near–death experience in which she went out–of–body and spotted the location of the shoe (or suppose that Joe relays to me second–hand that Jane made this claim). As before, I can neither positively prove nor positively refute the claim that this actually occurred. How, then, should I evaluate the likelihood that this is true, and what (if anything) makes this situation different from the one before?

Many investigators would hold this claim to a tremendously higher standard than they would the claim that someone named Joy had visited and relayed this information to Jane—they would say that if it’s even conceivable that Jane could have obtained this information some other way (or that Joe might be lying to me), I shouldn’t even consider believing it for a second, and I would be absolutely foolish to do so. What (if anything) justifies that being the case here if it is not the case when Jane tells me that it was Joy (whom I have likewise never seen for myself) who told her about the shoe?

This isn’t a characterization that the skeptics themselves will protest: skeptics quite simply do not put this possibility on equal grounds with the alternatives. When skeptics address near–death experiences, they generally don’t accept a need to prove that some particular alternative explanation is true—they only see the need to show that an explanation of some other kind is conceivable; whereas the proponent of the NDE explanation is expected to definitively prove that the NDE explanation is the only possible explanation for what took place. If this is justified, it can only be justified because relevant background considerations justify it. And what are these background considerations?

Once again, the background consideration here is the philosophical conviction that the blind motion of the physical processes of the brain produces subjective conscious experiences merely as a secondary, epiphenomenal byproduct. I have repeatedly given my reasons for considering this conviction not only misguided but preposterous. And yet, the only justification ever actually given in attempted support of it is the fact that, at least in ordinary circumstances, there are correlations between the objective, quantifiable state of the physical brain and the subjective, qualitative state of the conscious subject’s experience—and these are exactly the correlations which, we have seen, appear to fall apart in the case of the near death experience.

There is a correlation between the event of flipping a light switch and the event of a light bulb turning off and on—but it simply doesn’t follow that the existence of the light bulb—or the existence of light itself—is a product of (much less identical to) the motion of light switches. If we shatter a glass prism, the visible light spectrum will disappear; but it doesn’t follow that the structure of the prism is identical to the visible light spectrum—nor even that the prism “produces” it: the prism simply allows what is already present within the white light which enters it to become visible. Pressing the keys on an organ will occasion our hearing sounds, but the air which is actually responsible for these sounds is neither identical to the activity of nor strictly produced by the keys—the keys work by releasing air which is already present inside of the air–chamber. The argument established across thousands of words throughout this series has been that the idea that consciousness and the brain are identical is plainly false without either a radical eliminativist redefinition of consciousness (false for one set of reasons) or a radical panpsychist redefinition of matter (false for another set of reasons); and the idea that consciousness is produced by the brain would have to entail epiphenomenalism (false yet again for its own set of reasons). To readers who remain unconvinced, I can only insist that the issues can’t reasonably be summarized here, and that it would take careful consideration of the points discussed across this series to understand why I think this is unavoidably and absolutely true: not only are there plenty of viable alternatives which account for the interrelationship between physical states of the brain and subjective states of consciousness every bit as effectively as the “identity” or “productive” theories, the “identity” and “productive” theories are in my view quite definitively not even potentially viable accounts of that relationship at all.

In a lecture presented to Harvard University in 1898 (which I previously excerpted from here), William James said: “Suppose … that the whole universe of material things—the furniture of earth and choir of heaven—should turn out to be a mere surface–veil of phenomena, hiding and keeping back the world of genuine realities. … Suppose … that the dome, opaque enough at all times to the full super–solar blaze, could at certain times and places grow less so, and let certain beams pierce through into this sublunary world. … Only at particular times and places would it seem that, as a matter of fact, the veil of nature can grow thin and rupturable enough for such effects to occur. But in those places gleams, however finite and unsatisfying, of the absolute life of the universe, are from time to time vouchsafed. … Admit now that our brains are such thin and half–transparent places in the veil. What will happen? Why, as the white radiance comes through the dome, with all sorts of staining and distortion imprinted on it by the glass, or as the air now comes through my glottis determined and limited in its force and quality of its vibrations by the peculiarities of those vocal chords which form its gate of egress and shape it into my personal voice, even so the genuine matter of reality, the life of souls as it is in its fullness, will break through our several brains into this world in all sorts of restricted forms, and with all the imperfections and queernesses that characterize our finite individualities here below.

According to the state in which the brain finds itself, the barrier of its obstructiveness may also be supposed to rise or fall. It sinks so low, when the brain is in full activity, that a comparative flood of spiritual energy pours over. At other times, only such occasional waves of thought as heavy sleep permits get by. And when finally a brain stops acting altogether, or decays, that special stream of consciousness which it subserved will vanish entirely from this natural world. But the sphere of being that supplied the consciousness would still be intact; and in that more real world with which, even whilst here, it was continuous, the consciousness might, in ways unknown to us, continue still. You see that, on all these suppositions, our soul’s life, as we here know it, would none the less in literal strictness be the function of the brain. The brain would be the independent variable, the mind would vary dependently on it. But such dependence on the brain for this natural life would in no wise make life behind the veil impossible.”

One way or another, the experiences had by those who approach death are perfectly compatible with James’ picture.

_______ ~.::[༒]::.~ _______

2.2

One of the most intriguing elements of the near death experience is the body of directly corroborated reports showing that events like the one previously discussed actually do happen. The story just told is one directly confirmed in a book published in 1995 by a first–hand witness: the social worker Kimberly Clark, who—initially skeptical—decided to look for that shoe, so as to placate the patient, only to be surprised to find a blue shoe in exactly the condition which “Maria” had claimed it was in. (Her report of the event can be read here: according to her direct testimony, the shoe was not visible from the ground, and there was no way “Maria”—“literally plugged into the wall,” she writes—could have moved. And it seems horribly cynical to resort to arguing that Maria must have seen the shoe on the ride in, and saved the observation for exploitation later.) Later, Kimberly Clark became a co–founder of the Seattle division of the International Association for Near Death Studies (IANDS).

While it was true in the past that few cases of this kind were particularly well–corroborated, today there are multiple cases where first–hand witnesses have recorded their observations of such instances of “veridical perception” in print, describing the transformation of their skepticism and surprise into conviction; enough that, were this an ordinary event, we would long since have accepted its reality. The only remaining reason for skepticism is an a priori designation of the probability of such an event as incredibly low, on the basis of nothing other than the philosophical assumption that conscious experience can only be the epiphenomenal byproduct of physical brain activity and nothing more. In another case of cardiac arrest discussed by Pim van Lommel (here), a subject had been lying in a meadow, in a state of coma and cyanosis, for at least a full half hour before being discovered and brought to the emergency room. Yet a CCU nurse reported that, days later, he was able to provide accurate descriptions of many of the specific, unexpected circumstances of his transfer to the hospital.

As van Lommel presents his report: “During night shift an ambulance brings in a 44–year old cyanotic, comatose man into the coronary care unit. He was found in coma about 30 minutes before in a meadow. When we go to intubate the patient, he turns out to have dentures in his mouth. I remove these upper dentures and put them onto the ‘crash cart.’ After about an hour and a half the patient has sufficient heart rhythm and blood pressure, but he is still ventilated and intubated, and he is still comatose. He is transferred to the intensive care unit to continue the necessary artificial respiration. Only after more than a week do I meet again with the patient, who is by now back on the cardiac ward. The moment he sees me he says: ‘O, that nurse knows where my dentures are.’ I am very, very surprised. Then the patient elucidates: ‘You were there when I was brought into hospital and you took my dentures out of my mouth and put them onto that cart, it had all these bottles on it and there was this sliding drawer underneath, and there you put my teeth.’ I was especially amazed because I remembered this happening while the man was in deep coma and in the process of CPR. It appeared that the man had seen himself lying in bed, that he had perceived from above how nurses and doctors had been busy with the CPR. He was also able to describe correctly and in detail the small room in which he had been resuscitated as well as the appearance of those present like myself.” (You can read the full interview here, and see a response to a skeptic’s criticisms here).

In yet another case, reported in a video interview with cardiac surgeon Dr. Lloyd Rudy, a patient once again reported accurate and unusual details of events occurring prior to and during resuscitation efforts: “it was close to 20, 25 minutes that this man recorded no heartbeat, no blood pressure, and the echo showing no movement of the heart—just sitting. And all of a sudden we looked up, and this surgical assistant had just finished closing him, and we saw some electrical activity. Pretty soon, the electrical activity turned into a heartbeat. Very slow, 30 or 40 a minute … he recovered. And for the next ten days, two weeks, all of us went in and were talking to him about what he experienced, if anything. And he talked about the bright light … but the thing that astounded me was that he described that operating room, floating around, and saying ‘I saw you, and [the other doctor] in the doorway with your arms folded, talking; I didn’t know where the anesthesiologist was, but he came running back in; and I saw all of these post–its sitting on this TV screen’—and what these were, any call I got, the nurse would write down who called and the phone number, … and the next post–it would stick to that post–it … he described that. There’s no way he could have described that before the operation—because we didn’t have any calls.”

In addition to direct studies of recall of NDE memories, cases like these all go a long way to discredit the skeptical counterclaim that near death experiences don’t really happen during periods of clinical death, but are only reconstructed afterwards. Ring & Lawrence (1993) record three other cases of “veridical perception” which were corroborated by first–hand witnesses. Bruce Greyson investigated yet another case where a patient described one of the surgeons “flapping his arms as if trying to fly.” As he summarizes: “Both the surgeon and the cardiologist in this case confirmed that … to keep his hands from touching any surface between the time he “scrubs in” and the time he actually begins the surgery, he has developed the habit of holding his hands against his chest and pointing with his elbows to give instructions to other persons in the operating room. The cardiologist confirmed that Mr. Sullivan had described this unusual behavior to him shortly after regaining consciousness following the surgery.” [3]

One of the few attempts to study these reports directly was a study performed by Michael Sabom in 1982. Initially a skeptic inspired to investigate by Raymond Moody’s 1975 Life After Life, Sabom took 32 patients who reported out–of–body perceptions during near death experiences, and compared them to a control group of 25 patients who had had one or more episodes of cardiac arrest without a near death experience. He asked the NDE group to describe their out–of–body perceptions, and compared these accounts to the control group’s attempts to describe their resuscitations. If the NDE group were no more accurate in their descriptions than the control group, this would lend plausibility to the idea that these accounts could have been reconstructions produced after the fact, rather than truly veridical perceptions at the time supposed.

The results? Whereas 20 out of the 23 who attempted the task in the control group made at least one major error, no members of the NDE group did—and furthermore, 6 members of the NDE group accurately recorded specific unusual details, some of which were peculiar to that patient’s own personal case. For example, one man who developed ventricular fibrillation described how a nurse picked up “them shocker things” and “touched them together,” before “everybody moved back away from it.” As Sabom explains (p. 98), rubbing the paddles together to lubricate them and then standing back to avoid being shocked is a common procedure. Others talked about which family members were or weren’t in the waiting room, or the type of gurney that was used to wheel them into the hospital. A nurse, Penny Sartori, whose experiences over 17 years working in intensive care units inspired her to turn to research on the near death experience (for which she was awarded a Ph.D.), replicated his findings and recorded the results in her 2008 monograph, The Near–Death Experiences of Hospitalised Intensive Care Patients: A Five Year Clinical Study.
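For readers who want a sense of just how lopsided that split is, the gap can be quantified with the standard test of independence for a 2×2 table. The sketch below is my own illustration, not part of Sabom’s analysis; it simply assumes the counts reported above (20 of the 23 control patients who attempted the task made at least one major error, versus 0 of the 32 NDE patients) and the availability of the scipy library:

```python
# Illustrative only: Fisher's exact test on the error counts quoted above.
# This is not Sabom's own analysis; the counts are taken from the text.
from scipy.stats import fisher_exact

#         >=1 major error | no major error
table = [
    [20, 3],   # control group (the 23 who attempted the description task)
    [0, 32],   # NDE group (no member made a major error)
]

_, p_value = fisher_exact(table)
print(f"two-sided p = {p_value:.1e}")  # vanishingly small under chance
```

However one judges what the accuracy itself proves, a difference of this size between the two groups is, at a minimum, not a statistical fluke.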

In a 2009 study recorded in The Handbook of Near–Death Experiences, which she co–edited with Bruce Greyson, Janice Miner Holden finds that of 93 cases of “veridical perception” reported in the literature on near death experiences, 40 could be verified as corroborated by an independent witness; 40 were reported by the experiencer to have been corroborated by an independent witness who was no longer available; and only 13 relied solely on the experiencer’s report. Furthermore, of all of these cases, 86 were found to be completely accurate, 6 were only partially corroborated or contained some errors, and only the one remaining case was completely inaccurate.
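As a quick arithmetic check on those tallies (my own tabulation of the figures quoted above, not anything from Holden’s chapter), both breakdowns sum to the same 93 cases, and the fully accurate cases make up roughly 92% of the total:

```python
# Tabulating the Holden (2009) figures quoted above; illustrative only.
corroboration = {
    "verified by an independent witness": 40,
    "corroboration reported, witness unavailable": 40,
    "experiencer's report only": 13,
}
accuracy = {
    "completely accurate": 86,
    "partially corroborated / some errors": 6,
    "completely inaccurate": 1,
}

assert sum(corroboration.values()) == sum(accuracy.values()) == 93
print(f"fully accurate: {accuracy['completely accurate'] / 93:.1%}")  # 92.5%
```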

There may not be the type of evidence here that counts as “proof” of the kind required to completely convince a skeptic who wants to know that absolutely no other conceivable explanation is even hypothetically possible before accepting that the realist interpretation of the near death experience could be a reasonable conclusion (nothing could actually meet this burden to begin with—as a last resort, a skeptic who is determined enough can simply dismiss the validity of every report, or every witness’ credibility or memory). But there is as much evidence as we could possibly expect to have, given the extent to which the phenomenon has actually been capable of being studied at all—and it is certainly enough to shift things even farther in the direction of putting the skeptic in a “platypus is a hoax”–type position, as we continue to add more and more evidence to the picture which the skeptic must find some way to explain away, despite the fact that the realist interpretation obviously unifies all of it in a single explanation.

_______ ~.::[༒]::.~ _______

2.3

Individuals who are blind from birth apparently do not have visual dreams. A 1999 review of 372 dreams in 15 individuals at the University of Hartford confirmed this, while finding that those who go blind before the age of 5 are mostly indistinguishable from the blind from birth, whereas those who lose their sight around the age of 7 or later “continue to experience at least some visual imagery, although its frequency and clarity often fade with time” (those who lose their sight between the ages of 5 and 7 can go either way).

Yet, in the book Mindsight, researchers Kenneth Ring and Sharon Cooper document their studies on experiences in the blind—who report near–death experiences exactly like those reported by the sighted, apparently with the same visual content. The authors quote from a recorded interview between one of their subjects—Vicki Umipeg—and another researcher, Greg Wilson: (GW: “Could you see anything?”) Vicki: “Nothing, never. No light, no shadows, no nothing, ever.” (GW: “So the optic nerve was destroyed to both eyes.”) Vicki: “Yes, and so I’ve never been able to understand even the concept of light.” As she described her experience: “I was pretty thin then. I was quite tall and thin at that point. And I recognized at first that it was a body, but I didn’t even know that it was mine initially. Then I perceived that I was up on the ceiling, and I thought, ‘Well, that’s kind of weird. What am I doing up here?’ I thought, ‘Well, this must be me. Am I dead?’ I just briefly saw this body, and … I knew that it was mine because I wasn’t in mine.”

She continued: “I think I was wearing the plain gold band on my right ring finger and my father’s wedding ring next to it. But my wedding ring I definitely saw … That was the one I noticed the most because it’s most unusual. It has orange blossoms on the corners of it. This was the only time I could ever relate to seeing and to what light was, because I experienced it.”

It seems strange to suppose that reductions in the intensity of brain activity might be accompanied at some times by a reduction in the intensity of subjective experience, and at other times by an increase; or that some damage to the eyes might damage or obliterate the subjective experience of sight, whereas death should restore it—but a dualistic interpretation of the mind–body relationship can accommodate correlations in both of these directions, whereas a physicalist interpretation requires that they be in only one direction at all times. Suppose I am sitting inside of a theater, with a screen interpreting visual data from outside the building I am in, the speakers interpreting auditory data from outside, and so on: some subtle damage to the machinery of the theater’s visual processing system might destroy my ability to “see what is outside” completely—and yet, bashing down one of the walls would, nonetheless, influence my capacity to “see what is outside” in the opposite direction.

In a 1997 publication in the Journal of Near Death Studies, Cooper and Ring provide a more succinct presentation of their research: “Of our 21 NDErs, 15 claimed to have had some kind of sight, three were not sure whether they saw or not, and the remaining three did not appear to see at all. All but one of those who either denied or were unsure about being able to see came from those who were blind from birth, which means that only half of the NDErs in that category stated unequivocally that they had distinct visual impressions during their experience. Nevertheless, it is not clear by any means whether those respondents blind from birth who claimed not to have seen were in fact unable to, or simply failed to recognize what seeing was. For instance, one man whom we classified as a nonvisualizer told us that he could not explain how he had the perceptions he did because “I don’t know what you mean by ‘seeing.’”

As would be expected if these subjects were actually experiencing sight for the first time, even those who readily classified what they experienced as “sight” expressed bafflement—or even fear. Vicki Umipeg stated to interviewers: “I had a real difficult time relating to [sight] because I’ve never experienced it. And it was something very foreign to me. … Let’s see, how can I put it into words? It was like hearing words and not being able to understand them, but knowing that they were words. And before you’d never heard anything. But it was something new, something you’d not been able to previously attach any meaning to.” Ring notes that she later used the word “frightening” to describe the adjustment, and records that she described her ability to distinguish between “different shades of brightness,” and could only wonder if this was what sighted people mean by “color.”

Not only does the perceptual experience of sight occur during near death experiences in the blind; so, too, do cases of apparently veridical perception. They write: “[I]n at least some instances, we are able to offer some evidence, and in one case some very strong evidence, that these claims are in fact rooted in a direct and accurate, if baffling, perception of the situation.” After discussing another fascinating case that turned out to lack perfect verification (but which I think one could still reasonably believe—the witness who was tracked down simply couldn’t recall from memory the pattern on a piece of clothing identified by the patient to confirm their report), they move on to the case of “Nancy” (see p. 22).

Nancy “underwent a biopsy in 1991 in connection with a possible cancerous chest tumor. During the procedure, the surgeon inadvertently cut her superior vena cava, then compounded his error by sewing it closed, causing a variety of medical catastrophes including blindness, a condition that was discovered only shortly after surgery when Nancy was examined in the recovery room. She remembers waking up at that time and screaming, “I’m blind, I’m blind!” Shortly afterward, she was rushed on a gurney down the corridor in order to have an angiogram. However, the attendants, in their haste, slammed her gurney into a closed elevator door, at which point the woman had an out–of–body experience. Nancy told us she floated above the gurney and could see her body below. However, she also said she could see down the hall where two men, the father of her son and her current lover, were both standing, looking shocked. She remembers being puzzled by the fact that they simply stood there agape and made no movement to approach her. Her memory of the scene stopped at that point.”

They continue: “In trying to corroborate her claims, we interviewed the two men. The father of her son could not recall the precise details of that particular incident, though his general account corroborated Nancy’s, but her lover, Leon, did recall it and independently confirmed all the essential facts of this event. …  It should be noted that this witness has been separated from our participant for several years and they had not even communicated for at least a year before we interviewed him. Furthermore, even if Nancy had not been totally blind at the time, the respirator on her face during this accident would have partially occluded her visual field and certainly would have prevented the kind of lateral vision necessary for her to view these men down the hall. But the fact is, according to indications in her medical records and other evidence we have garnered, she appeared already to have been completely blind when this event occurred. …”

And then, quoting from Leon’s account: “I was in the hallway by the surgery and she was coming out and I could tell it was her. They were kind of rushing her out. … I saw people wheeling a gurney. I saw about four or five people with her, and I looked and I said, ‘God, it looks like Nancy,’ but her face and her upper torso were really swollen about twice the size it should have been. I was still in a state of shock. I mean, it had been a long day for me. You’re expecting an hour procedure and here it is, approximately 10 hours later and you don’t have very many answers. … When I first saw her she was probably, maybe about 100 feet and then she went right by us. … somebody was, like, trying to get into the elevator at the same time and there was some sort of a ‘Oh, I can’t get in, let’s move this over a little bit,’ kind of adjusting before they could get her into the elevator. But it was very swift … She was just really swollen. She was totally unrecognizable. I mean, I knew it was her but—you know, I was a medic in Vietnam and it was just like seeing a body after a day after they get bloated. It was the same kind of look.”

They conclude the paper (pp. 24–46) with a discussion of implications, asking whether apparent sight during NDEs in the blind could be accounted for by some other means, such as blindsight: “First of all, patients manifesting [blindsight] typically cannot verbally describe the object they are alleged to see, unlike our respondents who, as we have noted, were usually certain about what they saw and could describe it often without hesitation. In fact, a cortically blind patient, even when his or her object identification exceeds chance levels, believes that it is largely the result of pure guesswork. Such uncertainties were not characteristic of our respondents. … perhaps most crucially of all, blindsight patients, unlike our respondents, do not claim that they can ‘see’ in any sense. As Humphrey wrote: ‘Certainly the patient says he does not have visual sensation. … Rather he says, ‘I don’t know anything at all—but if you tell me I’m getting it right I have to take your word for it’’ (1993, p. 90). This kind of statement is simply not found in the testimony of our respondents who, on the contrary, are often convinced that they have somehow seen what they report. Thus, the blindsight phenomenon, however fascinating it may be in its own right, cannot explain our findings.”

In any case, whatever alternative mechanism one might possibly propose for these examples, all participants describe the experience as being radically unlike anything else they have ever experienced. Vicki, for example, explicitly says that there is “No similarity, no similarity at all” between the sight she experienced during her near death experience and her dreams (which she describes as containing no visual imagery). Whatever these mechanisms might be, why should they become active only when the blind patient is approaching death and their brain is in the most disrupted, disorganized state it can be in, short of actual, irreversible death? Once again, the skeptic can insist on finding loopholes—no matter what premise an individual wants to hold to, if he intends to hold to it against all odds, then in many cases no argument in principle will be capable of convincing him—he can simply modus tollens the premises of any argument meant to defeat that premise. Regardless of whether what we have in these cases is “proof” in the requisite sense, I think it is clear that we have still more evidence that renders belief in the reality of these experiences still more reasonable—for, once again, the skeptic must produce still more ad hoc hypotheses to explain away the platypus, whereas the conclusion that these experiences are simply what they appear to be can easily account for all of it at once. And on that basis, as well as on the basis of the overwhelming philosophical problems facing any attempt to reduce consciousness to mechanism in principle—an attempt which so far has, by the admission of materialists themselves [2], come nowhere near completion in the first place—I can only conclude that belief in the reality of the near death experience is entirely justifiable and reasonable, whether every imaginable alternative explanation can be definitively proved categorically inconceivable or not—as this is simply an illegitimate standard to impose on the question of whether or not a conclusion can be considered justifiable and reasonable.

_______ ~.::[༒]::.~ _______

2.4

One final supplementary point: suppose the near death experience occurs precisely when it appears to. We have good reason to believe that it does in the cases of veridical perception referenced above—and these would count as compelling evidence that the experience occurs at precisely the time it appears to even if the experience were to turn out to be a pure hallucination of some sort after all; for at the very least, the hallucination would be occurring, and somehow incorporating these perceptual details, at the time of clinical death and not after. Consider the way near death experiencers so widely report being deliberately “sent back” by the figures they encounter in the experience before it’s over. If the near death experience is simply the ‘hallucinatory phantasmagoria’ of a dying brain, how does it know to build this into the narrative of the vision, from a state of severely impaired near–unconsciousness, in advance of the actual resuscitation? Every single person reading this knows that even our dreams don’t typically end through any sequenced narrative marking our transition into wakefulness—they usually just end. At most, we might be familiar with falling asleep in a vehicle and watching our dream incorporate something like tripping face–first over a branch in the woods as we snap awake in response to riding over a particularly jarring bump. But few people ever have an experience of anything like the characters in their dream explaining to them in an elaborate narrative how it’s approaching time to wake up. And this, despite the fact that (1) our brains are not severely physiologically impaired during dreaming, and (2) the process of waking is usually more or less led and managed by the same brain conducting the dream—so it should be far more capable here than in the case of resuscitation from death of coordinating the contents of the dream with reality in advance. How, then, should the brain suddenly acquire the ability to synchronize its hallucinations with reality so far in advance—with no one ever reporting that they came to consciousness out of a near death experience before the figure could actually finish sending them back into their bodies, mid–sentence?

_______ ~.::[༒]::.~ _______

[1] Best Evidence: 2nd Edition, by Michael Schmicker, which cites John C. Gibbs, “Moody’s Versus Siegel’s Interpretation of the Near–Death Experience: An Evaluation Based on Recent Research.”

[2] Paul Churchland: “Consciousness is almost certainly a property of the physical brain. The major mystery, however, is how neurons achieve effects such as being aware of a toothache or the smell of cinnamon. Neuroscience has not reached the stage where we can satisfactorily answer these questions.” 

Francis Crick: “What remains is the sobering realization that our subjective world of qualia—what distinguishes us from zombies and fills our life with color, music, smells and other vivid sensations—is possibly caused by the activity of a small fraction of all of the neurons in the brain, located strategically between the inner and outer worlds. [But] how these act to produce the subjective world that is so dear to us is still a complete mystery.”

Even though these authors profess confidence (and quite definitely resounding confidence elsewhere in other writings) that consciousness is “produced” by the processes of blind physical mechanism in the brain, they confess—in more honest moments—that they have no idea “how” this could be the case. Leaving aside the arguments I’ve stood by throughout this series that this very concept is simply confused in principle, how does someone justify claiming to know that an empirical claim is true without having any idea “how”? The analogy with the dualist’s claim that interaction takes place is unfounded, for reasons I have explained.

[3] I need donations of my own here just to try to survive—but if you’re interested in finding out more about these cases, consider supporting the effort to translate Titus Rivas (et al.)’s work compiling more than 80 new verified cases of corroborated perception into English from Dutch at the International Association of Near Death Studies (IANDS).

Consciousness (XII) — From Chalmersian “Laws” to Transmigration

_______ ~.::[༒]::.~ _______

1.

Questions about the ontological status of “laws” feature prominently in many debates between atheistic and theistic philosophers. In his 1859 Treatise on Theism, Francis Wharton (the author of Wharton’s Rule of Concert of Action, which states that guilt of conspiracy to commit a crime requires more parties than are necessary to commit the crime itself) writes: “The existence of a comprehensive and beneficent system of law, in fact, is the strongest evidence of the existence of a Divine lawmaker… There is a vital distinction between a causal law, i.e. one that rules the genesis of events, and an empirical law, one that merely registers their occurrence. There is a vital distinction, for instance, between the time–tables issued from period to period by the officers of an extended railroad and the systematized observation of running by even a long and accurate series of travelers. The records of the latter are open to error… let a traveler rely on the latter, and he will find that though in a mere statistical point of view the results, like empirical laws in general, are interesting as helps to the memory, and useful as the base for business tables, they are in themselves of no permanent and absolute value as indications of the future. … The results of empirical observation are, therefore, incapable of becoming permanent laws for the future.”

Emanuel Haldeman–Julius (founder of Haldeman–Julius Publications) and Rev. Burris Jenkins debated the question in a 1930 debate titled “Is Theism a Logical Philosophy?” In the negative argument, Haldeman–Julius writes: “The fundamental error is found in the theist’s habit of confusing a human law with a natural “law.” A legislature passes a law saying that after a certain date it shall be illegal to behave in a certain way, to have liquor, for instance. If you break this law, and are not caught, nothing happens except the usual next morning headache. If you are caught, you may be sent to the penitentiary. Or let us say that the people make up their minds to break the law so flagrantly that enforcement falls down and the law is either ignored or repealed. That is a human law. That implies a lawmaker, of course. But it is treacherous logic to say the “laws” of nature are the result of the will of a lawmaker. The scientific use of the word “law” as applied to nature means only this: things in nature act in certain ways — their movements are Uniform — and when you use the word “law” you merely describe how things are observed to conduct themselves.”

In the modern day, the theistic philosopher Keith Ward writes: “The existence of laws of physics does not render God superfluous. On the contrary, it strongly implies that there is a God who formulates such laws and ensures that the physical realm conforms to them.” Bede Rundle, in an atheistic response in ‘Why There is Something Rather than Nothing,’ writes that: “[I]t is wrong to regard laws of Nature as basic. That status goes to whatever it is—the characteristics and behaviour of particles, gases, and so forth—that the laws codify. Indeed, the notion of a natural or physical law, or at least the use to which this is put, is often questionable. Not because there is no place for the notion, but because those who insist on the reality of such laws tend to model them on legal laws, as if the natural variety likewise enjoyed an independence of the actual behaviour of individuals, to the point even of antedating and dictating that behavior. … it is not as if God might rewrite the laws of Nature and inanimate things, being now differently governed, would thereupon proceed to behave differently—though just some such view was in no way foreign to the seventeenth–century conception of laws of Nature as divine commands. With legal laws there is an intelligible relation between the law and behaviour: understanding a law, and having a motive to act in accordance with it, we act. Substitute inanimate bodies for comprehending agents and we sever any such intelligible tie; the law is in no sense instrumental in bringing about accord with it. … What would God have to do to ensure that atoms, say, behave the way they do? Simply create them to be as they in fact are. Atoms having just those features which we currently appeal to in explaining the relevant behaviour, it does not in addition require that God formulate a law prescribing that behaviour.” Again, the point is addressed by David Marshall Brooks in The Necessity of Atheism: “A “law” of nature is not a statute drawn up by a legislator; it is the interpretation and the summation which we give to the observed facts. The phenomena which we observe do not act in a particular manner because there is a law; but we state the “law” because they act in that particular manner. [So] it cannot be said that the laws of nature are the result of a lawmaker….”

John Lennox writes in God’s Undertaker: Has Science Buried God? that: “We certainly expect to be able to formulate theories involving mathematical laws that describe natural phenomena, and we can often do this with astonishing degrees of precision. However, the laws that we find cannot themselves cause anything. Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” He even makes the interesting note that “the much maligned William Paley” recognized the point: “It is a perversion of language to assign any law, as the efficient, operative cause of any thing. A law presupposes an agent; for it is only the mode, according to which an agent proceeds: it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.”

Likewise, C. S. Lewis writes in Chapter 1 of Mere Christianity that “When you say that falling stones always obey the law of gravitation, is not this much the same as saying that the law only means “what stones always do”? You do not really think that when a stone is let go, it suddenly remembers that it is under orders to fall to the ground. You only mean that, in fact, it does fall. In other words, you cannot be sure that there is anything over and above the facts themselves, any law about what ought to happen, as distinct from what does happen. The laws of nature, as applied to stones or trees, may only mean “what Nature, in fact, does.”” In the atheist Richard Carrier’s response to Victor Reppert’s presentation of one of C. S. Lewis’ arguments in C. S. Lewis’s Dangerous Idea, he writes: “The “law” of gravity … ‘is’ in every place and time where the physical conditions that manifest gravity exist.” A thousand more examples await anyone who searches a phrase like “laws lawgiver atheism Christianity.” Keep this point in mind and try not to get lost in the chaos of changing topics—the reason for it will eventually become clear: “laws” are only our descriptions of what actually existing things do. By and large, the atheist’s only option is to say that the actually existing things whose behavior the “laws” describe are physical objects and forces themselves. The theist can (though need not) adopt a kind of idealism and say that the “agent” responsible for the law is in fact the ordering power of the mind of God—and not the intrinsic properties of physical objects and forces themselves—but to avoid this possibility, the atheist’s only option—again—is to contend that it is the intrinsic properties of physical objects and forces themselves which are actually directly responsible for the behaviors which we label, after the fact, in the terminology of “laws.”

_______ ~.::[༒]::.~ _______

2.

This won’t, perhaps, be the most efficient way for me to make the following point, but I’d like to illustrate it this way for a reason. Consider, for a moment, the Kalām cosmological argument. Kalām attempts to proceed from the premise that the Universe began to exist (supported by empirical premises acquired from Big Bang cosmology, and philosophical premises regarding paradoxes implying that actual infinites—such as an actually infinite past—cannot possibly exist in reality) and the premise that anything that begins to exist must have a cause, to the conclusion (supported by a variety of further considerations) that the cause of the Universe must be “God” (a changeless, timeless, singular disembodied mind). What are the most plausible ways out of this argument for the atheist?

Some will simply reject the premise that everything that begins to exist must have a cause—usually, this is done by arguing that the initial coming into existence of the Universe is a special case because, as Gott, Gunn, Schramm, and Tinsley write: “…time [itself was] created in that event … [so] it is not meaningful to ask what happened before the big bang; it is somewhat like asking what is north of the North Pole.” For my part, I simply cannot bring myself to find this approach even slightly plausible. It is incoherent to ask what is north of the North Pole, but it is not incoherent to ask what is above the North Pole—there is something above it even if there isn’t something “north” of it, and if someone were to ask what was “north” of the North Pole, this would, in all probability, be what they actually meant. Notice that in this statement, Gott, Gunn, Schramm, and Tinsley themselves cannot escape causal language: “time,” they write, “[was] created.” While it should go without saying that the ways in which we happen to use language don’t necessarily entail any particular philosophical truths, I think in this case it reflects the fact that we simply can’t coherently think in any other but causal and temporal terms—and I can only infer that this is because in this case the idea simply is as incoherent as it would be to say that the North Pole exists with nothing “above” it (not even space?). Critics of this approach make the further point that it isn’t a logical necessity that all causes precede their effects “within time”—for example, Kant famously asked his readers to imagine a Universe where a heavy ball sat on a cushion from the moment that Universe came to exist: the ball would be the cause of the depression in the cushion even though the ball (or the pressure from the ball) did not precede the cushion (or the depression within the cushion) in any temporal sequence of events within that Universe—so that even if it were meaningless to ask what came “before” the Big Bang, this still wouldn’t render it meaningless to ask what caused it.

More plausibly, then, the atheist can attack the premise that an infinite past is either logically impossible or scientifically ruled out by modern cosmology. For example, on some multiverse cosmologies, a quantum void of some sort could be the “heart” of all Existence, the changeless center–point from which every contingent, changing universe is born—through quantum physical mechanisms rather than the intentional conscious acts of a God. (Note as well that the very fact that these hypotheses exist already shows that it is not meaningless to ask what preceded the Big Bang. We do not know that nothing came before the Big Bang—there are competing hypotheses, and the very idea that the “Big Bang was the objective beginning” scenario should be preferred itself requires the assumption that the notion is coherent to begin with. Asking what happened before the Big Bang is more like asking what is above the highest building we can see than it is like asking what is “north” of the North Pole—if the answer is “nothing,” then that is surprising, and someone who gives this answer has as much of a burden to advance the truth of the claim with a proactive argument as anyone else. If the Big Bang is truly the first moment of “objective time,” then asking what came before it may be like asking what is “north” of the North Pole—but it would have to be demonstrated that we cannot, for all that, still coherently ask the equivalent of whether there is something “above” the North Pole. We do not know that the Big Bang is truly the first moment of “objective time”—indeed, it is hard to see what empirical discovery could ever qualify as confirming absolutely that we know we’ve found the first moment of Time Itself—and part of the very question of whether the inference to that interpretation of our historical–cosmological knowledge is viable requires the premise that a state of affairs where there is nothing “above” a given point in space—nothing “before” a given point in time—is a coherent notion in the first place. If we have a priori reason to think that it is not, then that a priori consideration provides a constraint against what an accurate description or explanation will have to look like in principle.) Let me emphasize that nothing peculiar rests on this being my personal perspective on Kalām—I simply want to use it for an analogy in a point I will tie in much later.

So in any case, look what has happened here: if we take this approach, then we have concluded that the physical universe itself must be eternal in order to escape the need to explain its coming–to–be in a way that might entail the need to account for it with reference to something non–physical—even though we cannot confirm its eternality (any more than we could confirm its finitude) ‘empirically.’ This is, in my view, a legitimate way of reasoning—and I think that if you reject it, you can’t escape the force of the theistic conclusions that would otherwise follow.

Once again, keep this point in mind for later as we move on—and try not to get lost as the subject proceeds through another quite drastic change: the most plausible route for the atheist through the Kalām cosmological argument (in my estimation) is to insist that time’s coming into existence simply doesn’t need to be accounted for with some further explanation, because time never came into existence at all—time is eternal, and the past is infinite. Even if my estimation of this argument is wrong, the analogy will still be relevant, because it at the very least could have turned out that this was the most reasonable way to think about Kalām.

_______ ~.::[༒]::.~ _______

3. 

Perhaps the single biggest problem with Chalmers’ philosophy is that it entails epiphenomenalism. I will argue that a much more substantialist and interactionist view of consciousness than Chalmers’ is the only way to avoid this implication. Chalmers’ approach to defusing the threat that epiphenomenalism poses to the validity of his view is to argue that “what is a problem for all is a problem for none”—that is, to say that epiphenomenalism is a threat to the interactionist account as well, in just the same way, and therefore poses no special problem for his view in particular. I’m going to take the liberty of stating that Chalmers is patently wrong on this point. Contra Chalmers, epiphenomenalism does pose a specific, peculiar threat to his view which it absolutely does not pose to an interactionist view. And every fundamental issue regarding the nature of human consciousness from this point forward intimately turns on this one issue about which Chalmers is unequivocally mistaken.

“What is a problem for all is a problem for none” is a valid approach if the reasoning underlying it is actually valid—but in this case, Chalmers’ reasoning quite plainly is not. Ironically, he realizes his own mistake within the very paragraphs in which he presents this argument—and then steps back and repeats it anyway. We’ll begin to see how significant the consequences of correcting this mistake are soon. He writes: “…All versions of interactionist dualism have a conceptual problem that suggests that they are less successful in avoiding epiphenomenalism than they might seem….” Why? Because “…even on these views, there is a sense in which the phenomenal is irrelevant.” And what sense is that? Experience is irrelevant even on interactionism, according to Chalmers, in the sense that we can always describe a sequence of events without including experience in that description: “We can always subtract the phenomenal component from any explanatory account, yielding a purely causal component.” Thus, consciousness on the interactionist’s account has “…a sort of causal relevance but explanatory irrelevance.” This is the sole line of reasoning on which Chalmers’ decision to commit to epiphenomenalism despite its apparent problems rests: even if consciousness does in fact play a causal role in reality, we could talk as if it doesn’t—therefore, it doesn’t matter whether our theory allows that consciousness actually does play a causal role in reality or not. And so, “the denial of the causal closure of the physical therefore makes no significant difference in the avoidance of epiphenomenalism.”

Chalmers’ reasoning on this point is uncharacteristically sloppy, and mired in straightforward and inexcusable confusion. It doesn’t matter whether we can talk as if consciousness plays no causal part in reality. What matters is whether or not it actually does. On Chalmers’ view, it doesn’t. On an interactionist account, it does. With respect to the threat of epiphenomenalism, that is everything that matters (and it matters tremendously). We will see that correcting this error has deep consequences for where the lines of reasoning I’ve been defending and arguing for here (some points of which are borrowed from Chalmers or at least take Chalmers as their starting point) ultimately end up taking us.

The question posed by epiphenomenalism is whether consciousness actually is a causally relevant feature of reality—and Chalmers does not actually deny that consciousness is causally relevant on the interactionist picture of reality. He merely suggests that the “explanatory irrelevance” which he claims consciousness has on interactionism is somehow just as bad as the causal irrelevance which consciousness has on his view. But what does “explanatory irrelevance” actually mean, here? What concept is Chalmers actually using that phrase to express? It means here that we can create a story and leave a given feature out of our description. But from the fact that we can create such a story, it does not follow that this “story” actually describes reality as it actually is. I can create a story of World War II that does not mention Hitler, or anti–Semitism. Does that give Hitler, or anti–Semitism, “a sort of causal relevance but explanatory irrelevance” with respect to the events of World War II? (What the hell, Chalmers?)

If an “explanation” is something that actually describes reality, then consciousness cannot have “explanatory irrelevance” in spite of “causal relevance”—period. An “explanation” that does not explain why reality actually is as it actually is, is no “explanation” at all. “Causal relevance” would mean that consciousness as such is, in fact, part of the reality that we want to describe. From the fact that we can create false descriptions of reality, nothing of any significance—nothing about reality as it actually is—follows. On interactionism, any true description—that is, any actual “explanation”—will in fact be the one which keeps consciousness intact. I can describe reality without referring to consciousness—but so what? I could also describe World War II without mention of Hitler, or anti–Semitism. I could also describe the phenomenon we call “gravity” without referring to the structure of space–time—by simply restricting myself to talk about material objects, and saying for example that “objects undergo an intrinsic pull to move towards the largest object closest to them.” Does it follow from this that the physical structure of space–time itself is “irrelevant” to reality in any meaningful way on a viewpoint that takes the gravitational force as such to be one of reality’s fundamentals? Absolutely not; and the suggestion would be, quite frankly, idiotic.

On an interactionist view of consciousness, “can” I give an account of any sequence of events which leaves the causal contributions of consciousness out of the story? Sure. But will that story be true? No—and that is the only point that matters. Supposing for a moment it were true that a God created the Universe, I “could” in that case nonetheless give an account which leaves God’s irreducibly intentional conscious act of creation out of the story. Would that make God “explanatorily irrelevant” to the world’s creation, and would this “explanatory irrelevance” therefore be just as good as atheism? No—because my “story” would be just that—a “story”—and not a real description of why reality actually is as it actually is. If an “explanation” is supposed to actually explain why things actually are how they actually are, then, assuming God created the Universe, the account which left God out would not be an “explanation” at all—so if God did in fact create the Universe, God would not be “explanatorily irrelevant” to how the Universe was created. But on Chalmers’ view, when I give those accounts of sequences of human behavior which leave consciousness out of the causal story, those accounts are true—they are accurate descriptions of why what happened actually happened. Consciousness–as–such, on Chalmers’ account, is therefore “explanatorily irrelevant” because it is causally irrelevant. And Chalmers cannot weasel out of that by simply pointing out that we could tell a story which leaves consciousness out even if consciousness were in fact part of the correct story—the fact that we “could” tell that story is irrelevant. I “can” tell a story in which unicorns replaced the role ordinarily played by either God or the Big Bang singularity in the story of the creation of the Universe. The fact that I “can” tell such a story entails absolutely nothing whatsoever about reality except that I am capable of saying something that is untrue about reality. To suggest that that is in any way comparable to the scenario in which either God or a Big Bang singularity actually did in fact play no causal part in bringing the Universe as we know it into existence is a shameless piece of confusion.

_______ ~.::[༒]::.~ _______

4. 

However, there is one legitimate basis—in Chalmers’ defense—for his having acquired this confusion: John Eccles, whom Chalmers quotes in this section as the substantialist–interactionist example, takes some steps in drawing his account which inadvertently lend support to an intuition resting on a misunderstanding of dualism. Though Eccles himself seems not to actually commit the mistake in his own mind, he speaks in a way that doesn’t always make this clear—and in doing so he does the public relations side of interactionism a disservice. Allow me to illustrate.

In 1989, John Eccles published a paper titled “A unitary hypothesis of mind–brain interaction in the cerebral cortex.” Much of the paper consisted of a legitimate argument showing that quantum physics allows viable room (or did, given the most up–to–date knowledge of the time) for irreducibly conscious causation. But the way Eccles spoke of conscious causation is unfortunate and extremely amenable to a common conceptual confusion. He writes: “The hypothesis has been proposed that all mental events and experiences, in fact the whole of the outer and inner sensory experiences, are a composite of elemental or unitary mental experiences at all levels of intensity. Each of these mental units is reciprocally linked in some unitary manner to a dendron … Appropriately we name these proposed mental units ‘psychons.’ … It may seem that in this intimate linkage of dendrons and psychons the new unitary hypothesis of dualist interactionism is merely a further refinement of the materialist identity hypothesis … [but] this is a mistake. Independence of existence is accorded to psychons….” This way of speaking suggests a model where psychons are analogous to dendrons at least in that they are discrete, quantifiably measurable units. And this naturally predisposes anyone trying to visualize Eccles’ “psychon–dendron interaction” to picture it on a par with the mechanical type of interaction that takes place between dendrons and dendrons themselves, or between billiard balls on the very Newtonian picture of reality whose accuracy the rest of the paper denies.

On the one hand, Eccles seems to realize the hazards in his way of speaking—for he clarifies: “Psychons are not perceptual paths to experiences. They are the experiences in all their diversity and uniqueness.”

Yet, on that same page, Eccles draws a picture of the connection between “dendrons” and “psychons.” And this is unfortunate, for it suggests yet again, at least implicitly, that consciousness is the kind of thing that could possibly be drawn. But anything that could be drawn would be, by definition, a physical–relational structure—and the arguments for dualism proceed largely precisely through illustrating the very fundamental insight that qualitative conscious experiences and intentionality can’t be analyzed or understood through descriptions of mechanistic operations between physical–relational structures in the first place. This makes even accidentally construing consciousness as something that could be analyzable in terms of the causal properties of discretely quantified units incredibly unhelpful—the fact that consciousness is not the kind of thing that could possibly be “graphed” in principle is exactly the point that the dualist needs to insist on until it ‘clicks’ in the mind of his opponent. While Eccles seems to have realized this, it doesn’t pervade his way of speaking or representing the principles he tries to talk about—and that lack spills over into Chalmers’ comprehension of what an interactionist consciousness would be like as well. The fact that we cannot, in principle, visualize interaction between subjective/qualitative consciousness and objective/quantitative physical structure is exactly the point Descartes emphasized to his critics centuries ago: “[the interaction problem] arise[s] merely from [critics’] wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” Eccles simply should not have encouraged this already all–too–easy psychological tendency by trying to “imagine” (i.e., represent with images) the process of interaction, even with caveats.

All Eccles actually wanted to communicate in the section in which this drawing appears is that there is a causal link of some kind between the qualitative properties of a conscious experience and the physical states of the brain—which no one has failed to understand must necessarily be true in some way from the moment it was first observed that blows to the head or intake of alcohol or bad food could alter the state of one’s consciousness. Very little is contributed to that point by speaking of “psychons” or implying, even inadvertently, that the “psychons” composing our qualitative conscious experiences and intentionalistic conscious thoughts could actually be represented by a diagrammatic drawing of physical structures—and an opportunity to emphasize something extremely crucial to the interactionist idea is lost.

Qualitatively subjective and intentionalistic consciousness is the very medium in which all our thoughts exist, and these qualitative, subjective, and intentionalistic properties are without exception the sole and exclusive mode in which they have their existence. Yet, in all cases except for thinking about consciousness itself, when we turn our attention to causal relationships, we are overwhelmingly used to thinking of mechanical interactions between structures. This is exactly why getting an intuitive grasp on the mind–body problem is so hard—universal habit has ingrained in us, in all other cases except this one, the tendency to think of causation in terms of mechanistic procedures mediating structurally depictable forces and objects through space. However much Eccles may have tried to caveat his illustration with the emphasis that it is not a depiction of mind–brain “identity,” Chalmers’ confusion is all the confirmation needed to show that Eccles—and, in general, those who defend the idea that consciousness as such is an irreducible and causally active component of reality in its own right—should go much farther to guard against these overwhelming psychological tendencies. To properly understand what interactionist dualism entails, we must guard at all times against the tendency to revert in habit to depicting irreducibly qualitative, subjective, and intentionalistic consciousness’ interactions with the structures of the physical world by close analogy with mechanical interactions between physical structures within the physical world. And as Eccles unfortunately slips back just far enough into this habit to make his explanations easily amenable to this very rampant confusion, the confusion is carried over by Chalmers, who now proceeds to take the structures Eccles has used to represent the fundamentally non–structural and point out that we could have these same structures perform their structural work without their needing to be conscious at all—and this is how Chalmers ends up with the confused and mistaken conclusion that dualism does not truly provide an “out” from the threat of epiphenomenalism created by combining the premise of causal closure of the structurally depictable physical dimensions of reality with the premise of consciousness’ irreducibility to those physical structures.

Chalmers writes: “Imagine (with Eccles) that ‘psychons’ in the nonphysical mind push around physical processes in the brain, and that psychons are the seat of experience.” Already, Chalmers supposes that “psychons” are first and foremost structural entities which “push” in virtue of their structural properties—and happen to be “the seat of experience” only as a secondary coincidence. The picture Chalmers is getting—not unreasonably, given the unfortunate way Eccles chose to represent it—is that a “psychon” is really a structural sort of thing that has causal dispositions in virtue of its structure, with conscious properties somehow tagging along as an “extra” to the mechanical processes they physically engage in. Indeed, on this picture, a “dualism” of this kind would have no virtues whatsoever over materialism: why not tack those conscious properties onto material structures instead of whatever these extra ghostly structures are? But this is unequivocally the wrong way to think about interactionist dualism. Interactionism is not the idea that there is some special kind of “stuff” that does what it does in virtue of the structural kind of “stuff” that it is, but which then just happens to have conscious experiences tacked onto it as a secondary coincidence—such that it would have remained the same basic kind of “stuff” had we removed these secondary, coincidental experiential properties. Interactionism is the idea that conscious experience itself is the “special stuff,” and that the “stuff” that is conscious experience itself interacts with the rest of the physical world in a distinct way that is nonetheless every bit as unanalyzable and basic as mechanical causation between purely structural entities themselves—remove the properties of experientiality from consciousness, and you don’t have a structural kind of “stuff” left minus a certain extra tacked–on property; nothing is left at all, because consciousness just is the phenomenon of subjective experience. And while it appears that Eccles understands that this is not the way to think about it—he warns against taking his language as implying the suggestion that we should—his language is nonetheless incredibly hard to read as adding anything new to the picture except that very suggestion. The point that should be emphasized is that consciousness is not analyzable in terms of structure, and that it is consciousness itself—defined by the essential, irreducible properties of qualitative subjectivity and intentionality—which is causally relevant.

This is again exactly the core point addressed in the previous entry, “The Nature of Scientific ‘Explanation’ and the Interaction ‘Problem’”: Descartes, in addressing his opponents’ objections to the idea that consciousness and the (rest of the) physical world could possibly interact, in principle, if they were different in anything like the way Descartes suggested they were, responded that the problem “arise[s] merely from [critics’] wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” If consciousness itself is a “fundamental” entity unanalyzable in any other more fundamental terms, then conscious–physical interaction is just as ultimately unanalyzable in any but its own category—in principle—as physical–to–physical causation is at the bottom line. As James Moreland was also quoted as saying in that essay, “One can ask how turning the key starts a car because there is an intermediate electrical system between the key and the car’s running engine that is the means by which turning the key causes the engine to start. The ‘how’ question is a request to describe that intermediate mechanism. But the interaction between [consciousness] and [the brain] may be … direct and immediate. [And if] there is no intervening mechanism, [then] a ‘how’ question describing that mechanism does not even arise”—just as it would not arise—and could not be answerable even in principle—were we to ask something like “How does pushing the gas pedal cause it to move?” The relationship between pushing and being pushed is simply one of the most basic fundamental terms in which physical causation takes place—and this relationship is simply unanalyzable in terms of anything more basic than itself. Yet notice that when we speak of the “intervening mechanism[s]” of the electrical system structurally mediating the structural relationship between the key and the car’s running engine, every single one of our most ultimate terms will involve direct and immediate, unmediated interactions at each step—each ingredient of our “explanation” of the intermediate steps of causation between the key and the car’s running engine will, individually, be as unanalyzable itself as the question of “how pushing the gas pedal causes it to move.” At the core of every sort of “explanation” that we are actually capable of are terms which themselves simply cannot in principle be “explained.” The interactionist suggestion is therefore simply that mental–to–physical causation is, at the bottom line, one of the “bedrock and thus unexplainable” kinds of events that take place in the world, rather than one of the kinds which are “secondary and derivative from other bedrock terms and thus explainable through reduction to those other, ultimate terms.”

The interactionist dualist suggests that interaction between irreducible consciousness and physical structure is every bit as “basic” and direct—and therefore as unanalyzable in terms of anything more fundamental—as the most basic ingredients of the terms of explanation of mechanical interactions between physical structures. Our inability to analyze or “explain” the nature of irreducibly conscious interaction is not like an inability to explain how a key causally connects to a car engine—it is analogous to our inability to analyze or “explain” the most basic and irreducible terms of physical causation itself, such as how the mass of an object of a given size and density causes the surrounding spacetime fabric to curve, or how an object’s velocity at a given moment causes its velocity in the very next moment. The only answer that is possible even in principle for these questions is to simply accept that the very terms themselves are primitive and irreducible. The dualist suggestion is thus that my irreducibly conscious intention to move my hand causes the neurological process which results in my hand’s movement in just as basic and unanalyzable a way as the other examples just given here—and this is how dualism escapes the threat of epiphenomenalism. Eccles, by speaking as if consciousness interacts with the brain through intermediating mechanisms that can be diagrammatically visualized as physical structures—even if that is not what he meant—obscures the overwhelming significance of this basic and fundamental point, and it is understandable why Chalmers ends up confused. In other words, Eccles rather inadvertently pushes the problem back a step: rather than consciousness–as–such interacting directly with dendrons, if this interaction is mediated through psychons that can be depicted in any structural sort of way, we have merely pushed the issue back to consciousness–as–such interacting directly with psychons to achieve the effects which psychons are capable of producing on dendrons. Why include “psychons” in the middle of the picture at all, except to get around the fact that we can’t directly visualize mental–to–physical interaction in principle—which is precisely the crucial point the dualist should be emphasizing in the first place?!

Thus, Chalmers writes that “We can tell a story about the causal relations between psychons and physical processes, and a story about the causal dynamics among psychons, without ever invoking the fact that psychons have phenomenal properties. … It follows that the fact that psychons are the seat of experience plays no essential role in a causal explanation, and that even in this picture experience is explanatorily irrelevant.” No. We could only tell a story about “psychons”—and actually be describing “psychons”—if “psychons” were in fact performing their causal relations with physical processes in virtue of their structural properties. (And while I have objected above that that is the impression Eccles indirectly went a long way to feed, it is not what he was actually trying to say.) But if the phenomenal properties themselves are causally active, then our so–called “story” simply is not a description of reality itself—and it is therefore no “explanation” in any meaningful sense of the word. Eccles’ unfortunate way of speaking implicitly lends itself to Chalmers’ faulty interpretation. Correcting it to throw out Eccles’ useless and misleading neologism: the entire point of the dualistic position is to say that consciousness is not a fundamentally structural phenomenon which simply happens to possess “phenomenal properties” as if by accident—“consciousness” itself fundamentally is those very “properties.” Consciousness itself just is the basic phenomenon of experientiality itself. Consciousness—experience itself—is what is playing the causal role, if dualism is right.

That avoids epiphenomenalism in a way so obvious that it is inexcusably purblind not to see it. The premise that the structural and mathematically and spatially definable (i.e., “physical”) aspects of reality are “causally closed,” combined with the premise that phenomenal and intentionalistic consciousness is neither identical to nor “composed of” mathematical and spatial structures, leads specifically to epiphenomenalism by modus ponens. Furthermore, this poses an absolutely insuperable problem for Chalmers’ view of consciousness for a reason Chalmers himself explicitly acknowledges. And it is unbearably obvious, when looking at that reason, why eliminating the premise of causal closure of the “physical” eliminates the entailed conclusion of epiphenomenalism. Again, against Chalmers’ attempt to use a meaningless notion of “explanatory irrelevance” to escape the claim that his view peculiarly winds up in epiphenomenalism: epiphenomenalism does specifically threaten Chalmers’ view in particular, because it follows by modus ponens from the combination of the conclusion that consciousness as such is irreducible to physical structure with the premise that interaction between physical structures is “causally closed.” And this problem ends up absolutely slicing the legs off of all of Chalmers’ further proposals from here. The only valid option that Chalmers (or anyone who follows his reasoning up to here) is left with is either to go back on all of the antireductionist arguments that got us here in the first place and become reductionists or eliminativists of some kind, or else to drop “causal closure.”
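Since so much from here forward turns on this inference, let me spell its structure out schematically. (This is my own compressed summary of it, not Chalmers’ wording.)

(1) Every physical event has a sufficient, purely physical cause (“causal closure of the physical”).
(2) Conscious experience is neither identical to nor composed of physical structure (irreducibility).
(3) If (1) and (2) are both true, then conscious experience never causes any physical event.
(4) Therefore, conscious experience never causes any physical event; and that conclusion is exactly what the word “epiphenomenalism” names.

To escape (4), one must reject (1), reject (2), or find a flaw in (3); and since Chalmers retains both (1) and (2), (4) follows for his view in particular.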

In The Conscious Mind: In Search of a Fundamental Theory, Chalmers writes (pp.216–217): “[There are] constraints … in generating a theory of consciousness. The most obvious is the principle we rely on whenever we take someone’s verbal report as an indicator of their conscious experience: that people’s reports concerning their experiences by and large accurately reflect the contents of their experiences. … If the principle turned out to be entirely false, all bets would be off: in that case, the world would simply be an unreasonable place, and a theory of consciousness would be beyond us. In developing any sort of theory, we assume that the world is a reasonable place, where planets do not suddenly pop into existence with fossil records fully formed, and where complex laws are not jury–rigged to reproduce the predictions of simpler ones. Otherwise, anything goes.”

But now recall the central core of my argument against epiphenomenalism in entry (IV) of this series, which was precisely the fact that it would render us incapable of ever talking about the qualitative properties of consciousness as such, in principle!

Quote: “In Jaegwon’s words, what the principle states is that ‘if we trace the causal ancestry of a physical event we need never go outside the physical domain.’ What Jaegwon Kim realized was that if we combine this claim with the realization that subjective experience can’t be reduced to or accounted for in terms of physical mechanism, then we end up with a description of reality known as epiphenomenalism, on which—roughly—experiences more or less dangle off the edges of the world before simply falling off (I’ll explain this more in a minute). Jaegwon’s description of the state of play was thus that the choices are to either claim that subjective experience can be reduced to physical description (which he had, by then, seen the same compelling reasons to reject that I am outlining here), reject the principle of causal closure, or else accept epiphenomenalism….

… One of the easiest ways to explain an epiphenomenalist relationship is by example. If you stand in front of a mirror and jump up and down, your reflection is an epiphenomenon of your actual body. What this means is that your body’s jump is what causes your reflection to appear to jump—your body’s jump is what causes your real body to fall—and your body’s fall is what causes your reflection to appear to fall. It may seem to be the case that your reflection’s apparent jump is what causes your reflection to appear to fall, but this is purely an illusion: your reflection doesn’t cause anything in this story; not even its own future states. …

If epiphenomenalism were true, no one would ever be able to write about it. In fact: no one would ever be able to write—nor think—about consciousness in general. No one would ever once in the history of the universe have had a single thought about a single one of the questions posed by philosophy of mind. Not a single philosophical position on the nature of consciousness, epiphenomenalist or otherwise, would ever have been defined, believed, or defended by anyone. No one would even be able to think about the fact that conscious experiences exist.

And the reason for that, in retrospect, is quite plain to see: on epiphenomenalism, our thoughts are produced by our physical brains. But our physical brains, in and of themselves, are just machines—our conscious experiences exist, in effect, within another realm, where they are blocked off from having any causal influence on anything whatsoever (even including the other mental states existing within their realm, because it is some physical state which determines every single one of those). But this means that our conscious experiences can never make any sort of causal contact with the brains which produce all our conscious thoughts in the first place. And thus, our brains would have absolutely no capacity to formulate any conception whatsoever of the existence of those experiences—and since all conscious thoughts are created by brains, we would never experience any conscious thoughts about consciousness. For another diagram, if we represent causality with arrows, causal closure with parentheses, physical events with the letter P, and experiences with the letter e, the world would look something like this:

… e1 ⇠ (((P⇆P))) ⇢ e2 …

Everything that happens within the physical world—illustrated by (((P⇆P)))—would be wholly and fully kept and contained within the physical world, where conscious experiences as such do not reside; the physical world is Thomas Huxley’s train, which moves whether the whistle on top blows steam or not. And e1 and e2 float off of the physical world—for whatever reason—and then merely dissipate into nothingness like steam, with no capacity in principle for making any causal inroads back into the physical dimension of reality whatsoever. This follows straightforwardly as an inescapable conclusion of the very premises by which epiphenomenalism defines itself. But since the very brains which produce all our experienced thoughts are contained within (((P⇆P))), any experienced thought about conscious experience itself would (per epiphenomenalism) have to be the epiphenomenal byproduct of a brain state that is somehow reflective or indicative of conscious experience. But brain states, again because per epiphenomenalism they belong to the self–contained world inside (((P⇆P))) where no experiences as such exist, are absolutely incapable in principle of doing this.

To refer back to our original analogy whereby epiphenomenalism was described by the illustration of a person jumping up and down in front of a mirror, then: it would be as if the mirror our brains were jumping up and down in front of were shielded inside of a black hole in a hidden dimension we couldn’t see. Our real bodies [by analogy, our physical brains] would never be able to see anything happening inside that mirror. And therefore, they would never be able to think about it or talk about it. And therefore, we would never see our reflections [by analogy, our consciously experienced minds] thinking or talking about the existence of reflections, because our reflections could only do that if our real bodies were doing that, and there would be absolutely no way in principle that our real bodies ever could.

The fact that we do this, then—the fact that we do think about consciousness as such, and the fact that we write volumes and volumes philosophizing about it, and the very fact that we produce theories (including epiphenomenalism itself) about its relation to the physical world in the first place—proves that whatever the mechanism may be, conscious experiences absolutely do in fact have causal influence over the world. What we have here is a rare example of a refutation that proceeds solely from the premises of the position itself, and demonstrates an internal inconsistency.

But Jaegwon Kim has identified the possible options for us: either experiences and physical events are just literally identical (which even Kim himself rejects, for good reasons we have outlined here), or else epiphenomenalism is true (which Jaegwon Kim accepts, but which the simple argument outlined just now renders completely inadmissible)—or else the postulate of the causal closure of the physical domain is false—and conscious experience is both irreducible to and incapable of being explained in terms of blind physical mechanisms, and possesses unique causal efficacy over reality all in its own right.”
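To make the structure of the quoted refutation fully explicit, it can be compressed into a simple modus tollens (again, my own schematic summary, adding nothing to the premises just quoted):

(1) If epiphenomenalism is true, then conscious experiences exert no causal influence on anything physical, including brains.
(2) Every thought, report, and written theory about conscious experience is produced by a brain.
(3) So, if epiphenomenalism is true, no thought, report, or theory about conscious experience as such could ever be produced.
(4) Such thoughts, reports, and theories demonstrably are produced; epiphenomenalism itself is one of them.
(5) Therefore, epiphenomenalism is false.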

It should be too obvious to need stating that supposing that consciousness itself is a causally active phenomenon in the world, irreducible to any others, avoids the conclusion that consciousness is causally inactive (which is exactly all that is meant by the term “epiphenomenalism”). Yet, Chalmers wants to retain the principle of causal closure: “[On] the dualist view I advocate … causal closure of the physical is preserved; physics, chemistry, neuroscience, and cognitive science can proceed as usual. In their own domains, the physical sciences are entirely successful. They explain physical phenomena admirably; they simply fail to explain conscious experience.” In other words, explanations of what happens in the world—according to Chalmers—are absolutely complete without any reference to conscious experience. This can only be true if consciousness is causally inactive and therefore explanatorily irrelevant.

To avoid this implication, Chalmers wants to say that epiphenomenalism isn’t a special problem for his view in particular because the dualist faces it too. But to defend this suggestion, he says something absolutely witless: that consciousness is “explanatorily irrelevant” on the dualist’s account as well, because we can make up a story of things that doesn’t make reference to consciousness. But the difference is whether that story is correct, or not. On the dualist’s account, any such story will be false, because on the dualist’s account, consciousness–as–such is a phenomenon that does play a direct role in “what happens.” But on Chalmers’ account, the point is that the account that makes no reference to consciousness will be true. And that is the only difference that means anything. If by “explanation” we mean an account that actually explains why things actually do what they actually do, then consciousness—being, per interactionism, causally relevant—is not “explanatorily irrelevant”; whereas, for Chalmers, so long as he continues to hold on (as I contend he shouldn’t) to the principle of causal closure of the physical and insists (as I contend he should) that consciousness is irreducible to any explanation in terms of physical structure, epiphenomenalism follows by strict modus ponens.

Epiphenomenalism is an absolute non–starter—not only for the independent reason explained above (that it is refuted by the fact that we demonstrably do think and talk about consciousness–as–such), but specifically against Chalmers’ own view for a specific reason he identifies: “The most obvious” assumption needed for Chalmers’ proposal for a theory of consciousness to be even slightly imaginable “is the principle we rely on whenever we take someone’s verbal report as an indicator of their conscious experience: that people’s reports concerning their experiences by and large accurately reflect the contents of their experiences.” If epiphenomenalism is true, then no one is ever capable of giving a single accurate report of their conscious experiences as such. Chalmers is committed to epiphenomenalism so long as he commits to both the irreducibility of consciousness to the “physical” properties of reality, and to the premise that those “physical” properties are “causally closed” with respect to each other. And interactionism is the way out, unless Chalmers wants to take back his arguments (which I consider resoundingly successful) against “reductive” accounts of conscious experience which try to “identify” it (one way or another) with something other than conscious experience itself—but these are the very arguments he has made his name by (and again, I wholeheartedly endorse them).

In Chapter 5 of The Conscious Mind, Chalmers actually takes these issues on directly. There are, he writes, four relevant premises to consider: “1. Conscious experience exists. 2. Conscious experience is not logically supervenient on the physical (e.g. “identical to” or “emergent from” “physical” facts about spatial–structural relationships). 3. If there are phenomena that are not logically supervenient on the physical facts, then materialism is false. 4. The physical domain is causally closed.” He explains that his own view results from the acceptance of all four premises. We have already seen that this is the combination of premises which results in epiphenomenalism by modus ponens. However, in this section, he presents a subtly more developed argument for why the dualist fares no better than the materialist. Again, he repeats himself: “The deepest reason to reject [interactionist dualism as an approach to resolving the conflict between causal closure of the physical and the irreducibility of consciousness to the physical] is that [it] ultimately suffer[s] from the same problem as a more standard physics: the phenomenal component can be coherently subtracted from the causal component. On the interactionist view, we have seen that even if the nonphysical entities have a phenomenal aspect, we can coherently imagine subtracting the phenomenal component, leaving a purely causal/dynamic story characterizing the interaction and behavior of the relevant entities.” Of course, we have also seen why this is wrong: the ability to create “a story” doesn’t entail that that story would actually be correct.

This time, however, Chalmers addresses this type of response and proposes a counter–argument (which I have partially addressed already, but will now address in more detail): “Various moves can be made in reply, but each of these moves can also be made on the standard physical story. For example, perhaps the abstract dynamics misses the fact that the nonphysical stuff in the interactionist story is intrinsically phenomenal, so that phenomenal properties are deeply involved in the causal network. But equally, perhaps the abstract dynamics of physics misses the fact that its basic entities are intrinsically phenomenal (physics characterizes them only extrinsically, after all), and the upshot would be the same. Either way, we have the same kind of explanatory irrelevance of the intrinsic phenomenal properties to the causal/dynamic story. The move to interactionism … therefore does not solve any problems inherent in the property dualism I advocate.” This would be a viable argument—if panprotopsychism were otherwise a viable choice. But it isn’t—for entirely different reasons.

  _______ ~.::[༒]::.~ _______

5.

In Chalmers’ exploration of the options, after rejecting the reductionist approaches altogether (with arguments I wholeheartedly endorse), he reaches the conclusion that “the best options for a nonreductionist are type–D dualism, type–E dualism, or type–F monism: that is, interactionism, epiphenomenalism, or panprotopsychism.”

We’ve seen already why epiphenomenalism must, necessarily, fail. (I’ll be deducing additional reasons for this failure using Chalmers’ own premises against each other shortly—and proceeding from there to apply the same argument to an even more radical and significant conclusion.) Panprotopsychism, on the other hand, is the idea that consciousness as we experience it in the human mind emerges not from the geometric–structural and spatial–relational mechanically–causally disposing properties of physical entities, but rather from some other intrinsic properties of these entities. Now, the “pan” in “panprotopsychism” means “everywhere.” The “psychism” means “consciousness.” So “panpsychism” means “consciousness everywhere.” But “proto” means something like “precursor.” Thus, there is a distinction in theory between panpsychism and panprotopsychism. Where panpsychism says that consciousness itself must be everywhere, panprotopsychism says that merely the “precursors” of consciousness must be everywhere.

But what are the options for what these “precursor” properties might be? To return yet again to my essay (IV) from this series, “Now, the plainest thing in the world to see is that the question of whether something is an experience or not is absolutely binary: the answer is either “yes” or “no,” and there are absolutely no steps in–between the two. The question of when a pile of sand goes from being a “heap” of sand to becoming a “mountain,” for example, is one that has rough edges: at exactly which point in the process of removing singular grains of sand from a “mountain” has it devolved into a “heap?” At exactly which point in the process of adding singular grains of sand to a “heap” does it become a “mountain?” Reasonable people could disagree, and there is no objective way to determine the answer. Some questions are like this: the question of when a new “species” has evolved has rough edges, and evolution can address the transition from one species to another through the small, gradual steps that are involved without needing to bridge any fundamental gap of absolute difference between an original “species” and a second. But the question of conscious experience is not like this—the difference between something being a subjective experience and something not being a subjective experience is as absolute as absolute can get. There may be various degrees of complexity or sensitivity or detail between experiences, but either something is an experience or it isn’t.

There is no middle ground between the two—but this also means there is no ground that can be covered in any gradual steps as a means of bridging the gap between the two. And there is, therefore, no way to proceed gradually in steps from non–experience to experience. The move from non–experience to experience, if it happens, could only happen as an extraordinary leap across galaxies which happens all in one sudden and dramatic inexplicable move. Leibniz first and most clearly described the problem inherent in this on the record in 1714: “It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception.” But the “pieces which push one against another” that describe Leibniz’ mill are just exactly what describe the essence of the physical entities accepted as the (and the only) basic building blocks of the universe by physicalists—and gradual, almost imperceptible additions of singular (and mechanical) grains of sand at a time are exactly the way evolutionary accounts perform their explanatory work (and the only way that they can).”

The only option for what these “intrinsic properties” might consist of is therefore consciousness itself. Thus, there is no valid way to formulate any working theory of what “panprotopsychism” could possibly mean except for it to boil straight down into full–blown panpsychism, where conscious experiences which are actually, literally experienced actually, literally exist everywhere. Yet, if we take the panpsychist approach to consciousness, then as I explain in my entry (VII) to this series, we simply end up running the same process of reasoning against the relationship between the subjective mental properties and the objective structural properties of the microphysical entities proposed to have conscious experiences on panpsychism that we ran against the relationship between the subjective mental properties of the human mind and the objective structural properties of the human brain to arrive at panpsychism as a potential solution to begin with—and this time panpsychism doesn’t exist as a way out of the problems; only dualism does.

As I explain in entry (VII), the panpsychist cannot say that the subjective mental properties of microphysical entities are “identical to” the objective structural properties of the microphysical entities, for reasons the panpsychist himself already accepts. But even if he did, he would precisely undermine his own reasons for rejecting the ordinary mind–brain “identity theory” in the first place—and the panpsychist suggests panpsychism precisely as a solution to that failure. So the ultimate nature of everything must be either physical or mental (with the other standing in some extraneous relationship to the first). But the panpsychist cannot say that everything at root is mental—because this simply recreates the “Hard Problem of Consciousness” in reverse: namely, this would now leave us having to ask: how do we get physical properties from the mental? And this is every bit as incoherent as trying to imagine how we could get the subjectively experienced qualitative taste of strawberries from blind senseless particles bouncing around in some complex combination. But even if the panpsychist bites this bullet, then once again he would lose any reason to propose panpsychism as a solution to the materialist Hard Problem of Consciousness in the first place. Finally, the panpsychist can adopt the kind of panpsychism which Chalmers flirts with and suppose that the world is composed of universally “physical” entities which possess “mental” properties “on the side,” but this leads to epiphenomenalism. And it turns out that there is no way to be a panpsychist without committing to a premise that one would precisely need to reject in order not to be a reductionist of some kind in the first place. Hence, “panprotopsychism” is out too.
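The preceding paragraph can be compressed into a schematic trilemma (my own summary of it, adding nothing new):

(a) Identify the mental properties of microphysical entities with their structural properties: this revives the very “identity theory” whose failure motivated panpsychism in the first place.
(b) Make the mental fundamental and derive the physical from it: this recreates the Hard Problem in reverse.
(c) Attach mental properties to physical entities “on the side,” leaving the physical causally closed: this yields epiphenomenalism by the same modus ponens as before.

Every branch either reinstates a position the panpsychist has already rejected or collapses into epiphenomenalism.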

Of Chalmers’ three most promising options—dualism, epiphenomenalism, and panprotopsychism—only dualism is left. And interactionism actually does avoid the problems that arise on panpsychism (the only possibly viable physicalist competitor to interactionism, and the position the defender of “the standard physical story” necessarily ends up in, given the strict, sharp binary line between something’s either being an experience of some kind or not—a line that admits of no sensible intermediate steps): it has no need to incoherently try to derive the physical–structural from the qualitative–experiential (as the idealist panpsychist would). It has no need to incoherently try to suppose that the objective/structural and subjective/experiential properties of reality are literally identical with each other (as either the “identity theorist” or the “identity theorist”–panpsychist would). And it does not modus ponens itself into epiphenomenalism—as the “property dualist” and “property dualist”–panpsychist do. Panpsychism still ends up demonstrating why the objective structural and subjective experiential properties of reality (a) cannot be supposed to be either identical to or composed of each other and (b) must necessarily interact in a direct, unmediated, and “basic” (although mysterious) way. So Chalmers’ attempt at a counter–argument—that interactionist dualism would hold no virtues over panpsychism, and so does not really avoid epiphenomenalism in a way that someone sticking to “the standard physical story” can’t—fails, because panpsychism fails for entirely separate reasons (although these turn out, surprisingly upon reflection, to be just the same reasons why the materialist perspectives which panpsychism proposed itself as a solution to failed in the first place). Interactionism remains the only way out of this otherwise inescapable logical quagmire, and we have arrived at it by what I previously, in an early entry, called “a logical, rational, piecemeal divide–and–conquer process of elimination.”

And with the way all of these arguments interrelate now established, I can move on to the more important point of this entry. First, I will adapt the points made in the first section of this entry to show yet another reason why Chalmers’ particular way of trying to make dualism “naturalistic” fails—and yet another reason why interactionist dualism solves the problems inherent in Chalmers’ attempt at an account. Then, I will apply those same points to a slightly separate issue and draw a much more radical and surprising conclusion from relatively simple premises.

_______ ~.::[༒]::.~ _______

6.

Chalmers attempts to make his dualism “naturalistic” by talking about “laws.” On Chalmers’ account, “psychophysical” laws exist in the same way that “physical laws” do. The conclusion of arguments establishing that consciousness cannot be reduced to unconscious processes is that consciousness itself is a fundamental bedrock ingredient of the Universe. Up to this point, I agree—but there is a deep and significant problem with how Chalmers tries to characterize this.

Consciousness, as both Chalmers and I agree, is fundamental. What Chalmers says next is that: “Where we have new fundamental properties, we also have new fundamental laws. Here the fundamental laws will be psychophysical laws, specifying how phenomenal properties depend on physical properties. These laws will not interfere with physical laws; physical laws already form a closed system. Instead, they will be supervenience laws, telling us how experience arises from physical processes. We have seen that the dependence of experience on the physical cannot be derived from physical laws, so any final theory must include laws of this variety….” We’ve already addressed the fact that supposing that these laws “will not interfere with physical laws” locks consciousness–as–such out of the causal nexus—and any premise which entails this result simply refutes itself. But that is not the issue I am concerned with here—the issue I am concerned with here revolves instead around how we are supposed to characterize these “laws”.

Before getting into that, however, let’s take a closer look at what Chalmers says about them in The Conscious Mind. Describing the epistemological process by which we come to understand the world, he writes: “At first, I have only facts about my conscious experience. From here, I infer facts about middle–sized objects in the world, and eventually microphysical facts. From regularities in these facts, I infer physical laws, and therefore further physical facts. From regularities between my conscious experience and physical facts, I infer psychophysical laws, and therefore facts about conscious experience in others. I seem to have taken the abductive process as far as it can go, so I hypothesize: that’s all.” I find no problem in this paragraph. “At first, I have only facts about my conscious experience”—absolutely. And the microphysical facts supposed by the materialist to constitute everything are an inference from this primary fact, not something ever primarily known—“From here, I infer facts about … objects … and eventually microphysical facts…”—absolutely. But is inferring from physical regularities to the existence of “physical laws” the same thing as inferring from regularities in conscious experiences to the existence of “psychophysical laws?” That depends—as we will soon see—on what it means to call something a “law.” It is time to recall our earlier discussion in section 1.

Chalmers’ goal is to prove that “Even if consciousness cannot be reductively explained, there can still be a theory of consciousness. We simply need to move to a nonreductive theory instead.” His unique idea is that “We can give up on the project of trying to explain the existence of consciousness wholly in terms of something more basic, and instead admit it as fundamental, giving an account of how it relates to everything else in the world.” And this project will be “naturalistic” because “Such a theory will be similar in kind to the theories that physics gives us of matter, of motion, or of space and time.” How so? Because “Physical theories do not derive the existence of these features from anything more basic, but they still give substantial, detailed accounts of these features and of how they interrelate, with the result that we have satisfying explanations of many specific phenomena involving mass, space, and time.” So far, so good—and this description so far applies to the approach I have defended in this series. But the following sentence is where the central problem begins to show itself: Chalmers believes that “they do this by giving a simple, powerful set of laws involving the various features, from which all sorts of specific phenomena follow as a consequence.”

Thus, “By analogy, the cornerstone of a theory of consciousness will be a set of psychophysical laws governing the relationship between consciousness and physical systems. … Given the physical facts about a system, such laws will enable us to infer what sort of conscious experience will be associated with the system, if any. These laws will be on a par with the laws of physics as part of the basic furniture of the universe. It follows that while this theory will not explain the existence of consciousness in the sense of telling us “why consciousness exists,” it will be able to explain specific instances of consciousness, in terms of the underlying physical structure and the psychophysical laws. Again, this is analogous to explanation in physics, which accounts for why specific instances of matter or motion have the character they do by invoking general underlying principles in combination with certain local properties. … There need be nothing especially supernatural about these laws. They are part of the basic furniture of nature, just as the laws of physics are. There will be something “brute” about them, it is true. At some level, the laws will have to be taken as true and not further explained. But the same holds in physics: the ultimate laws of nature will always at some point seem arbitrary. It is this that makes them laws of nature rather than laws of logic.” Finally, “…For a final theory, we need a set of psychophysical laws analogous to fundamental laws in physics. These fundamental (or basic) laws will be cast at a level connecting basic properties of experience with simple features of the physical world. … When combined with the physical facts about a system, they should enable us to perfectly predict the phenomenal facts about the system. … Once we have a set of fundamental physical and psychophysical laws, we may in a certain sense understand the basic structure of the universe.”

It should be obvious how closely many of these statements correspond to what I have said in my own recent discussion of the “conceptual” interaction problem in entry (X) of this series—but I think there is something tremendously significant that Chalmers has handled here in a very poor and unclear way. This brings us now to the central question: would “psychophysical laws” truly be analogous to the physical laws whose existence we all accept? That depends on how we characterize what it means for something to be a “law”—on what it means for a “law” to be “part of the basic furniture of nature.” In what sense do we accept that these other “laws” exist to begin with? Note what Chalmers says: “[Psychophysical laws] are part of the basic furniture of nature, just as the laws of physics are.” Is he right? The claim that physical “laws” are “part of the basic furniture of nature” is never explicitly argued for here, but it is an absolutely crucial assumption underlying the analogy. What does it mean to say that “physical laws” are part of the basic furniture of nature?

Recall what we saw in the first section of this entry: theists are well–known for making the argument that if certain kinds of “laws” exist, there would have to be a “lawmaker”—God—to account for their existence. The “vital distinction” is between “a causal law, i.e. one that rules the genesis of events, and an empirical law, one that merely registers their occurrence.” The theist argues that the “laws” of nature are more than our mere recordings of what things do—that they somehow instead “rule the genesis of events” in their own right. And what is the atheist response? The atheist doesn’t deny that “laws” of this sort would entail a “lawmaker.” The atheist rather denies that “laws” of this sort are the kinds of “laws” that our world actually has—and says that when we derive “the laws of nature,” we aren’t discovering some independent thing called a “law” that is actually “governing” the behavior of the universe; we are, instead, merely creating descriptions of what things in the world do in and of themselves, in virtue of their own intrinsic traits. So, as we saw in Bede Rundle’s critique of the argument, for example: “‘the law’ is in no sense instrumental in bringing about accord with it. … What would God have to do to ensure that atoms, say, behave the way they do? Simply create them to be as they in fact are. Atoms having just those features which we currently appeal to in explaining the relevant behaviour, it does not in addition require that God formulate a law prescribing that behaviour.” But contrast this with how Chalmers himself characterizes the nature of his “psychophysical laws!” “We can use Kripke’s image here [to illustrate what the situation is like, if dualism is true]. When God created the world, after ensuring that the physical facts held, he had more work to do! He had to ensure that facts about consciousness held. The possibility of zombie worlds or inverted worlds shows that he had a choice. The world might have lacked experience, or it might have contained different experiences, even if all the physical facts had been the same. To ensure that the facts about consciousness are as they are, further features had to be included in the world.”

If we accept Chalmers’ arguments that consciousness is irreducible to anything other than itself and is therefore fundamental in its own right, then we are postulating Chalmersian “psychophysical laws” precisely because we need the “law” to do something that the actual things in the world do not inherently do in and of themselves by virtue of their own intrinsic traits—but it is exactly for this very reason that the existence of any such “law” is necessarily impossible! Not, at least, so long as we reject the “panprotopsychist” approach, in which the emergence of consciousness would be explained by the “intrinsic properties” of “physical” objects rather than their structural/relational properties—for in that case the “laws” could be construed as describing what these actual properties of actual entities do in and of themselves. But I have explained why I think that option likewise absolutely fails, in section 5.

These are the kinds of “laws” that would exist as “causal laws … [which] rule the genesis of events…” and would therefore require an active power behind the physical world itself which “formulates such laws and ensures that the physical realm conforms to them….” Notice what Chalmers says in discussing “[the] worry … about how consciousness might have evolved on a dualist framework….” The worry, he writes, is that “a new element pop[s] into nature, as if by magic….” However, “… this is not a problem.” Why? Because “Like the fundamental laws of physics, psychophysical laws are eternal, having existed since the beginning of time.” And he wraps his explanation up: “It may be that in the early stages of the universe there was nothing that satisfied the physical antecedents of the laws, and so no consciousness … In any case, as the universe developed, it came about that certain physical systems evolved that satisfied the relevant conditions. When these systems came into existence, conscious experience automatically accompanied them by virtue of the laws in question. Given that psychophysical laws exist and are timeless, as naturalistic dualism holds, the evolution of consciousness poses no special problem.”

But “the fundamental laws of physics” are not “eternal”—and this is not a way that a naturalist is epistemically allowed to think about the nature of “laws!” The “law of gravity” “exists” in the absolutely derivative sense in which “it” can be said to “exist” only just so long as the actual thing we call “space–time” exists with the actual properties residing within that actual thing by virtue of which it demonstrates the patterns of behavior which we label “the law of gravity.” But there is no “law of gravity” whenever a spacetime with those actual properties is not around—cue our quotation from Richard Carrier earlier: “The ‘law’ of gravity … ‘is’ in every place and time where the physical conditions that manifest gravity exist.” The “law” of gravity is not an actual thing, but merely our label—applied after the fact—to the actual thing. So it does not “exist” when the actual thing which intrinsically causes those behaviors to manifest does not exist. And if we suppose that it does, we have to face the argument that a “law” of this sort could only be imposed upon the universe from outside. So likewise, “psychophysical laws” in the purely descriptive–recording sense (the only sense in which the naturalist can accept that any kinds of “laws” exist at all) cannot exist so long as consciousness is not already around for us to describe. Not unless these are the kinds of “laws” which are fundamentally and categorically unlike the other “laws” of nature in that they represent a “governing power” all in their own right!

To explore this, for a moment, even further, I quote from John Foster’s “Regularities, Laws of Nature, and the Existence of God.” Foster writes of a characterization of what it means for something to be a “law” which arguably does not apply to ordinary physical phenomena in general. But where Foster writes of the law of gravitation (which could be construed as holding in virtue of the intrinsic properties which the actually existing structure of the actually existing spacetime fabric in our world actually has), I will substitute talk of psychophysical laws—and it will be clear that Foster’s theistic construal of the nature of laws, which arguably does not apply to “laws” of nature in general, does clearly apply to Chalmers’ formulation of the psychophysical laws: “A [psychophysical] law of nature is a fact of natural necessity—the necessity of [psychophysical relationships] being regular in a certain way. But in exactly what sense does the relevant regularity count as necessary? Well, the first thing that needs to be stressed is that the necessity involved is not a form of strict or absolute necessity. The claim that it is a [psychophysical] law of nature that … [say, the chemical properties of opiates bring about the subjective experience of qualitative “happiness”] … does not imply the absolute impossibility of cases in which this regularity fails: it does not imply that there are no possible situations, of any kind [e.g. in Chalmers’ “possible worlds”], in which [the “psychophysical law”–like relationship between consciousness and physical states of the brain] does not behave this way… the [psychophysical] laws of nature (assuming they exist) are themselves only contingent. The law [which specifies that conscious experiences appear under x physical conditions] holds, let us assume, in the actual world. But we can certainly envisage worlds in which it does not; and, in being able to envisage such worlds, we can also envisage worlds in which, in the absence of this law, the associated regularity does not obtain—worlds in which there are the same intrinsic types of matter as those which feature in the actual world, but in which [conscious experiences do not appear under these physical conditions]. … Being only contingent, [psychophysical] laws of nature are not forms of strict necessity. So in what sense are they forms of necessity at all? We want to say that [consciousness appears under x physical conditions because it has] to. But how are we to construe its having to if there are [possible worlds] where … [it doesn’t]?”

Foster writes here that giving a ‘naturalistic’ account of such “laws” is problematic because they are contingent—they “could have” been different. For a case like gravity, this argument can be undermined by saying that once the intrinsic nature of matter as it actually is in our actual world is what it actually is, and once the intrinsic nature of the fabric of spacetime in our actual world is what it actually is, there is nothing for a “law” to add to the picture. Therefore, no constraint is imposed on the world from outside—the “law” of gravity is fully explained by features contained fully within the actual entities within the actual world—and it can therefore be wholly accounted for by giving an explanation of how matter and spacetime came to possess the actual intrinsic properties which they actually do within the world, with no additional explanation necessary of how this “law” is imposed upon that world from without. But the kinds of “laws” which Chalmers proposes here would require additional explanation after all the actual properties of all the actual things in the actual world have been described (and that is exactly the reason why Chalmers thinks he needs to posit them!) It is unbelievably ironic, in this context, that Chalmers borrows from Kripke the vocabulary of speaking about God having more work to do. He writes, for instance: “In general, if B–properties [first–person subjective facts about qualitative experiences] are merely naturally supervenient on A–properties [third–person objective facts about physical states] in our world, then there could have been a world in which our A–facts held without the B–facts. As we saw before, once God fixed all the [facts about physics], in order to fix the [facts about qualitative conscious experiences] he had more work to do.” Indeed, these actually are exactly the kinds of “laws” that could only be “fixed” by an outside force demanding that the universe behave in this way and not that, despite the entities already existing within the universe simply doing what they do in virtue of their own intrinsic traits—exactly the kind which have always led to the inference to a “lawmaker”—God—in traditional theistic thought.

_______ ~.::[༒]::.~ _______

7.

But suppose you accept everything said up to here. And suppose, to try to come to terms with this scenario and remain as “naturalistic” as possible, you come to accept some kind of multiverse scenario on which a mechanism is responsible for bringing the contingent laws of nature about (or perhaps only the “psychophysical” laws since “laws of nature” in this sense still don’t exist at all)—with the particular form these laws end up taking being explained in terms of the details of whatever this mechanism is. Thus, we can have “laws” imposed on the world from outside—but this “outside” is, itself, another ultimately “physical” blind mechanism. Without considering the whole further extensive background of issues underlying this approach (even at 13,000+ words, I am striving for as much brevity as possible to get to the point!), allow me to assume for the sake of argument here that such an approach is otherwise plausible. The problem then would simply become pushed back yet another step—all the way to the origins of the universe itself.

Suppose we program a computer to blindly and randomly generate mathematical functions. With x’s and y’s and numerical values as the available ingredients on the left–hand side of the equations, we’ll get plenty of x’s and y’s and numerical values as “outputs” on the right–hand side of the equations. But once again, no equation that results from this kind of mechanistic process will ever produce anything like “x·y – 9(x + y) = {the subjective sensation of qualitative blue}”. How could it? The ingredients simply aren’t there in blind mechanism for specifying anything other than more blind mechanisms—and this is exactly the starting point which the “naturalistic dualist” took for his starting position to begin with, so it’s not a premise he can consistently just up and suddenly do away with here.
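To make the thought experiment concrete, here is a minimal sketch of such a blind generator (my own illustrative code—the essay describes no actual program, and every name in it is invented for the example):

```python
import random

# The only "ingredients" the blind process has to work with:
# variables, numerals, and arithmetic operators.
OPERATORS = ["+", "-", "*"]

def random_expression(depth=2):
    """Blindly assemble an expression out of x's, y's, and numerals."""
    if depth == 0:
        return random.choice(["x", "y", str(random.randint(0, 9))])
    left = random_expression(depth - 1)
    right = random_expression(depth - 1)
    return f"({left} {random.choice(OPERATORS)} {right})"

# Every "equation" this produces is built from the same blind ingredients
# that went in. No run of this process, however long, ever outputs a term
# like "{the subjective sensation of qualitative blue}", because nothing
# in the ingredients specifies anything but more mechanism.
for _ in range(3):
    print(random_expression(), "=", random_expression())
```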

Per the “naturalistic dualist’s” own starting premises, this still can’t be a way out. Whatever mechanism of quantum fluctuations at the core of the multiverse generates universes, per the very premises the “naturalistic dualist” started out by accepting, it would have to be capable of knowing what consciousness is in order to specify consciousness within its mechanisms—in other words, it would necessarily have to be conscious (if the anti–reductionist arguments succeed, then you can’t even infer the existence of experiential facts from any number of “physical” facts)—it would have to be more like God than a blind, unconscious quantum Singularity. Hence, even if we could accept a naturalism on which “laws” are truly “ordering powers” in their own right, imposed upon the Universe from without (in some quantum void at the center of the multiverse, say), a blind mechanism for the generation of these “laws” still couldn’t account for a “law” which specifies a relationship between otherwise blind mechanisms and something that is neither identical to nor composed of blind mechanisms at all. To make dualism properly “naturalistic” will require some other approach.

The naturalist can’t have “laws” of this sort at all; and yet, even if he could, he still couldn’t account for how the laws would specify consciousness in particular within their equations if the “lawmakers” were blind mechanisms lacking consciousness—for exactly the same reasons the anti–reductionist arguments pushed him to hypothesize the existence of “psychophysical laws” in the first place. Only God could account for the existence of “laws” of this kind at all; but even if some sort of multiverse–generator could account for them, the “multiverse–generator” would still have to be something more like God than a mechanism to account for how “laws” of this general kind could possibly specify consciousness within their equations specifically.

_______ ~.::[༒]::.~ _______

8.

There is, however, one incredibly ironic but simple approach which I think obviously works.

And that is to suppose that “psychophysical laws” are “empirical laws” which merely consist of us “registering the occurrence” of actual events in the world between actual entities, and not “causal laws” which “rule the genesis of events” and are actually causally “instrumental in bringing about accord with themselves.” How?

By supposing a substantialist view on which, rather than the psychophysical law bringing it about that under x physical conditions, y phenomenal property is contributed to a conscious experience, “the psychophysical laws” are just our descriptions of what things do in and of themselves in virtue of their own intrinsic properties and traits. This necessarily entails that consciousness itself is already a fundamental thing within the world: the so–called “psychophysical laws” are not actual “things” in the furniture of the world—consciousness itself is—and the so–called “laws” are just our after–the–fact descriptions of what we observe of its behaviors in relation to the world.

Again, Bede Rundle: “Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” Likewise, psychophysical laws can describe how consciousness (the subjective stream of intentionalistic thought and qualitative experiences) relates to the rest of the world, but these “laws” are merely our descriptions of consciousness itself—just as the “law” of gravity is merely our description of how the fabric of space–time itself actually behaves, not in virtue of ‘orders’ from some external law, but in virtue of the intrinsic, tangibly existent properties of the actually existent fabric of space–time itself. This clarification of the nature of “laws” can—ironically—save us from the entailment of inescapably theistic conclusions in the case of consciousness just as much as it can in the case of something like gravity. The only reasonable position can be that the “laws” are our description of what the thing which we call “consciousness” does—rather than the reverse, implied by Chalmers’ equivocations around the idea of “law,” that “consciousness” is our label for what the “laws” somehow force to happen.

_______ ~.::[༒]::.~ _______

9.

At least, that could work for what we might call the “day–to–day” operations of consciousness. Once a given stream of consciousness is already in place, we can say that a “law” is merely our description after the fact of how “the conscious self and its brain” themselves interact in virtue of what they themselves intrinsically are. But this brings us back to why the problem Chalmers responded to earlier—“how [might] consciousness [have] evolved on a dualist framework…[?]”—was problematic to begin with: “did a new element suddenly pop into nature, as if by magic?” The problem should now be somewhat more obvious: if the Hard Problem is real, then consciousness is neither “identical to” nor “composed of” mechanical procedures which lack consciousness (a relationship which could also be labeled with phrases like “emergent from” or “reducible to”—phrases which appear to describe a different kind of relationship, but do not, and merely label the same relationship somewhat differently). If that is so, then nothing about the evolution of these mechanical procedures themselves can account for why the entire stream of consciousness suddenly “pops” into being as a “new element … as if by magic.” Nothing happens, in any physical terms, during the series of blind mechanical events between a blind, unconscious sperm making contact with a blind, unconscious egg and the mechanical procedure by which DNA directs the program of building a physical input–output system, which could possibly explain why subjective streams of qualitative experience suddenly appear. If Chalmers is correct up to here (and I think he is, and defend that position elsewhere), nothing about the mechanical programming of DNA as a structural, physical entity can explain why that mechanism, in virtue of its mechanistic properties, brings about a conscious being instead of a human zombie who is just as unconscious as the original sperm and egg, or the microparticles making these up, themselves—and that’s exactly why we thought we ought to consider postulating a new type of extra, additional “laws” to account for why consciousness does appear to begin with. Again, this follows from precisely the same anti–reductionist arguments which we took as our very starting point in the first place: just as the blind mechanism of particles coursing through space cannot be literally identical to, or constitutively compose, a first–person–subjective qualitative experience, so it follows from this very same starting premise that such particles cannot possibly be said to bring the entire stream of first–person–subjective qualitative experiences into being in virtue of what they intrinsically are, in and of themselves—not unless, contra the core starting point of Chalmers’ entire position, conscious experiences and intentionality can be given a “deflationary” explanation and there is no “Hard Problem” after all.

But any “law” of the kind necessary to bring about subjective streams of qualitative experience (and intentionality) in this situation would, once again, be equivalent to a law that says “every time the 8 ball falls into the left corner pocket, an orange angel is born in a newly created Heaven”—it would propose to “connect” events which by all admission simply have no intrinsic connection in virtue of what the entities related by this “connection” actually are in and of themselves—by the very same premises that got us here in the first place. Recall Bede Rundle once again: “Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” Or, if a “law” could do such a thing, then it can’t be the kind of “law” whose existence the “naturalist’s” worldview can with any plausibility allow him to accept.

So it goes, likewise, for consciousness: we couldn’t possibly have that sort of “law” exist in such a way that it itself, as a “law”—as an actual thing “ruling the genesis of events” as an “operative power” in its own right—could suddenly bring streams of consciousness into existence under particular conditions which, in virtue of what they are in and of themselves, have no intrinsic power to do any such thing (this being precisely the reason we saw the need to posit this kind of “law” to begin with), unless a conscious “lawmaker” who “formulates such laws and ensures that the [world] conforms to them” were our explanation for the origins of such a “law.”

However — recall the atheist response to the Kalām cosmological argument which I endorsed as by far the most plausible: what was our most effective option for avoiding the argument that an explanation for the origins of the physical universe would require God? We did it by denying that it is necessary for time to have had an origin—by denying that either cosmological science or philosophical paradoxes render the idea of an infinite past—whose existence we, notably, cannot “empirically” confirm!—invalid or implausible. The same approach is available here.

The same approach which the atheist plausibly takes in the context of the cosmological argument—making the inference to the empirically inconfirmable conclusion that the Universe’s temporal past (defining “Universe” in such a way that it may include much more than our four–dimensional region of it which appears to have formed at the moment of the Big Bang) must be eternal—can be used here to avoid positing the kinds of “laws” that only God could account for: laws postulated to explain the coming–into–existence of uniquely individual streams of experience.

What would the approach entail in this context? It would entail that when we speak of the “psychophysical laws,” in particular, which specify the conditions in which a unique stream of consciousness comes to be—even in these cases we still are simply creating descriptions and labels after the fact about how consciousness itself—in virtue of the actually existing traits of the actually existing phenomenon of consciousness itself—inherently behaves.

In other words, it would have to follow that we are not identifying a “law” which, by its own “ruling” power, brings that conscious stream into being. Rather, for any actually–existing consciousness to interact with the rest of the physical universe at the moment at which a given physical organism and stream of consciousness begin the process of interaction, the stream of consciousness itself would have to pre–exist as an already actually–existent phenomenon, in order that it could be a thing whose behavior our “laws”—which are not antecedent powers but merely consequent descriptions of phenomena themselves—consequently describe. And notice, too, that the same implication would follow with respect to any “psychophysical law” which described the cessation of a stream of consciousness upon biological death of the brain: nothing in these physical events themselves could possibly intrinsically account for the cessation of that stream, any more than the 8 ball falling into the right corner pocket could intrinsically account for a blue angel in an alternate dimension dying. Thus, we could not possibly have the kind of law which would specify that the stream of consciousness must in fact cease at death, either—not without a “lawmaking” God.

If:

(1) laws which truly “govern,” rather than merely describe, the behavior of Nature—particularly when such laws are supposed to include subjective consciousness as parts of their equations—cannot exist without a conscious “Lawmaker”; and

(2) consciousness (first–person subjective qualitative experiences and intentionalistic thought) cannot be reduced to blind physical mechanism (and panpsychism is not a way around the need to get consciousness from blind physical mechanism);

then it follows from these two simple premises that either (A) God exists, or else (B) the stream of consciousness is eternal. For my part, I cannot find any way to plausibly or coherently reject either (1) or (2). This leaves me in the rather awkward position of having to either take an option I find literally incoherent, or else face up to accepting either (A) or (B). Yet, ironically, it seems to me that (B) is where the only salvageable attempt to make dualism “naturalistic” inevitably ends up. This turns out to be exactly the worldview mentioned inadvertently in some of the opening stages of this series—that held by Samkhya–type schools of Hinduism, or by the somewhat more personalist varieties of Buddhism, in which there is a kind of mind–body dualism which allows for the possibility of reincarnation—without theism. Is it possible that these are actually plausible worldviews?
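The structure of the dilemma can be put as a rough propositional sketch (my own formalization, not the essay’s: read D as “governing psychophysical laws exist,” G as “a conscious Lawmaker exists,” and E as “streams of consciousness are eternal”; premise (2), combined with the argument of the preceding sections, is what licenses the second line):

$$
\begin{aligned}
&(1)\quad D \rightarrow G\\
&(2)\quad \lnot D \rightarrow E\\
&\therefore\quad G \lor E \qquad \text{(since either } D \text{ or } \lnot D\text{)}
\end{aligned}
$$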

Now that the groundwork has been laid, get ready for this series to finally start getting truly “crazy.”

Consciousness (X) — The Nature of Scientific “Explanation” and the “Problem” of Interaction, pt.1

[Note: Also a crude rough draft that will eventually need further refinement.]

When we say that scientific analysis of a given physical phenomenon “explains” it, what is it that we mean? Let’s take the analysis of the behavior of “water” in terms of chemistry as a paradigmatic example. When we speak of the behavior of “water,” we have in mind such phenomena as the fact that an object of sufficient density placed on the surface of water will “sink.” Now, to give a scientific “explanation” of this phenomenon is to say that molecules of H2O bond together quite loosely, so that when collections of more tightly bound molecules come into contact with it, these more tightly bound molecules are capable of slipping through the gaps of space between molecules of H2O. [1]

Now, how then are the properties of H2O to be explained? Once again, the behavior of H2O must be “explained” in terms of the structural properties of the composing entities one level down: in this case, atoms. And how are the properties of atoms to be explained? Once again, … you get the picture. Physical explanations work, in essence, by ‘zooming in’ on any given phenomenon and going one level of reality ‘down.’ Eventually, however, it stands to reason that we’re going to have to find the “rock bottom”—the unit which is, finally, indivisible into anything other than itself. For some time, we thought that these would be atoms—“atoms” were given the name, in fact, precisely because they were believed, at first, to be indivisible. However, we later discovered that they were divisible into the particles we call protons, neutrons, and electrons—and with the advent of particle physics, we discovered that protons and neutrons are themselves composed of quarks, the known particles falling into the categories of fermions (which include quarks and electrons) and bosons. Currently, these are regarded as “elementary particles,” which means that we don’t yet know whether they represent “rock bottom” or are composed of still more basic particles—but for now, until we find something more basic, we can only assume that they are, in fact, rock bottom.
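The shape of this explanatory regress is simple enough to mock up in a few lines. Here is a toy sketch (my own illustration, not anything from the literature; the levels are simplified to mirror the paragraph above):

```python
# Each phenomenon is "explained" by the level below it, until the chain
# bottoms out and all we can say is: that's just what it does.
REDUCTION_CHAIN = {
    "water": "H2O molecules",
    "H2O molecules": "atoms",
    "atoms": "protons, neutrons, and electrons",
    "protons, neutrons, and electrons": "quarks and other elementary particles",
}

def explain(phenomenon):
    """Walk down the chain of explanations until we hit rock bottom."""
    while phenomenon in REDUCTION_CHAIN:
        lower = REDUCTION_CHAIN[phenomenon]
        print(f"The behavior of {phenomenon} is explained by {lower}.")
        phenomenon = lower
    print(f"The behavior of {phenomenon} must simply be taken as brute.")

explain("water")
```

Nothing about the length of the chain changes the character of the final line; it only changes where that line gets printed.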

But suppose, as we investigated the properties of water, it had turned out that atoms themselves were simply the “rock bottom”—that atoms themselves had in fact been the ultimate, most indivisible thing in existence. Would that be ‘weird?’ Suppose it turned out that it was H2O. Would we feel like there was some mystery left over that we simply hadn’t, and couldn’t yet, explain about the world? How long would that continue to ‘bug us’ before we would give up and accept that H2O is ‘as far down as it goes?’ We can go even further with this thought experiment: what if it had turned out that the “rock bottom” thing was simply water itself? To say that water itself was the “rock bottom” thing underlying the behavior of water would simply be to say that no matter how much you try to divide water up, all you would get are still yet more units of water. Suppose that had been what we discovered. Would we be unsatisfied? Would we feel that something was fundamentally missing in our ability to use scientific investigation to understand the roots of the properties of water? I think it’s obvious that we would. We would be missing the only kind of “explanation” that science as we know it allows! We would just have to take the way water acts completely for granted, and there would be nothing else left for us to say about it to elucidate anything about “why” it behaves as it does. That would leave us in a particular kind of state of ignorance.

Yet the point we have to realize is that a world in which our ultimate “explanation” of the behavior of water simply did in fact have to stop with water itself is ultimately no different from the world we are in. The notion that the fact that water divides into units of something other than “water” (molecules), and that these units (molecules) divide into units of something other than “molecules” (atoms), and that these units (atoms) divide into units of something other than “atoms” (protons, neutrons, electrons)—and so forth—makes any of it less ultimately inexplicable is simply an illusion. We don’t understand the ultimate nature of our world any more than we would have if water itself had simply turned out to be the most indivisible thing composing “water”—and at least so far as scientific investigation can take us, we never will. Science can explain the relationship between the properties that a given physical entity has and the properties of the component entities that make that first entity up—but whatever the ultimate properties are, they can’t, by definition, be “explained” in this same way. And this is just as true whether the most basic entities are quarks, atoms, or even “water” itself. In the world where the most indivisible units of water turn out to be water, “Why does ‘water’ do what it does?” remains a mystery. In the world where it turns out to be the case that water divides into more primitive units, which we call molecules, “Why does ‘water’ do what it does?” acquires an answer—but only at the expense of leaving us with the question, “Why do molecules do what they do?” which now remains every bit as unanswerable as the question “Why does ‘water’ do what it does?” was in the last world. And in the world where it turns out to be the case that molecules are divisible into more primitive units, which we call atoms, “Why do molecules do what they do?” acquires an answer—but only at the expense of leaving us with the question, “Why do atoms do what they do?” which now remains every bit as unanswerable as the question “Why do molecules do what they do?” was in the last world, and as “Why does ‘water’ do what it does?” was in the world before it.

The point here is much deeper than the mere fact that we can keep asking “why?” indefinitely. The point is that we naively assume that scientific explanation gets us further into “understanding” reality than it actually does. The ultimate situation we are in with regards to understanding reality in any of these scenarios ends up staying essentially exactly the same. If something turns out to be composed of divisible parts that have different properties from the composed whole, then we can “explain” how the properties of the pieces at the lower level necessarily result in the properties that we see at the higher level (e.g., we can learn that the properties of H2O molecules dispose them towards forming weak molecular bonds, and we can explain that these bonds necessarily result in a substance that things can “sink into” as groups of other molecules slip between these gaps even though nothing “sinks into” individual molecules of H2O). And we satisfy ourselves that this counts as some meaningful clarification of the nature of reality itself.

But what we fail to take account of is that if we postulate that all “explanation” must be of the properties of some whole in terms of the differing properties of some underlying compositional parts, then we eventually reach the “rock bottom” behind which there is no underlying compositional part—no matter what. And no matter when we reach this point, the situation is ultimately no different than it would have been had water itself simply been the thing that could not be divided into any more basic units than “water.” I expect that most of us generally feel that if we had been in the world in which “water” was not divisible into anything more basic than “water”—where these most basic parts were clear, wet, allowed other objects to “sink” into them, etc.—and looked, felt, and behaved exactly like the “water” we observe—so that the only thing left for “science” to do was to catalogue the behavioral properties of the same “water” anyone can see with his naked eye, and there was no room for “science” to clarify anything else—that this would be a world in which we “understood” less about the physical reality around us.

But why should the fact that water just so happens to divide into parts with different properties from water itself render our “understanding” of the nature of physical reality any deeper? We still can’t explain the behavior of those parts. If we think that what explanation consists of is reducing the terms of one event seen at one level of “zoom” into the underlying terms of another level, then eventually we reach the level that is supposed to be the one explaining all the rest—and once we get to that one, we’re simply going to have no idea why it behaves the way that it does.

Scientific accounts can allow us to explain a in terms of b, and b in terms of c, and c in terms of d, … but once we get down to z, we have to stop there and take the behavior of z for granted and accept that z must necessarily remain absolutely unaccounted for. This is really not fundamentally any different from the situation in which a is explained in terms of b, and b in terms of c, and c in terms of d … but at d, we have to stop there and take the behavior of d for granted and accept that d must necessarily remain absolutely unaccounted for. And it isn’t really fundamentally any different from the situation in which we simply have to take the behavior of a for granted and accept that a must necessarily remain absolutely unaccounted for. The only real differences between these scenarios are “How big are the most indivisible pieces of reality?”, or “How many times can we zoom in and find parts that individually have different properties from the thing we’re zooming in on?” But no matter how big the indivisible pieces are, and no matter how many times we can zoom in, the best answer “science” can give about the ultimate nature of physical reality is:

John R. Ross, in his 1967 Constraints on Variables in Syntax, tells a version of a story in which: “After a lecture on cosmology and the structure of the solar system, William James was accosted by a little old lady. “Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it’s wrong. I’ve got a better theory,” said the little old lady. “And what is that, madam?” Inquired James politely. “That we live on a crust of earth which is on the back of a giant turtle.” Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position. “If your theory is correct, madam,” he asked, “what does this turtle stand on?” “You’re a very clever man, Mr. James, and that’s a very good question,” replied the little old lady, “but I have an answer to it. And it is this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him.” “But what does this second turtle stand on?” persisted James patiently. To this the little old lady crowed triumphantly. “It’s no use, Mr. James—it’s turtles all the way down.””

Scientific “explanation” of the nature of the physical world is “turtles all the way down”—right up until we suddenly reach the bottom turtle, and find that this one—the one on which all of the others are resting—is simply floating in mid–air, bearing the weight of all the others with the help and support of nothing. This is, quite plainly, no less mysterious than the case in which the only thing we ever had to begin with was a single levitating turtle. In fact, finding more turtles resting on that bottom turtle doesn’t make the question of the levitating turtle less mysterious—it makes it more so, because it not only keeps the question of how the turtle is levitating intact, it adds the question of how all these other turtles could be resting on a levitating turtle with nothing holding up the whole lot of them.

Now, the philosophical position I’ve been striving to give comprehensive defense to throughout this series has been that consciousness—the first–person subjective stream of qualitative experience and intentionalistic thought which we all experience the world exclusively in and through—can neither be eliminated, nor called “identical to” anything other than itself, nor “built into” the physical world along the lines of panpsychism, nor considered causally “epiphenomenal” with respect to the external world. This process of elimination entails that consciousness is therefore both something as fundamentally different from other physical phenomena as gravitational forces are from strong nuclear forces, and yet something which nonetheless somehow causally interacts with at least some of those other forces.

In physics, gravity, electromagnetism, and the strong and weak nuclear forces are called “fundamental forces” because they appear not to be reducible either to each other or to any other more basic kind of force. In other words, when we come to these four kinds of forces, we have to accept that this is simply the point where we must be resigned to say: “shit do what it do”—because there is simply in principle nothing else that can be said. And as we saw above, even if two or more of these kinds of force are reducible to something else, then this can only be in terms of some other kind of force which will now have to be the “basic” one which we cannot in principle provide any account of, beyond simply cataloging what it does and then taking this catalog for granted as a primitive observation about how the world operates. In other words, no matter what the details are, at least something has to be “basic” in the sense that it is both irreducible and ultimately inexplicable (because there is nothing more basic which it can be explained in terms of—that is, “reduced to”).

Furthermore, given our current state of knowledge, we think that there are at least three or four such “basic” kinds of irreducible forces operating in the world. Even if this state of knowledge should eventually be overturned, and all four of these forces unified into a description in terms of some other single underlying force, this still suffices to show that it is not inconceivable that there should be more than one “basic” kind of irreducible force—kinds which differ in fundamental ways and yet still causally interact with each other within the same singular universe.[2]

Now, the “interaction problem”—the question of how a “nonphysical” mind could interact with a “physical” world—is often taken to be the most disastrous, fatal problem for this kind of position. There are more subtle and complicated forms of the “interaction problem” which I may discuss later, but in its most basic form the “interaction problem” aims to reject dualism by simply expressing incredulity that things that seem to be defined by such different kinds of properties could possibly interact with each other, in principle.

Part of the problem here is simply linguistic. Up to the point in history when we discovered that electromagnetic forces were not reducible to terms of atomic interactions, but instead constituted a whole new fundamental category of thing in the world in their own right alongside atomic forces themselves, our definition of what it meant to be “physical” was encompassed by the properties we had noted by observing interactions between atomic particles. When we included electromagnetism into our account, however, we didn’t call electromagnetic forces “nonphysical”—even though it is plain to see that there is in fact a sense of the word “physical” in which electromagnetic forces are “nonphysical!”—rather, we expanded our definition of what it means for a thing to be “physical.” Why shouldn’t we do something similar here? The very word “dualism” itself reinforces the notion that this is off–limits, of course, suggesting as it does a “du–ality”—an etymological root which implies a specifically two–fold distinction—between “things” which are “physical” and “things” which are “non–physical.” But why think of this in terms of “du–ality?” Why not think of it in terms of plurality? In other words, why not think of it as the phenomenon of consciousness and consciousness’ capacity to interact with other parts of the world standing right alongside the phenomenon of gravity and gravitational forces, right alongside atoms and strong and weak nuclear forces, and so on, all as equally irreducible categories of ‘kinds of things the world turns out to contain’ in their own right? Why not think of a multitude of phenomena existing and interacting within the world, each containing different fundamental properties, some overlapping and some not?

The problem of providing a clear definition of what it means for a thing to be “physical” without creating a straw notion of physicalism is often taken as a hurdle against attempts to refute the philosophical position of physicalism—but it is just as much a hurdle against attempts to establish it as a true position over and against alternative positions which could reasonably be called “dualistic.” A definition of “physical” must suffice to rule out the possibility of dualism turning out to be true in a non–question–begging way every bit as much as it must suffice to rule out the possibility of physicalism turning out to be true, and physicalists rarely if ever perform better at this task than dualists. If the physicalist asks the dualist how to define “the physical” such that consciousness couldn’t be a “physical” process without begging the question, then the dualist has every right to turn exactly the same question around and ask the physicalist how to define “physical” such that irreducible consciousness couldn’t possibly qualify as “physical”—and therefore be allowed to exist in the way that the dualist believes it does—without begging the question.

But it should be clear to see, in any case, how thinking of the phenomenon of consciousness in this way renders the “problem” posed by standard forms of the “interaction problem” moot. When it comes to any “basic” irreducible force or phenomenon in the world, the question of “how” it does what it does is always, in principle, necessarily mysterious. How does the gravitational force cause objects to gravitate towards large bodies in space? If your answer is that large bodies in space create curvature in the fabric of spacetime, then the question simply becomes: how does a large body in space cause the fabric of spacetime to curve? Again, the point is not merely that we can keep asking “Why?” forever. The point is that there necessarily must be some actual point at which the only question left to ask literally has no conceivable answer other than “that’s just how we observe that phenomena in the world behave”—at which point the Why–asking must stop because it cannot, in principle, receive an answer—and the only open empirical question left is simply, “When have we reached that point?”—“Is this the inexplicable rock bottom, or does rock bottom lie somewhere further down?” And we reach this point necessarily any time we talk about the most basic actions and behaviors and properties of the most basic kinds of forces and entities in the world—whatever they may turn out to be.

In other words, the kind of account which the physicalist demands the dualist give to justify the claim that interaction between consciousness and the physical world could occur is one that no one can give for any phenomenon in the world whatsoever—yet, if consciousness is indeed itself just such a “basic” phenomenon, this is exactly the situation that should be expected. As James Moreland writes, “One can ask how turning the key starts a car because there is an intermediate electrical system between the key and the car’s running engine that is the means by which turning the key causes the engine to start. The ‘how’ question is a request to describe that intermediate mechanism. But the interaction between [consciousness] and [the brain] may be … direct and immediate. [And if] there is no intervening mechanism, [then] a ‘how’ question describing that mechanism does not even arise.”

The problem with this “intuitive” version of the “interaction problem,” I think, quite simply results from the fact that we can’t take a third–person view on someone else’s consciousness and visualize their subjective intentions playing a causal role in their ensuing physical behavior, in the way that we can at least visualize one billiard ball bouncing into another from the third–person point of view. Yet, I think Descartes himself adequately addressed this version of the problem all the way back in the 1640s: “At no place do you bring an objection to my arguments; you only set forth the doubts which you think follow from my conclusions, though they arise merely from your wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” In other words, we can’t visualize conscious streams of experience existing in any way from the third–person point of view. And the key issue underlying this fact is one that is universal to all positions in philosophy of mind whatsoever—it is, in fact, exactly what makes consciousness seem mysterious in general, no matter what metaphysical view we take towards its ultimate nature: namely, physical properties as such and conscious experiences as such seem wildly unrelated no matter what “theory” we suppose for understanding their relationship.

If the world, at root, is a causally closed process of physical properties following patterns of inert cause and effect on other physical properties, then why the hell should experiences even squirt out of that epiphenomenally? How the hell does anyone ever get the idea in their heads that that in any way doesn’t face an “interaction problem?” It just postulates that the interaction goes in a single direction: from the physical to the experiential. But either interactions between the physical and the experiential can happen, or they can’t. If they can’t, then epiphenomenalism is ruled out as a conceivably true option every bit as much as dualism. And if they can, then there is no reason in principle why dualism couldn’t be true. So anyone who thinks it is even conceivable that epiphenomenalism could be true—and many physicalists are willing to grant that it could be, as a last resort—has no valid recourse to this “intuitive” version of the “interaction problem.” Whether we can only walk across it from left–to–right or we can walk in either direction we choose, a bridge is a bridge. And if we can walk across a bridge from left–to–right, then there can’t be any reason in principle why we couldn’t conceivably walk across it from right–to–left. It’s as if the physicalist who considers the possibility that epiphenomenalism might be true is an atheist who finally goes so far as to say, “Alright, God exists. But God can look into our world from His—we can’t ever travel over to His, in principle! So there still can’t possibly be a Heaven or a Hell!” Would anyone ever consider calling this “Atheism, Or Something Near Enough?”[3] Wouldn’t we think it was demonstrating something about the inherent weakness of atheism itself if atheists were, in any significant numbers, finding themselves compelled to retreat to this kind of position?

What we’re actually dealing with here is simple bafflement upon trying to imagine the two phenomena interacting. But is the idea of your subjective experience of the qualitative taste of a strawberry in and of itself playing a causal role in your later description of what the strawberry tasted like any more difficult or bizarre to imagine than the idea of a series of blind physical particles moving in space literally composing a subjective experience of the qualitative taste of a strawberry? I don’t think so. The intuitive weirdness of interaction can’t be any reason to weigh the scales against dualism if every picture we could possibly imagine of how consciousness and the physical relate is overwhelmingly weird to intuition. We especially can’t do so if we arrived at the hypothesis of dualism by a process of elimination composed of a series of arguments in which we found deductive reasons to reject alternative attempts—which is exactly how I have done it. (So even if my reasoning in those steps turns out to be flawed at some point, those are the steps around which the whole issue pivots—and the “interaction problem” simply contributes nothing new that is important to the question.)

When I observe the properties of gravity, I take it for granted as a brute fact that a given equation describes the relationship between the mass of an object and the gravitational force it exerts—and how gravity causes an object to move remains inexplicable in principle. If I later discover that this works by objects influencing the curvature of space in proportion to their mass, then I take it for granted as a brute fact that an object of a given mass influences the curvature of space in a given degree—how an object of a given mass causes space to curve remains inexplicable in principle. It is, again, not simply that we can keep asking “Why?” indefinitely and eventually have to stop simply in order to move on to doing something with our knowledge—it is that knowledge itself necessarily reaches a brute stopping point in principle as soon as it arrives at the most basic behaviors and properties of the most basic entities that there are, and it simply remains an empirical detail to be fleshed out what these are. 
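For concreteness, the brute regularity in the simplest Newtonian case is the familiar law of universal gravitation (a standard textbook formula, supplied here by me for illustration):

$$
F = G\,\frac{m_1 m_2}{r^2}
$$

The equation specifies exactly how the force scales with the masses m₁ and m₂ and the distance r between them—and says nothing whatsoever about how mass manages to produce the force. No further equation could.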

Supposing I suggest that the first–person subjective phenomenon of consciousness itself is one of these most basic entities, and supposing I suggest that it is a brute fact about consciousness that it possesses the property of intentionality—of being intrinsically “reflective of,” or capable of “representing to itself,” or “directing itself towards,” an external world—and that, as one of the “basic” actions of consciousness, subjectively created and experienced intentions can influence my objective physical behavior: then, so long as I do not try to visualize this in terms that would only be appropriate for mechanical interactions between blind physical particles in the first place, there is simply no real conceptual problem here. The fact that intentions influence behavior should be treated as a basic datum of first–person awareness in just exactly the same way that the fact that the mass of a physical object creates gravitational forces by influencing the curvature of the surrounding space should be, and I no more need “an account” of how the former happens in order to be fully justified in believing that it does than I do in the case of the latter.

And herein lies one significant dialectical difference between interactionism and “identity theories,” etc.: the fact that there is a relationship between physical states and experiences—from the looks of things, in both directions—is a direct datum of experience. The claim that the physical brain, composed of parts which are non–qualitative, non–experiential, and non–intentionalistic, generates or is identical to consciousness itself (qualitative experience and intentionalistic thought) is not. So first, we do not have the same prima facie justification for believing that the production of consciousness by the brain happens (or that consciousness is “identical to” the physical brain) that we have for the fact of interaction in the first place. But second, the physicalist does not have the option of making these “brute fact” posits in the same way that the dualist does—the very definition of the physicalist claim is that the physicalist himself rules this option out: quite simply, consciousness is not a “basic” entity within the physicalist’s scheme, and it therefore cannot be said to possess “basic” properties. So the physicalist actually holds a burden of “explaining” the appearance of consciousness in a way that the dualist does not—because according to physicalism, consciousness is a secondary, derivative phenomenon, which therefore, according to the physicalist scheme itself, must in fact have an explanation in some other terms—hence the Hard Problem. Thus, there is simply no special burden on the dualist to provide an account of interaction—just as there is no special burden on someone who proposes that gravitational forces are the curvature of spacetime to explain how an object’s mass ‘causes’ spacetime to curve. There is a burden to motivate dualism—just as there is a burden to motivate the claim that gravity does in fact work by causing curvatures in the fabric of spacetime—and I think that this burden can be met (as it can for gravity).

Notice that this is exactly why the gravitational force itself is currently considered to be a “basic” fundamental force in the universe, and why science places the burden squarely on whomever wants to propose a “theory of everything” that reduces gravity to some other more basic force: the claim that gravity is reducible needs to be demonstrated before we can be justified in believing it. And until then, there is simply no a priori reason to suppose that it has to be—no a priori reason that gravity can’t just be the “basic” phenomenon itself—so we can’t assume that a “theory of everything” will necessarily succeed until someone actually spells out the details of how the forces we know and currently consider fundamental reduce to something else. This is the most reasonable way to think—yet it is a rule that we violate flagrantly when it comes to consciousness, usually with appeals to some notion of “parsimony.” But—rightly—no one reasons in the same way when it comes to gravity. No one says that it is more “parsimonious” to assume that a “theory of everything” that reduces gravitational forces to some other more basic force must be true. And certainly no one concludes that such a theory must be true merely because the notion of discrete matter interacting with space interacting with time is too weird to accept. The burden is squarely upon the “reductionist” to actually perform the work of demonstrating how gravity “reduces.” And until then, we rightly assume that it doesn’t (pending further notice of an actual proof of how it does).

When the physicalist supposes that consciousness is not in fact a basic phenomenon—that instead of being analogous to the brute existence of gravity as a basic, fundamental force, it is analogous to the behavioral properties of water, which exist solely in virtue of the very different behavioral properties of molecules of H2O (and which water could be described equally well as “identical with,” “emergent from,” or “reducing to”)—he thereby does in fact create a burden of providing an explanation of “how” consciousness appears in the course of this process, because the physicalist himself is the one making the supposition that consciousness appears through some intermediary process. The mechanism of its appearance must therefore be explained—because the physicalist himself, if he is not an outright eliminativist (or panpsychist), is the one supposing that there is a mechanism mediating the process whereby ingredients which are not conscious somehow become conscious, and this mechanism should, therefore, be explicable. It is not at all clear how these unspecified mechanisms (which somehow produce something radically unlike themselves in kind, per any definition of the physical and the experiential besides that of the panpsychist, who supposes that consciousness utterly pervades the physical world, or that of the eliminativist, who supposes that no one actually has any subjective experiences or intrinsically intentional states at all) are supposed to be more “parsimonious” than the posit that consciousness itself is simply fundamental—just as it is clear that a “theory of everything” is not a priori more “parsimonious” than the posit that gravity itself is simply fundamental. Especially so when we have no idea what the supposed mechanisms are, how they work, or how they could even conceivably relate terms so radically different in kind (per everyone’s definition but the panpsychist’s or the eliminativist’s).

But this opens up a further, even stronger argumentative possibility that does not exist against the posit that consciousness itself is a “basic” phenomenon in the world: namely, that we might pose a successful argument to defeat the claim that such a mechanism could ever in principle succeed in doing what is claimed for it, for all the reasons summarized here and explained in more detail in essays (IV)–(VII) of this series—in short, because if the physicalist posits that all that the world contains at root is mechanism, then this supposition is incapable in principle of predicting anything other than further mechanisms—and a description of the qualitative nature of subjective experience and the intentionalistic nature of conscious thought simply can’t be built up to through descriptions of the mechanisms that happen to accompany experience and thought (a summary this brief can’t come anywhere close to doing the argument justice), any more than there is some special way of drawing lines on a flat two–dimensional canvas that can build up to a fully fleshed–out three–dimensional object. In just the same way that the very nature of a three–dimensional object includes a category—the third dimension—that can’t be “reached” in principle through the two dimensions provided by the canvas’ surface, so consciousness includes categories (subjectivity and intentionality) that can’t be “reached” in principle through the mechanisms provided by the physicalist’s ‘objective’ blind mechanical processes. By comparison, there is no reason in principle why interaction between consciousness and the physical world cannot occur—the question commits a category error within its very asking, essentially similar to asking how an object’s velocity at one moment causes it to keep moving through space in the next moment—and it draws its intuitive force simply by asking us to imagine interaction in an inappropriate way: from the third–person perspective, which is exactly where the dualist suggests that consciousness cannot be seen in the first place.

Relocating the act of imagining to the first–person perspective, there is no intuitive problem: I set an intention to move my hands, and they move. This is just as direct a piece of data in my immediate awareness as any data about any external phenomenon like gravity could ever possibly be. Justifying the claim that conscious experiences and physical particles are “identical” would take a lot more than simply insisting that we can’t visualize that sort of interaction taking place from the third–person stance—when the whole core of the dualist insight to begin with is precisely to see that the entire phenomenon of consciousness can’t be found “from the third–person stance” in the first place. If the inability to visualize a process is enough to defeat a position in philosophy of mind, then my arguments against physicalist views become even stronger, because as it is, they’re based on positive arguments that we can’t visualize mind–brain “identity” because the claim is incoherent for principled reasons. If all it takes for a successful argument is difficulty with visualization, then these other arguments go well beyond the burden required of them, and physicalist views would be all the more disqualified at exactly the same time by the same stroke.

The second part of this post will discuss a refined version of the argument against the possibility of interaction which presents a much greater threat—and indeed, may be the one and only real empirical threat that dualism has ever actually faced. In his discussion of the problem interactionism poses, Dennett devotes just two sentences to this aspect of it, though I consider it by far the most significant: “A fundamental principle of physics is that any change in the trajectory of any physical entity is an acceleration requiring the expenditure of energy, and where is this energy to come from? It is this principle of the conservation of energy that accounts for the physical impossibility of ‘perpetual motion machines,’ and the same principle is apparently violated by dualism.” This line of argument actually attempts to take specific principles which seem well justified by our scientific observations of the world and argue that interactionism requires us to haphazardly violate them. A premise like this could actually provide well–motivated grounds for offering specific empirical reasons why a “naturalistic” approach to understanding the nature of human consciousness and the relationship between the “mind” and the brain cannot be “dualistic”—reasons which go beyond mere verbal sleight–of–hand through question–begging definitions of terms like “natural” and “physical.”

We’ll see in a future post how this much more troubling version of the argument fares.

  _______ ~.::[༒]::.~ _______

[1] This is a simplified example, since only one illustration is needed for the purpose of demonstrating the underlying point we’re discussing here.

[2] See here for an overview of scientific attempts to achieve this. There is currently no plausible grand unified theory—and grand unified theories only attempt to unite the electromagnetic force with the strong and weak nuclear forces, while leaving gravity out entirely—so the goal of a “theory of everything” which incorporates gravity into the analysis presently seems even more implausible to achieve. It may eventually happen; but again, there is simply no a priori reason to assume that it necessarily must.

[3] The materialist–turned–epiphenomenalist Jaegwon Kim’s book is titled “Physicalism, Or Something Near Enough.”

New EP: “In Appreciation of Limits”

A lot of this recording is rough, as I’ve been too rushed to really sit down and perfect any of these songs, so I allowed a fuck of a lot of small errors to stay in, and this is basically a way–more–lazy–than–I’d–prefer “first take.”

Honestly, once the first track starts getting weird and too quiet, … just skip to the third.

The first track starts with a brief 40–second sample illustrating my basic style of song–writing: simple guitar parts that overlap in interesting ways to create something that feels more intricate than the sum of its parts. I like the parts to be simple enough that I can hear them all individually, but mesh together in a way that makes listening to all of them at once a more involved, meditative sort of act. Eventually (if the urge strikes) I may expand that basic template into a more developed song. I really, really love the feel of it and I’m probably going to expand on it whenever I get a decent chance to sit down and really work it out. (For now, I’m sending my recorder across the country ahead of me before I take off on my own journey to where it’s headed.) The rest of the track after those 40 seconds verges into an alternate–tuning transcription on acoustic guitar of The Weeknd’s “The Town” (actually, when I recorded the brief demo, I didn’t realize this was on the rest of the track—so that’s why the volume is screwed up—and now I’m out of CD–R’s. I’m sorry. You can hear it well with headphones, though.)

The next two tracks are a two–part series that starts with an… industrial beatbox? intro, and then verges into a creepy, slightly Azam–Ali–inspired vocal–backed spoken word drawn from something I found I had written a long time ago and couldn’t for the life of me remember anything about (I’m guessing I was probably ridiculously high when I wrote it down). The second part takes the template of the rhythm from the spoken word and transforms it into an acoustic instrumental. (At which point it sucks way less.)

I’ll be posting more “EP–style” collections of a few short songs like this at a time in the future.

Listen here: In Appreciation of Limits

Is the War on Drugs Racist? The Surprising Truth Behind the Black Curtain of History

What about the drug war? The notion that the drug war in particular is especially racist is one that is widely accepted across the whole political spectrum. Michelle Alexander, author of ‘The New Jim Crow: Mass Incarceration in the Age of Colorblindness’ writes that “The drug war was motivated by racial politics, not drug crime … [it] was launched as a way of trying to appeal to poor and working class white voters, as a way to say, “We’re going to get tough on them, put them back in their place”—and ‘them’ was not so subtly defined as African–Americans.” Articles in Time Magazine tell us that “black youth are arrested for drug crimes at a rate ten times higher than that of whites. But new research shows that young African Americans are actually less likely to use drugs and less likely to develop substance use disorders, compared to whites….” Even Ron Paul, known for a series of newsletters containing statements like “Carjacking … It’s the hip-hop thing to do among the urban youth who play unsuspecting whites like pianos,” believes it. Quote: “ … minorities are punished unfairly in the war on drugs. … blacks make up 14% of those who use drugs, yet 36% of those arrested are Blacks and it ends up that 63% of those who finally end up in prison are Blacks. This has to change. … We need to repeal the whole war on drugs.”  [1]

Yet, we saw in the last post that there is very good reason to believe that, whatever the ultimate cause, African–Americans are not arrested out of proportion to their actual share of offending for other crimes. Contrary to popular impression, drug policies are responsible for very little—relatively speaking—of the disparity in incarceration rates between whites and blacks. The fact that incarceration rates for crimes involving drugs correspond so closely to these general incarceration rates should, in and of itself, immediately make us skeptical of the claim that African–Americans are imprisoned so disproportionately here. (It will turn out that a surprising hell of a lot is simply wrong about the data underlying this claim.) Stephan and Abigail Thernstrom, in their 1997 America in Black and White: One Nation, Indivisible – Race in Modern America, provide us with some of the relevant numbers: “In 1980, before the antidrug crackdown, African Americans were 34.4 percent of the inmates in federal prisons, and 46.6 percent of those in state penitentiaries. By 1994, their share of the federal prison population had risen only slightly, to 35.4 percent. Among state prisoners, the black proportion rose three points between 1980 and 1993 to 49.7 percent.”

They continue: “The black prison population would be smaller, but not much smaller, if the drug laws were different—if, for example, crack and powder cocaine were treated identically. But a calculation made on data for prisoners newly admitted to penitentiaries in thirty–eight states in 1992 indicates that if the percentage of black men serving drug sentences had been reduced to the figure for white men, the black proportion of the total would have fallen from 50 percent to 46 percent. Again, not a trivial difference, but hardly a monumental one. A similar calculation done for prisoners in federal facilities yields even less of a difference; we could have made the proportion of black males sent to federal prisons for drug offenses in 1994 identical to the proportion among white males by setting free just 855 African–American men, a mere 3 percent of those sent to a federal penitentiary that year.”

The charge that the criminal justice system as a whole is racist, then, simply cannot hang on the rate of arrests for drug use—at worst, drug arrests are an exception to an overall pattern of racially disproportionate arrests which are justified by the racial proportions of crime. But critics who argue that the criminal justice system as a whole is racist almost always extend this argument to the racial breakdown of arrests for crime in general as well, and as we saw in the previous post, this claim is put down by the reports of victims and witnesses—who have every possible reason not to lie—which indicate a larger racial disproportion in rates of crime than is suggested by police arrest rates. The fact that critics who make charges of racism are wrong in the case of all other crimes should make us skeptical of the notion that this is the sole case in which they suddenly have it right, and the fact that the racial proportion of arrests for drug crimes so closely matches the corroborated proportion of arrests for other crimes should instantly make us skeptical that this is the sole case in which the same rates of arrest are suddenly so wildly disproportionate.

  _______ ~.::[༒]::.~ _______

But before we get into the statistics, a timeline of relevant history is in order.

In 1956, the white Reverend Norman C. Eddy of the East Harlem Protestant Parish “opened a store–front drop–in center and a clinic where addicts received a physician’s services, referrals to hospitals, assistance in job searches, psychological counseling, and legal assistance for those facing criminal charges.”

Quickly, the EHPP became overbooked: “with a staff of three … the EHPP storefront recorded visits from 2,175 individual users (or 5 percent of the nation’s addicts, according to FBN statistics).” Faced with these prospects, the EHPP joined with five other programs to become the New York Neighborhoods’ Council on Narcotics Addiction in 1959—and after successfully lobbying state government for additional hospital beds and after–care programs for detoxified users, went on to help pass the Metcalf–Volker Narcotic Addict Commitment Act, signed by Governor Nelson Rockefeller, which allowed addicts arrested for possession to choose in–patient treatment in a state hospital rather than jail. [2] Yet, despite their efforts, “A study of a single block (100th Street between First and Second Avenues) that followed residents over four years found that one–third of the sixty teenagers interviewed in 1965 had become heroin users by 1968.” [2] A staggering half of the entire nation’s addicts lived in New York State by the end of 1963—almost 23,000 of the nation’s 48,000. [3]

Over time, it became more and more apparent that these social programs weren’t resolving the problem. As black citizens continued to take the brunt of the impact of social dysfunction from rampant drug abuse, the tone—driven by black backlash against what was considered to be a failed liberal approach to addiction and crime, and against police apathy towards the plight of black victims of crime and drug abuse—became increasingly militant. One of my primary sources here, particularly for citations of newspaper clippings that can’t be found online, is Michael Javen Fortner, the African–American Assistant Professor and Academic Director of Urban Studies at the Murphy Institute at CUNY SPS. Many of these citations are taken from his The Carceral State and the Crucible of Black Politics: An Urban History of the Rockefeller Drug Laws, which “examines how African American mobilization for greater public safety in Harlem shaped the evolution of narcotics–control policies in New York State from 1960 until 1973,” objecting to “prevailing theories [of the origins of the drug war which overemphasize] … the exploitation of white fears … or the political strategies of Republican political elites,” and “ignore the ‘invisible black victim.’”

In 1961, Mark T. Southall, a member of the Urban League and NAACP, tells a Democratic Party hearing that “[Harlem] is slowly and surely becoming a cesspool of the dreadful narcotics racket … Churches are constantly being robbed by addicts … Ministers and other citizens of the community are being mugged, beaten, and robbed by addicts, who also are guilty of rapes, pickpocketing and many other crimes, daily and nightly.” [4] In 1962, the Reverend Oberia Dempsey leads a seven–week drive to “Urge the president to mobilize all law enforcement agencies to unleash their collective fangs on dope pushers and smugglers … urge Governor Rockefeller to also push a similar crackdown … [and] spur Mayor Wagner and Police Commissioner Michael Murphy to turn loose the city’s police … [on] narcotics dealers.” [5] In 1968, Dempsey (who “always carried a .32–caliber pistol … even in church”) recruits “volunteers from among retired policemen, guards and others who had been trained and held pistol permits” to take immediate action to repel “pushers.” The theory that activists like Dempsey were ‘Uncle Toms,’ Fortner writes in “‘Must Jesus Bear the Cross Alone?’: Rev. Oberia Dempsey and His Citizen’s War on Drugs,” cannot explain his movement’s “grassroots character … the petitions signed by thousands, the marches and rallies, the letters to editors, appearances at hearings, town halls, and emergency meetings.”

An article published by Ebony in 1970 discusses State Senator Waldaba Stewart’s support for “groups in Harlem … known as Black Citizen Patrols. The no–nonsense groups have served notice that “we’re going to have to keep the heat on every spot that’s well–known as a dope drop. … we document an area as a drug drop. Then we turn our report over to the police. If nothing happens, … we barricade the place … Our last step is to have citizen arrests made by our members who are off–duty black policemen.” Similar articles over this period include “Harlem Vigilantes Move On ‘Pushers’,” published in the Chicago Daily Defender on June 23, 1965, and “Addicts’ Victims Turn Vigilante,” published in the New York Times in 1969. [6] The Black Liberation Army ran a campaign to “Deal with the Dealer” by identifying the “hangouts” of prominent drug dealers and manufacturers and raiding them. [7] In some cases, drug dealers were killed—both Assata Shakur and Hubert Gerold Brown, chairman of the Student Nonviolent Coordinating Committee, were involved in trials related to underground attacks on drug activity in black communities. [8] A New York Times report warned in 1968 that Harlem “could become a community of gunfighters, reminiscent of the Old West, if the law failed to protect black citizens from outlaws.”

 _______ ~.::[༒]::.~ _______

In 1970, the Congressional Black Caucus took one of its first formal actions when the 12 black members of the U.S. House of Representatives met with President Nixon under that name, presenting him with a document which outlined 61 recommendations they requested the President consider—an opportunity the members had been requesting since the previous year, when the organization was first founded as the “Democratic Select Committee,” and which they obtained only after having taken the dramatic and unprecedented step of boycotting the President’s State of the Union Address. In the document, they wrote: “We strongly urge that drug abuse and addiction be declared a major national crisis. … Since organized crime is the principal distributive mechanism of hard narcotics, we urge that Justice Department manpower for investigation and prosecution in that area be substantially increased.”

 That same year, Congress passed the Comprehensive Drug Abuse Prevention and Control Act, containing the Controlled Substances Act—“the legal foundation of the government’s fight against the abuse of drugs and other substances… regulating the manufacture and distribution of narcotics, stimulants, depressants, hallucinogens, anabolic steroids, and chemicals used in the illicit production of controlled substances.” Of the ten African–American representatives in Congress at that time, three of the five who voted (Robert N. C. Nix, Sr.; George W. Collins; and Shirley Chisholm) voted ‘Yea.’ (John Conyers and Bill Clay voted ‘Nay.’ William Dawson, Adam Clayton Powell, Jr., Charles Diggs, Gus Hawkins, and Louis Stokes abstained.)

Ironically, it was in 1972 that a group composed mainly of white conservatives recommended the legalization of marijuana. Governor Raymond P. Shafer, chairman of Nixon’s National Commission on Marijuana and Drug Abuse (created by the Controlled Substances Act) wrote in the final report that “Neither the marihuana user nor the drug itself can be said to constitute a danger to public safety … [T]he criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use. … The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance.” Public support for legalization remained under 20% for the most part until 1993, and so far as I can tell there was no significant African–American advocacy either within or outside of Congress for the measure.

However, 1973 was the year a very different law was established in the State of New York—one which did receive widespread and notable support from a large chunk of African–Americans, leaders and public alike—and it marked a significant turning point in the history of the war on drugs. In 1962, it had been New York Governor Nelson Rockefeller who signed the Metcalf–Volker Act in response to the petitions of the Reverend Norman C. Eddy and others, allowing arrested addicts to choose in–patient treatment in a state hospital over jail. Rockefeller had taken office expressing staunch opposition to “[conservative] extremists [who] feed on fear, hate and terror … [and] have no program for America—no program for the Republican Party … no solution for our problems of chronic unemployment, of education, … or racial injustice….”

But in December of 1965, Rockefeller had begun holding meetings with “Harlem officials and a follow–up closed session with an influential group of Negro leaders” to discuss the rising drug problem. In a unity of middle– and lower–class black interests, these “influential group[s]” included members of the St. Philip’s Episcopal Church whose “members were considered ‘the better element of colored people’” [10] as well as members of Salem Methodist, which was described as refusing to cater to “the tastes of the black bourgeoisie.” [11] Less than a month later, when Rockefeller delivered his message to the opening session of the legislature in the new year, his tone had changed: now he spoke of the need “to act decisively in removing pushers from the streets and placing addicts in new and expanded state facilities for effective treatment, rehabilitation, and after care.” [12]

So in 1966, he passed the Narcotic Addict Rehabilitation Act, which for the first time allowed addicts to be compulsorily treated if they had been accused of a crime (and allowed magistrates to compel treatment in a civic center even if they had not)—but even a year after this bill had appropriated $75 million for the creation of rehabilitation centers, the complaints and problems continued. In 1967, residents sought meetings with Police Commissioner Howard R. Leary, complaining that drug–related crime “forced merchants to close their shops early and brought armed civilian patrols into the streets”—while very clearly blaming “addicts for the purse snatchings, the muggings, the burglaries and the beatings.” [13] Still, in 1968, the pastor of Harlem’s Second Friendship Baptist Church estimated that “90 percent of the people refuse to come out at night … even on Sunday…” in fear of drug–related violence. [14]

After analyzing homicide data from 1950–1980, Charles Murray writes that “it was much more dangerous to be black in 1972 than it was in 1965, whereas it was not much more dangerous to be white.”  Then, as now, the majority of victims of minority acts of violence were minorities themselves: “In New York City, seventy percent of the victims of homicides, muggings, and narcotics pushers were African Americans and Puerto Ricans.” [9] In 1970, “Thirty three percent of nonwhites identified drugs and crime as major issues while only 18 percent of the entire sample [skewed by that 33% of nonwhites] mentioned either drugs or crime”. [15] And in 1973, 71% of blacks favored life sentences without parole for “pushers.” [16]

One of Rockefeller’s closest aides and speech–writers, Joseph Persico, tells the story on pp.142–144 of The Imperial Rockefeller: A Biography of Nelson A. Rockefeller of how, in 1972, Rockefeller encountered William Fine, the president of a department store and chairman of a rehabilitation program whose son struggled with addiction, and asked Fine to visit Japan to learn why the nation had one of the world’s lowest addiction rates. In his response to Rockefeller, Fine wrote: “The thing that impressed me most of all is the single minded conviction they have that public interest is above human rights when it comes to an evil. … the human rights of those who get involved in narcotics, or push narcotics, are brushed aside—quickly, aggressively, and with little or no recourse…It is incredible to me that they have had such success, but then, it really all comes down to what people are willing to give up to get, and the Japanese, obviously, were willing to give up the soap box movement on human rights in order to rid the public of the evil abuses of drugs.”

And so it came to pass that in 1973, Governor Nelson Rockefeller’s drug laws were passed in New York, marking a dramatic change in the history of the “war on drugs”—they were the first to promote harsh penalties and mandatory minimum sentences for possession. As this insightful paper notes, “Governor Nelson Rockefeller did not root his campaign for harsh new drug laws in the politics of white racial backlash. Instead, he championed the laws by publicizing their endorsement by several African American community leaders from Harlem”—such as those covered by the articles in Ebony magazine in 1970 and the Chicago Daily Defender in 1965. Whereas the paper notes that “… liberals and Democrats were equal partners in embracing and promoting law and order in the 1960s and 1970s and creating the laws that led to mass incarceration,” the Wikipedia entry lists libertarian economist Murray Rothbard, conservative public intellectual William F. Buckley, and “many in law enforcement” (along with civil rights activists) as some of the most notable opposition.

In fact, when William F. Buckley held a debate against the war on drugs in 1991, his opponent was Charles Rangel—a black Democrat representing Harlem. In the debate, Rangel asks Buckley: “Why is it that when we talk about this drug problem … you put on blinders, and you find … one of the things that is not working … Why do you just say ‘legalize?’ Why don’t you talk about education? If we were not making progress in the Middle East, because the Army was not moving forward, but the Air Force was actually doing a tremendous job, would you say ‘eliminate the Army?’” Two years earlier, in 1989, Rangel had been profiled by Ebony magazine, which called him a “front–line general in the war on drugs”—and the article quotes him as condemning what he called Nixon’s “lackadaisical attitude” towards drugs.

The response of Glester Hinds, the head of Harlem’s People’s Civic and Welfare Association, to the new law? “I don’t think the governor went far enough … his bill [should include] capital punishment because these murderers need to be gotten rid of completely. Yet because of the bleeding hearts that we have, the legislators try to be pacifistic in having laws that do not work.” When NYU Law School Professor and former NAACP staff attorney Leroy D. Clark spoke against the measure, he acknowledged that he spoke against a large percentage of the black community: “…[We] must be vigilant and keep our eye on what may be someone’s hidden agenda … I ask for a restraint, which our communities now do not feel … because they feel the community is being immobilized by the addict.” [17]

As Michael Javen Fortner writes in Invisibility and Imprecision in the Historiography of Mass Incarceration, “Although blacks constituted 14% of New York City’s population in 1960 and around 19% in 1970, they constituted a disproportionate share of deaths due to drugs, representing anywhere from 50% to 60% of all such deaths from 1960 until 1973. In fact, this rate dips below 50% only after the passage of the drug laws in 1973.”

_______ ~.::[༒]::.~ _______

A key thread running throughout much of black sentiment across these periods was that society demonstrated its racism by not giving enough of a damn to stop the epidemic—by not caring enough to help black victims—exactly the opposite of how the situation is viewed today. In 1970, an Ebony magazine piece covering grassroots efforts to fight drug use in black communities notes that Mothers Against Drugs—“which urges community people to record the names, addresses, and license plate numbers of known traffickers, suppliers, and pushers”—sends this information directly “to the district attorney’s office,” pointedly skipping over local police out of the resentful belief that police “simply don’t care about drugs in black communities.”

It opens in the first paragraph with this quote from a grieving mother: “You know the best way to deal with the dope problem? Get as many white kids on it as possible! The best news I’ve heard in a long time is that more white kids are getting hooked on heroin. If I had the money I’d buy it and give it to them free!” 

The Knapp Commission provided some empirical support for this perception when, in 1972, it investigated police corruption and concluded that the biggest problem was with the “overwhelming majority … who accept gratuities and solicit five– and ten– and twenty–dollar payments … but do not aggressively pursue corruption payments.” The report noted that: “At the time of the investigation certain precincts in Harlem … comprised what police officers called “the Gold Coast” because they contained so many payoff–prone activities, numbers and narcotics being the biggest.”

 _______ ~.::[༒]::.~ _______

The story still wasn’t over.

The late 1970’s and early 1980’s saw the rise of crack cocaine, and a rise in drug–related crimes once again came with it.

Alfred Blumstein and Joel Wallman write in their 2006 volume, The Crime Drop in America, that “A focus on New York City is easily justified by its bellwether role in national drug and violence trends and its hugely disproportionate numeric weight in those trends.” In chapter 6, “The rise and decline of drugs, drug markets, and violence in New York City” (pp.164–206), the authors document that the epidemic in New York City “peaked” between 1987 and 1989—when 70% of all arrestees tested positive for either crack or powder cocaine in urinalysis. Across the fifteen years between 1960 and 1975, there were an average of just 1,066 murders per year. But from 1975 to 1986—the year the first major piece of federal drug war legislation was passed—there were an average of 1,941 murders per year, almost twice as many. “U.S. Sentencing Commission statistics show that 29 percent of all crack cases from October 1, 2008, through September 30, 2009, involved a weapon, compared to 16 percent for powder cocaine;” and it is plausible, especially in light of all the other facts listed here, that this association also held in the past.

In 1982, the Congressional Black Caucus released the “Black Leadership Family Plan for the Unity, Survival and Progress of Black People.” The document, penned by civil rights icon and DC representative Walter Fauntroy—who led the prayer at Dr. Martin Luther King, Jr.’s funeral—includes criticism that “diminished drug enforcement increases [black youth’s] vulnerability to drug abuse” and warns that the “incidence of crime in black communities is increasing because of intentional and unintentional failure on the part of law enforcement agencies to provide adequate protection”—finally urging police, once again, to “increase drug enforcement efforts.”

Ta–Nehisi Coates, in The Beautiful Struggle: A Father, Two Sons, and an Unlikely Road to Manhood, discusses (on pp.29–30) his own recollection of the time period: “When crack hit Baltimore, civilization fell. Dad told me how it used to be. In his time, the beefs were petty and stemmed from casual crimes. … The bad end of a beef was loose teeth and stitches, rarely shock trauma and “Blessed Assurance” ringing the roof of the storefront funeral home. … The world was filled with great causes … But we died for sneakers stitched by serfs, coats that gave props to teams we didn’t own, hats embroidered with the names of Confederate states. I could feel the falling, all around. The flood of guns wrecked the natural order.” In 1987, two veteran civil rights activists, Reverend Hosea Williams and comedian Dick Gregory, began a 40–day fast, camping alternately outside the White House, U.S. Capitol, and New York Stock Exchange to protest drug abuse and “send a telegram to President Reagan asking him to commit more Federal money to the fight against drug abuse.” In a 1986 speech, Fauntroy had already declared that “Drugs—and now ‘crack’—are indeed the source of threat to all civilized society and each of us must accept 100% of the responsibility for eliminating this threat in our midst….” And it was in 1986 that the first major piece of federal drug war legislation, The Anti–Drug Abuse Act, created the well–known 100–to–1 crack–cocaine sentencing disparity.

Returning to America in Black and White, the Thernstroms write: “Critics of the war on drugs … allege [that this policy was] blatantly racist, because crack tends to be used by blacks and powder cocaine by whites. If so, it is certainly peculiar that the Congressional Black Caucus backed the law, and that some of its members proposed even tougher penalties on crack. They knew that crack was much more common in black neighborhoods than in white ones, and that more blacks than whites were likely to be incarcerated as a result of the change. And in fact, that was precisely their reason for supporting the legislative change: a conviction that it might reduce the havoc on the streets where their constituents lived.” Of the black members of Congress at this time, sixteen are listed here as co–sponsors of the bill: Charles Hayes; Alton R. Waldon, Jr.; Parren Mitchell; Charles B. Rangel; Harold Ford, Sr.; Julian C. Dixon; William H. Gray III; Mickey Leland; Mervyn M. Dymally; Major R. Owens; Edolphus Towns; Bill Clay; Cardiss Collins; Ronald Dellums; Louis Stokes; and Walter Fauntroy himself. Only Gus Savage, Alan Wheat, George W. Crockett, Jr., and John Conyers fail to make the list—whether by ‘Nay’ or abstention is unclear.

The years 1975–1986, as previously noted, saw an average of 1,941 murders per year in New York City. But by 1995, the number had returned close to the earlier rate—1,177—and it has continued falling since then, with an average between 1995 and 2014 of just 634 murders per year. In fact, the years 2013 and 2014 each saw a total of fewer than 328 murders.
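The ratios behind “almost twice as many” and the subsequent decline can be checked directly; a minimal sketch using only the period averages quoted above (Python; the underlying yearly counts aren’t reproduced here):

```python
# Average murders per year in New York City, per the figures above.
avg_1960_1975 = 1066   # pre-epidemic baseline
avg_1975_1986 = 1941   # heroin/crack era
avg_1995_2014 = 634    # post-decline average

# The epidemic-era average vs. the baseline: ~1.82x, "almost twice."
print(round(avg_1975_1986 / avg_1960_1975, 2))

# The post-1995 average vs. the epidemic era: a ~67% decline.
print(round(1 - avg_1995_2014 / avg_1975_1986, 2))
```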

 What happened?

Franklin Zimring, professor of law and chairman of the Criminal Justice Research Program at the University of California at Berkeley, discusses the rise in crime experienced during the second half of the 1980’s, and the drop in crime experienced during 1990–2000, in The City That Became Safe: New York’s Lessons for Urban Crime and Its Control. Notably, Zimring is no “law–and–order” conservative—his 2003 book The Contradictions of American Capital Punishment notes that the death penalty has been most actively used by the same states in which the most lynchings historically occurred. The City That Became Safe argues against the infamous “broken windows” theory, advanced by conservative social scientist James Q. Wilson, that harsh treatment of low–level offenses was responsible for drops in crime across this period of time. He critiques Wilson’s claim (on pp.83–87) that an increase in the youth population would lead to proportionate increases in crime.

He also argues (pp.90–99), correlating hospitalizations and deaths from overdose with changes in the known street price, that overall use of cocaine appears to have remained relatively constant [Update 4/20/2016: However, see this footnote] across the period of time in which New York City’s crime drop took place. Yet, he notes (pp.91–92) that “The peak rates of drug–involved homicide occurred in 1987 and 1988”—the same years in which 70% of arrestees were found to test positive for cocaine—“and the drop in the volume of such killings is steady and steep from 1993 to 2005. … The volume of drug–involved homicides in 2005 is only 5% of the number in 1990.” Meanwhile, whereas 70% of arrestees in the late 1980s tested positive for cocaine, by 1991 (see table 2 on page 14) this number hit a low of 62%—and in 1998 it had fallen all the way to 47.1%. By 2012 (see figure 3.7 on page 45) this number fell even further, to 25%.

What happened here? Why would drug use amongst arrestees fall if drug use as a whole remained constant? Zimring has an important answer: “If I’m a drug seller in a public drug market and you’re a drug seller in a public market, we’re both going to want to go to the corner where most of the customers are. But that means that we are going to have conflict about who gets the corner. And when you have conflict and you’re in the drug business, you’re generally armed and violence happens. … Policing … [helped drive] drug trade from public to private space. … [this] reduced the risk of conflict and violence associated with contests over drug turf. The preventive impact [of these policies] on lethal violence seems substantially greater than its impact on drug use. … [And] once the police had eliminated public drug markets in the late 1990s, the manpower devoted to a special narcotics unit [whose funding had increased by 137% between 1990 and 1999] dropped quite substantially [and yet the policies’ impacts on homicide rates remained].”

Critics of the drug war often imply that drug–related violence is a result of the criminalization of drugs creating black markets. The history of New York seems to suggest exactly the opposite: drugs created drug–related violence and turf wars; and the existence of these is exactly why black victims of drug–related violence agitated originally for increased penalties towards drugs.

Furthermore, this fact gives us one reason minorities may be legitimately arrested in disproportionate numbers for possession of drugs even if total rates of use are in fact constant: minorities are disproportionately involved in public drug trades, where violence and turf wars are more likely to occur. In rejecting Wilson’s “broken windows” theory, Zimring emphasizes another important way that drug policy impacted crime: “Marijuana was not a priority of the New York City police, yet they had a huge number of public marijuana arrests. Why was that? That was because they were only arresting minority males who looked to them like robbers and burglars and they used as a pretext the less serious crime arrest to find out whether the particular person they were arresting had a warrant out for a felony and was a bad actor. … The good news is that drug violence went down tremendously. There are a couple of different ways in which the police department measures the number of killings associated with drug traffic in New York; both of those measures that they use are down more than 90 percent so that the streets themselves have been changed, people can walk there, and the number of dead bodies associated with illegal drug traffic has gone way, way down.”

Regarding marijuana arrests, Zimring notes that “While the gender distribution of marijuana users is close to 50–50, the gender distribution of arrests is 93% to 7%”—which parallels the disproportionately male gender distribution of crime. In other words, the gender distribution of marijuana arrests and the gender distribution of crime parallel each other in the same way that the racial distribution of marijuana arrests and the racial distribution of crime do—and no one ever assumes that “stop and frisk” policies are an expression of anti–male, or misandrist, gender bias. Zimring concludes: “This is only circumstantial evidence that the police are going after robbery risks, but it is conclusive evidence that they aren’t trying to go after marijuana as a threat to the quality of life.” Indeed, even critics of these policies acknowledge that “Marijuana stops are more prevalent in precincts where… ‘high–crime area’ justifications are more likely to be reported….” Critics may be right that marijuana stops are only a “pretext” for the real reasons for the arrest—but the real reasons just might be valid suspicion of crime, which correlates with race simply because different racial groups do in fact commit different proportions of the total amount of crime—not bias on the basis of race alone, any more than the policy’s gender imbalance proves that it is a simple pretext for targeting men because society despises masculinity. If drug arrests take place on valid grounds of suspicion of criminal behavior, then this may, in fact, be one valid reason for the racial percentages of drug arrests to exceed the racial percentages of drug use even if the former is disproportionate to rates of personal use.
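To make the parallel concrete, here is a minimal sketch of the over–representation arithmetic, using only the shares quoted above (Python, purely illustrative):

```python
# Shares quoted above: marijuana users split roughly 50/50 by gender,
# while marijuana arrests split 93/7 male-to-female.
user_share = {"male": 0.50, "female": 0.50}
arrest_share = {"male": 0.93, "female": 0.07}

# Over-representation factor: a group's share of arrests divided by
# its share of users. A factor of 1.0 would mean arrests mirror use.
for group in user_share:
    factor = arrest_share[group] / user_share[group]
    print(group, round(factor, 2))  # male ~1.86, female ~0.14
```

The argument in the text is that this same calculation, run on racial shares of arrests against racial shares of crime rather than shares of use, is the comparison that actually matters.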

 _______ ~.::[༒]::.~ _______

That brings us to the question of the policies known as “stop and frisk.”

On Tumblr, the author of ‘Racism Still Exists’ gives us the usual story—black people are stopped and frisked out of proportion to their share of the population, and that disparity is all it takes to reach a conclusion of racism: “Black people comprise 26% of the city, but they are 52% of those who are stopped. On the other hand, White people are 47% of the population, but they are only 9% of those who are stopped.” What goes ignored in this comparison is, as usual, the actual murder and crime rate—quoting Heather MacDonald: “Blacks are 66 percent of all violent–crime suspects, according to the victims of and witnesses to those crimes. Blacks commit around 70 percent of all robberies and about 80 percent of all shootings in the city. Add Hispanic shooters, and you account for 98 percent of all shootings in the city. Whites, by contrast, were only 5 percent of all violent crime suspects in 2011. According to victim and witness reports, white suspects commit barely over 1 percent of all shootings and less than 5 percent of all robberies.” Thus—once again—the actually relevant comparison suggests that it is whites who are “victimized” disproportionately relative to their share of crime. If we call a suspect who is unlikely to actually be involved in a crime an unjustified suspect, then there are in fact statistically more unjustified white suspects than there are unjustified black suspects inconvenienced by “stop and frisk” policies.
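A minimal sketch of the two competing baselines, using only the percentages just quoted (Python, purely illustrative):

```python
# Population share, stop share, and violent-crime-suspect share
# (the suspect shares come from the victim/witness reports quoted above).
data = {
    "black": {"population": 0.26, "stops": 0.52, "suspects": 0.66},
    "white": {"population": 0.47, "stops": 0.09, "suspects": 0.05},
}

for group, d in data.items():
    # The usual comparison: stops measured against population share.
    vs_population = d["stops"] / d["population"]
    # The comparison argued for here: stops measured against suspect share.
    vs_suspects = d["stops"] / d["suspects"]
    print(group, round(vs_population, 2), round(vs_suspects, 2))

# black: 2.0x their population share, but ~0.79x their suspect share
# white: ~0.19x their population share, but 1.8x their suspect share
```

On the first baseline, blacks look over–stopped; on the second, it is whites who are stopped at nearly twice their share of violent–crime suspects.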

The author also tells us, citing this New York Times article, that “In Brownsville, residents stated that they were frequently stopped and/or ticketed for entering their own or friends’ homes in public housing because they did not use a key—but that was because the front door lock was broken.” Of course, Brownsville’s demographic is 76.7% black and only 2.6% white. But the author fails to mention that Brownsville ranks 69th out of all 69 neighborhoods ranked in New York for murder. Kensington, a neighborhood in Buffalo, has the lowest murder rate and a similar racial demographic spread: 82.3% black and only 11.5% white. Yet just 2% of the population of Kensington is stopped and frisked, compared to 29.1% of the population of Brownsville. The chart on page four makes it clear that Brownsville has the highest, and Kensington the lowest, stop–and–frisk rates of all the neighborhoods ranked. Once again, this corresponds exactly to the murder rate, which is highest of all in Brownsville and lowest of all in Kensington. The rate does not increase in Kensington because it has a higher proportion of black residents—it falls, because it has a lower rate of murder and crime. (Intriguing note: Jewish neighborhood watch groups called Shomrim are known to conduct patrols in Kensington, and organized a 5,000–person volunteer search for a missing boy in coordination with police in 2011—then sought out and successfully found his killer.)

During the 60’s, 70’s and 80’s, police apathy towards minority victims of minority crimes was the target of accusations of racism. Today, we see that police enthusiasm, too, is unacceptably racist. But if “stop and frisk” policies work, then their primary beneficiaries are, in fact, minorities. Just as minorities commit a disproportionate amount of the United States’ crime, so too are they disproportionately the victims of it. Heather MacDonald writes: “Blacks, for example, constituted 78% of shooting suspects and 74% of all shooting victims in 2012, even though they are less than 23% of the city’s population. Whites, by contrast, committed just over 2% of shootings and were under 3% of shooting victims in 2012, though they are 35% of the populace. … Minorities make up nearly 80% of the drop in homicide victims since the early 1990s.”

Do “stop and frisk” policies actually work? In The City That Became Safe, Franklin Zimring notes that there were, at the time of his writing, no studies that adequately controlled for the other policies he documents which changed across the same period of time. Today, however, Colin Lubelczyk writes: “The only study that explicitly poses the question “Does stop and frisk stop crime?” was an unpublished paper by Robert Purtell and Dennis Smith that relied on monthly precinct level data from New York City from 1997 to 2006. After controlling for a large number of variables including the effects of hotspots policing, Purtell and Smith found that stop and frisk helped reduce robbery, burglary, murder, and grand larceny … While other researchers have looked into similar questions as the one posed by Purtell and Smith, they do not isolate stop and frisk as a variable and instead combine its effects with other police strategies like 1) firearm reduction, 2) hot spots policing, and 3) order maintenance policing.” As Heather MacDonald concludes, “To be sure, thousands of innocent New Yorkers have been questioned by the police. Even though such stops may have been justified given the information the officer had at the time, they’re still humiliating and infuriating experiences. But if the trade–off is an increased risk of getting stopped in a high-crime neighborhood versus an increased risk of getting shot there, most people would choose the former.”

 _______ ~.::[༒]::.~ _______

In his discussion of claims that mass incarceration is ‘the New Jim Crow’ in “Racial Critiques of Mass Incarceration: Beyond the New Jim Crow,” James Forman, Jr.—son of James Forman, the civil rights leader, Student Nonviolent Coordinating Committee veteran, and Black Panther member; maternal grandson of the 1960s investigative journalist and active Communist Party member Jessica Mitford—writes, “One of Jim Crow’s defining features was that it treated similarly situated blacks and whites differently. … But violent crime is a different matter. While rates of drug offenses are roughly the same throughout the population, blacks are overrepresented among the population for violent offenses. For example, the African American arrest rate for murder is seven to eight times higher than the white arrest rate; the black arrest rate for robbery is ten times higher than the white arrest rate. Murder and robbery are the two offenses for which the arrest data are considered most reliable as an indicator of offending.” His purpose here isn’t to point a finger at drug arrests per se so much as it is to discuss the phenomenon of “mass incarceration” on its own terms, but he goes on to distinguish drug arrests from arrests for all other crimes: “Because the [Jim Crow] analogy leads proponents to search for disparities in the criminal justice system that resemble those of the Old Jim Crow, they confine their attention to cases where blacks are like whites in all relevant respects, yet are treated worse by law. Such a search usefully exposes the abuses associated with … the drug war,” although “it does not lead to a comprehensive understanding of mass incarceration.”

Yet, we have seen several reasons to be skeptical of these claims: first, the percentage of minorities arrested for drug–related crime is not different from the percentage of minorities arrested for violent crimes in general—and as we saw in the last entry, “Are African–Americans Disproportionately Victimized by Police?”—and as Forman, Jr. agrees—we have overwhelming reason to believe that these arrest rates are in fact not the result of racism, but a simple, direct response to the percentage of crime African–Americans actually commit: victims report a higher percentage of African–American perpetrators than are arrested by police. It would be incredibly strange, then, if drug arrests were the sole category in which African–Americans are suddenly disproportionately arrested. If racism is the cause of this disproportionality, then why aren’t African–Americans arrested disproportionately to their actual crime rate for violent crimes? Why would this racism suddenly appear in the sole case of drug use, and vanish as soon as a black person burglarizes a home (as we saw in the last entry, “victims tell police that 45 percent of the perpetrators were black, but only 28 percent of the people arrested for that crime were black”)?

Furthermore, it was African–American victims themselves who historically led the most notable efforts to increase the attention law enforcement gave to the epidemic of drug–related crimes, and whose efforts underlay the most significant historical changes in public drug policy—and racism was then alleged on the basis that law enforcement, and white society in general, didn’t give enough of a damn to do anything about these problems because they weren’t the ones having to deal with the consequences.

Where have we gone wrong?

The problem is this: we’ve acquired our estimates of who uses what drugs how frequently by self–report.

Self–reports are notoriously inaccurate. People often have incredibly poor memories—oftentimes, they’re even dishonest. Frequently, they report what they want to believe instead of what actually happened. Reliance on inaccurate self–reports of dietary intake is, in fact, one reason that dietary recommendations are so often contradictory over time. 60% of people who call themselves “vegetarian” ate a hamburger within the last 24 hours. A 2015 paper in the International Journal of Obesity writes of the reliance on self–reports in obesity research that “[The data] are so poor as measures of actual [energy intake] and [physical activity] that they no longer have [any] justifiable place in scientific research.” In an announcement from 17 members of the American Society for Nutrition titled ‘Self–report–based estimates of energy intake offer an inadequate basis for scientific conclusions,’ the authors write that “the magnitude of the bias may even have increased in recent years”—“motivated,” as they put it, “by social desirability”: in other words, because people say what they want to believe. And people apparently want to believe that they’re eating less now even more than they did in the past. Other research finds that people with a “diagnosed medical condition” are much more likely to over–report their meat intake—a fact that may have caused meat intake to become more associated with illness in epidemiological studies than it really is, by sheer statistical fluke. Women were even found to be more likely to under–report their meat and calorie intake than men (a fact which had been studied elsewhere).

We shouldn’t take self–reports about diet naively for granted. In fact, many argue that we should recognize that they’re damn well outright useless—and it is even well–documented here that demographic characteristics such as illness and gender influence an individual’s accuracy in reporting. Why should we take self–reports for granted in the case of drugs?

In fact, we have studies establishing that demographics correlate with accuracy in self–reported drug use as well.

A 2008 “Comparison between self-report and hair analysis of illicit drug use in a community sample of middle-aged men” determined that “Discrepancies between biological assays and self-report of illicit drug use could undermine epidemiological research findings. … Male participants followed since 1972 were interviewed about substance use, and hair samples were analyzed …. Self-report and hair testing generally met good, but not excellent, agreement. Apparent underreporting of recent cocaine use was associated with inpatient hospitalization for the participant’s most recent quit attempt, younger age, identifying as African–American or Other, and not having a diagnosis of antisocial personality disorder. … African–Americans in comparison to Caucasians who were urine positive were about 6 times less likely to report cocaine use when other factors are controlled for.” 

A 2005 study, “Race/Ethnicity Differences in the Validity of Self–Reported Drug Use: Results from a Household Survey,” found “evidence that compared with other groups, African Americans may provide less valid information on drug–use surveys. The findings suggest that African American respondents had significantly lower concordance rates. … Mediation was found in one model (cocaine) for one variable (SES), which may suggest some limited support for the cultural deficit model. Nevertheless, the finding that SES was not a consistent mediator of underreporting … [and] in general, none of the theories of mediation received strong support from this evaluation. Overall, the results replicate and extend a growing body of research suggesting that African Americans under–report substance use on surveys.” The 2005 study, in other words, found that even socio–economic status did little to diminish the impact of race on mis–reporting drug use in surveys. As for the raw numbers, “without mediating effects entered, compared with African Americans, Hispanics have two and one–half times the odds of providing concordant responses … and Whites have over 25 times the odds of providing concordant responses.”
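Since “25 times the odds” is easy to misread as “25 times the probability,” a quick worked example may help. The 60% baseline below is hypothetical, chosen only to illustrate the odds–to–probability conversion (Python):

```python
# "Odds" are p / (1 - p), where p is the probability of a concordant
# response (the self-report agreeing with the biological test).
def odds(p):
    return p / (1 - p)

def p_from_odds(o):
    return o / (1 + o)

# Hypothetical baseline: 60% concordance -> odds of 1.5.
baseline = odds(0.60)

# 25x those odds (the figure quoted for whites vs. African Americans)
# corresponds to ~97.4% concordance, not to "25 x 60%".
print(round(p_from_odds(25 * baseline), 3))   # 0.974

# 2.5x those odds (the figure quoted for Hispanics): ~78.9% concordance.
print(round(p_from_odds(2.5 * baseline), 3))  # 0.789
```

The point is only that large odds ratios compress toward 100% probability; the study’s actual concordance rates are in the paper itself.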

In fact, the findings of these studies weren’t even new. All the way back in 1992, a study found that “ … intravenous drug users who were black or whose primary drugs of choice were injected cocaine and crack were more likely than other groups to misrepresent their current drug use status.” In 1994, a study of “The validity of drug use reports from juvenile arrestees” found that “Race/ethnicity [was] the most important predictor of cocaine use disclosure among those testing positive for this drug.” Yet “Comparing the validity of self-reported recent drug use between adult and juvenile arrestees” finds that “adult arrestees are even more inclined to underreport their recent use of illicit drugs [than youth].”

And even beyond variability in the accuracy of self–reported drug use, the Department of Justice notes an important fact about variability in the self–reports themselves—it turns out that (in 1995) even if black and white respondents did in fact admit to using drugs at equal rates, they weren’t admitting to using the same amount of drugs: “Among black drug users, 54% reported using drugs at least monthly and 32% reported using them weekly. Such frequent drug use was less common among white drug users. Among white users, 39% reported using drugs monthly and 20% reported using them weekly.” The pattern can still be found in the data from 2011 (chart taken from here), where the race ratio of admitted illicit drug use over a lifetime begins at 17 whites for every 15 blacks—drops to 14.9 whites for every 15 blacks for admitted drug use in the past year—and finally inverts to 9 whites for every 11 blacks for admitted drug use in the past month. The 2011 data doesn’t cover admitted drug use in the past week, where the ratio would likely invert even further. And these are the very surveys that, as we have just seen, African–American respondents are less likely to answer accurately in the first place.

If self–reports are a terrible way to estimate actual rates of drug use, then, do we have anything better?

It turns out that we do. While imperfect as well, one of the most reasonable methods we do have of estimating the racial breakdown of drug use in the general population is to look at data on the racial breakdown of hospital admissions—and subsequent medical reports in cases of death—for illicit drug use, data recorded each year by the Substance Abuse and Mental Health Services Administration (SAMHSA), a branch of the U.S. Department of Health and Human Services.

In 1994, reports found that amongst white patients admitted to emergency rooms in cases involving drug use, 14% mentioned the use of cocaine, while 8.4% mentioned the use of heroin, and 6.8% mentioned the use of marijuana. Amongst black patients, these numbers change to a whopping 54.5% for cocaine, 18.4% for heroin, and 10.7% for marijuana. Numbers for Hispanic patients are in between the white and black rates for cocaine (26.5%), more similar to black patients’ reported mention of heroin (18.7%), and more like white patients’ reported mention of marijuana (6.2%). Across the years of 1988–1994, an average of about 33,000 white emergency room visitors mentioned use of cocaine per year. Meanwhile, in the same years, an average of about 59,000 black emergency room visitors mentioned cocaine. Very crudely, if whites were about 75% of the population and blacks were about 12%, then for a population of ten million this would mean that about 0.4% of the white population of 7.5 million and about 4.9% of the black population of 1.2 million were using cocaine—more than a tenfold difference.

For heroin? An average of 18,000 white visitors per year mentioned it, compared to 17,500 black visitors. Plugging our simplified numbers back in, this would mean about 0.24% of the white population and about 1.46% of the black population used heroin across this period of years—more than a sixfold difference. An average of 11,250 white visitors mentioned marijuana, compared to an average of 8,250 black visitors—0.15% of the white population versus 0.688% of the black population—a 4.5 times larger share of the black population. In 1995, medical examiners in cases of death reported cocaine (see table 42) in 32.8% of deceased whites, compared to 69.6% of deceased blacks. Cocaine was reported in deceased Hispanics by medical examiners in between the white and black rates, at 55%; but heroin was highest of all for deceased Hispanics, at 56% (compared to 44.3% for deceased whites and 43.8% for deceased blacks). Across the years of 1987–1995 (see table 43), the Arrestee Drug Abuse Monitoring Program found that an average of 33.3% of white arrestees tested positive in urinalysis for cocaine, whereas an average of 62.6% of black arrestees did.
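Since several numbers were just juggled at once, here is the back–of–the–envelope arithmetic above expressed as a short script. This is a sketch of my own crude calculation, not anything published by SAMHSA: it assumes the simplified 75%/12% population shares and applies the average yearly ER–mention counts to the hypothetical population of ten million used above.

```python
# Back-of-the-envelope arithmetic for the 1988-1994 emergency-room data
# discussed above. Purely illustrative: average yearly ER mentions are
# applied to a hypothetical population of ten million, split 75% white
# and 12% black, exactly as in the prose.

POPULATION = 10_000_000
WHITE_POP = 0.75 * POPULATION  # 7.5 million
BLACK_POP = 0.12 * POPULATION  # 1.2 million

# (drug, average yearly white ER mentions, average yearly black ER mentions)
mentions = [
    ("cocaine",   33_000, 59_000),
    ("heroin",    18_000, 17_500),
    ("marijuana", 11_250,  8_250),
]

for drug, white, black in mentions:
    white_rate = white / WHITE_POP
    black_rate = black / BLACK_POP
    print(f"{drug}: white {white_rate:.2%}, black {black_rate:.2%}, "
          f"ratio {black_rate / white_rate:.1f}x")

# cocaine:   white 0.44%, black 4.92%, ratio 11.2x (more than tenfold)
# heroin:    white 0.24%, black 1.46%, ratio 6.1x  (more than sixfold)
# marijuana: white 0.15%, black 0.69%, ratio 4.6x  (about 4.5 times)
```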

In 2011, we can update these statistics to the following (with the caveat that in about 15% of emergency room visits, race is unknown): of 505,224 ER visits for cocaine, 185,748 (36.7% of the total) were white and 236,089 (46.7% of the total) were black—27% more black visits than white. Altogether, of 1,252,500 ER visits for illicit drug use (including alcohol), 634,593 visitors (50.7% of the total) were white, and 384,317 (30.7% of the total) were black—a much larger black–white ratio than the roughly 12% and 77% shares that African–Americans and Caucasians respectively compose of the general population. While it is difficult to nail down an exact estimate of the racial breakdown of drug use throughout the general public with these numbers, they are most certainly better than rates of self–report, and they most definitely indicate that rates of drug use are in fact higher among African–Americans in general (and for cocaine in particular). Yet, even if African–Americans are simply more likely to use drugs in irresponsible ways resulting in medical problems or death, this too would suggest that these individuals are using drugs in more generally dangerous—as well as publicly visible—ways that would either justify, or help explain, why a higher percentage end up arrested.
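One crude way to quantify how far these shares diverge from the general population is to divide each group’s share of ER visits by its share of the population. The factors below are my own illustration, computed from the numbers just given rather than taken from the report:

```python
# Overrepresentation factors for the 2011 ER figures quoted above: each
# group's share of visits divided by its share of the general population
# (roughly 77% white, 12% black). Illustrative arithmetic only; recall
# the caveat that race is unknown in about 15% of visits.

pop_share = {"white": 0.77, "black": 0.12}

visits = {
    "cocaine":     {"white": 185_748, "black": 236_089, "total": 505_224},
    "all illicit": {"white": 634_593, "black": 384_317, "total": 1_252_500},
}

for drug, v in visits.items():
    factor = {race: (v[race] / v["total"]) / pop_share[race]
              for race in ("white", "black")}
    print(f"{drug}: white {factor['white']:.2f}x, black {factor['black']:.2f}x, "
          f"black/white disparity {factor['black'] / factor['white']:.1f}x")

# cocaine:     white 0.48x, black 3.89x, disparity ~8.2x
# all illicit: white 0.66x, black 2.56x, disparity ~3.9x
```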

Methamphetamine is one drug for which the vast majority of self–reported use, arrests, and hospitalizations are of white users—in 1994, for example, just 5.6% of drug–related emergency room visits mentioning methamphetamine involved black visitors, compared to 70% involving white visitors. In fact, this corresponds fairly closely to the rates of arrests for methamphetamine: “In 2006, the 5,391 sentenced federal meth defendants [nearly as many as the 5,619 crack defendants!] were 54% white, 39% Hispanic and 2% black.” Furthermore, “the federal methamphetamine–trafficking penalties … are identical to those for crack. [Yet] no one calls the federal meth laws … anti–white.”

According to a 2011 report to Congress on the impact of mandatory minimum policies on federal sentencing, “Approximately two–thirds of the 23,964 drug offenders in fiscal year 2010 were convicted of an offense carrying a mandatory minimum penalty. More than one–quarter (28.1%, n=4,447) of drug offenses carrying a mandatory minimum penalty involved powder cocaine, followed by crack cocaine (24.7%, n=3,905), [and then] methamphetamine (21.9%, n=3,466)…. The application of mandatory minimum penalties varies greatly by the type of drug involved in the offense. For example, in fiscal year 2010, a mandatory minimum penalty applied in 83.1 percent (n=3,466) of drug cases involving methamphetamine. Crack cocaine (82.2%) and methamphetamine cases (83.2%) had the highest rates of offenders convicted of an offense carrying a mandatory minimum penalty.” If racism were the cause of the crack–powder cocaine sentencing disparities—which we have already seen is historically misleading anyway—why would racist policymakers turn around and institute equally severe penalties for a drug overwhelmingly used by whites? The report further notes that “The average sentence for methamphetamine offenders who remained subject to a mandatory minimum penalty at the time of sentencing … was 144 months, which is the highest average sentence for any drug type.” Further still, “Black methamphetamine offenders convicted of an offense carrying a mandatory minimum penalty and subject to the mandatory minimum at sentencing had the lowest sentences, on average, of any racial group (131 months)”—compared to 143 months for whites. Methamphetamine would seem to be an excellent case study for testing whether we treat drugs like cocaine seriously as an excuse for jailing African–Americans, or simply because we treat hard drugs in general seriously—regardless of who uses them most.

_______ ~.::[༒]::.~ _______

[1] It should go without saying that the question I am interested in here is not whether the drug war is an effective policy in general—only whether its enforcement is, either in intent or in practice, “racist”. One could perfectly well accept the conclusion that the drug war is not “racist” and still believe that its repeal would be beneficial for black and white Americans alike—I take no position on this question here, except to note that the commonly cited “decriminalization” policies in Portugal treat drug use as a non–criminal, medical health issue, but did not decriminalize drug dealing (in drug policy debates, this is in fact the definition of the word “decriminalization;” but this technical distinction is typically lost on the general reading public, and frequently left unexplained in articles addressing the subject). Furthermore, correlation–causation questions are more complicated than often assumed where the question of what impacts Portuguese drug policy changes have had per se is concerned.

[2] Smack: Heroin and the American City by Eric C. Schneider, p.130–131; p.133

[3] “Organized Crime and Illicit Traffic in Narcotics,” Hearings before the Permanent Subcommittee on Investigations of the Committee on Government Operations, United States Senate (Washington, DC: U.S. Government Printing Office, 1964), 760.

[4] “Mark T. Southall, Leader in Harlem,” New York Times, 30 Jun. 1976, 35; “Mark Southall Dead, Former Assemblyman,” New York Amsterdam News, 3 Jul. 1976, A1. Amongst his requests was “mandatory prison sentences for convicted dope pushers….” “Southall Hits Drugs In Harlem,” New York Amsterdam News, 16 Dec. 1961, 13.

[5] “Dempsey Gratified in His Anti-Dope Drive,” New York Amsterdam News, 1 Sept. 1962, 23.

[6] “Harshest in the Nation: The Rockefeller Drug Laws and the Widening Embrace of Punitive Politics,” by Jessica Neptune

[7] Sundiata Acoli’s August 15, 1983 testimony in United States v. Sekou Odinga et al., in Sundiata Acoli’s Brinks Trial Testimony, a pamphlet published by the Patterson (New Jersey) Black Anarchist Collective, p. 21.

[8] Kes Kesu Men Maa Hill, Notes of a Surviving Black Panther: A Call for Historical Clarity, Emphasis, and Application (New York: Pan-African Nationalist Press, 1992), p. 71; Dhoruba Bin Wahad, interviewed by Bill Weinberg, “Dhoruba Bin Wahad: Former Panther, Free at Last,” High Times 241 (September 1995), < HIGHTIMES.COM > Mag. html > [Accessed January 12, 1999]; Assata Shakur, op. cit., pp. 162–72; Clayborne Carson, In Struggle: SNCC and the Black Awakening of the 1960s (Cambridge: Harvard University Press, 1995), p. 298.

[9] Jack Newfield, “My Back Pages,” Village Voice, January 18, 1973

[10] Gilbert Osofsky, Harlem: The Making of a Ghetto: Negro New York, 1890–1930 (New York: Harper & Row, 1996), 115.

[11] Cary D. Wintz and Paul Finkelman, Encyclopedia of the Harlem Renaissance, vol. 1 (New York: Routledge, 2004), 272.

[12] “Excerpts from Governor Rockefeller’s Message Delivered to the Opening Session of the Legislature,” New York Times, 6 Jan. 1966, 16.

[13] Earl Caldwell, “Group in Harlem Ask More Police,” New York Times, 4 Dec. 1967, 1.

[14] Homer Bigart, “Middle-Class Leaders in Harlem Ask Crackdown on Crime,” New York Times, 24 Dec. 1968, 46.

[15] Richard Reeves, “Survey Confirms Politicians’ Views of Attitudes of Ethnic-Group Voters,” New York Times, 25 Oct. 1970, 1.

[16] Maurice Carroll, “After Crime, Big Issues Are Prices and Fares,” New York Times, 17 Jan. 1974, 36; David Burnham, “Most Call Crime City’s Worst Ill,” New York Times, 16 Jan. 1974, 113; Nathaniel Sheppard, “Racial Issues Split City Deeply,” New York Times, 20 Jan. 1974, 1.

[17] Leroy Clark, “What Does Civil Liberties Mean in the Drug Context?” Amsterdam News, January 13, 1973.

Are African–Americans Disproportionately Victimized by Police?

In general, the most obvious and stubbornly ignored problem with “anti–racist” sociological analysis is that the treatment of a given demographic is simplistically judged to be fair or unfair according to one shallow dimension: whether or not treatment of that demographic matches that demographic’s representation in the population. The basic problem with this sort of reasoning is obvious to anyone who looks at it for a second without blinders: imagine someone arguing that there is rampant bigotry and discrimination against men in the United States justice system, and drawing this conclusion solely because men represent 90% of the U.S. prison population despite being only 50% of the general population. No one would fail, for a second, to notice the crucial step missing in this argument: “Fine, but how much of the violent crime is that male 50% of the population committing? Who says it isn’t somewhere around 90%?” And no one would consider this question to constitute bigotry against men. As a man, no one will consider me “self–hating” for expressing the opinion that this would be a reasonable and justified question. No one will worry that talking about or acknowledging this statistic will perpetuate “stereotypes”, say, that men are as a rule “simply more violent” than women—in fact, no one will be even slightly opposed to considering the possibility that this could turn out to be empirically true. No one will take offense to it.

 _______ ~.::[༒]::.~ _______

One of the most common memes in popular consciousness regarding the relationship between crime, police and race is expressed by the phrase, “driving while black.” The phrase sardonically implies that black drivers are pulled over so disproportionately to their numbers that this can only be because operating a vehicle while black is literally a crime.

Not everyone who uses the phrase is aware of its origins. Writing for NewsOne, Al Sharpton recalled: “In the 1990s … I was among those in the civil rights leadership that raised the country’s awareness on the outrageous policing practice of racial profiling, which systematically singled out minority drivers and disproportionately pulled them over on America’s roadways. … Through my organization … we were able to show that motorists of color were overwhelmingly harassed (…) [and w]hile pushing for reforms, we popularized the phrase … “driving while Black.””

The case Sharpton refers to here took place on the New Jersey Turnpike (see this article from 1993). As it is one of the few cases where we have a comprehensive and objective way to compare actual racial differences in behavior against disparate treatment by police, it serves as an excellent starting point for our discussion—and an excellent demonstration of how deeply unfounded accusations of racism can become implanted in popular awareness as unquestionable truth. In 1995, a New Jersey state judge threw out charges against fifteen black drivers who, in his judgment, had been pulled over without proper cause. During the ensuing trial it emerged that, on a 26–mile stretch at the south end of the New Jersey Turnpike, minorities accounted for a full 46 percent of drivers stopped for speeding—and the case for lawsuits against the New Jersey police was bolstered.

Of course, the crude opening assumption held within the trial was the same one held today in equivalent contexts by “anti–racists”: if black drivers are 13% of the population and 23% of those stopped for speeding, then this can only mean police are deliberately singling out black drivers because of their race. As the trial came to a close, New Jersey officials were ordered to collect more data on rates of speeding on the Turnpike—data which the Justice Department clearly expected to bolster its charges of profiling. So the Public Service Research Institute used specially designed radar–equipped cameras to capture automated photographs of speeders. In the end, 38,747 photographs were captured, and of these only 26,334—those in which at least two out of three evaluators (who were not told which subjects had or had not been speeding) were able to agree on the race of the photographed individual—were used for the analysis. What did it find?

In the southern segment of the turnpike, which had been the primary subject of the previous lawsuit and in which the bulk of the stops had occurred, and where “speeding” was defined as driving at least 80mph in a 65mph zone, 2.7% of black drivers were found speeding compared to 1.4% of white drivers. In other words, black drivers, who were 16% of drivers on the turnpike, were also 25% of the speeders—and when the sample was restricted to drivers going at least 25 miles above the speed limit (that is, 90mph or more), the disparity was even greater.
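As a sanity check on those percentages, the share of speeders who are black can be computed directly from the base rates. A rough sketch of that arithmetic follows; it treats all non–black drivers as speeding at the measured white rate, which is why it lands slightly above the study’s 25% figure.

```python
# Expected black share of speeders on the southern Turnpike segment,
# from the base rates reported above: black drivers are 16% of traffic,
# 2.7% of black drivers speed, 1.4% of white drivers speed. Rough check
# only: the 84% non-black remainder is treated as speeding at the white
# rate, so the result lands slightly above the study's 25% figure.

black_share, black_speed_rate = 0.16, 0.027
other_share, other_speed_rate = 0.84, 0.014

black_speeders = black_share * black_speed_rate
other_speeders = other_share * other_speed_rate

speeder_share = black_speeders / (black_speeders + other_speeders)
print(f"Black share of speeders: {speeder_share:.1%}")  # ~26.9%, vs. 16% of drivers
```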

The Justice Department’s pathetic response was to block the release of the study, making a handful of obviously desperate arguments against the validity of the findings: the main argument offered by Mark Posner, who asked the state attorney general’s office to withhold the study, was that the results may have been skewed by the removal of photos affected by windshield glare from the analysis. Why would anyone expect windshield glare to affect races differently? And even if it did, why would anyone expect it to affect white drivers in particular several times more often than it affects blacks?! Such spurious objections were eventually abandoned, and the study was finally allowed to be released—but not until 2005, three full years after it had been completed (read it here).

From the abstract: “Racial profiling is often measured by comparing the racial and ethnic distribution from police stop rates to race and ethnicity data derived from regional census counts. However, benchmarks may be more appropriate that are based on … the population of traffic violators. … The results revealed that the racial make–up of speeders differed from that of nonspeeding drivers and closely approximated the racial composition of police stops. Specifically, the proportion of speeding drivers who were identified as Black mirrored the proportion of Black drivers stopped by police.” That is what the most thorough analysis of the most objective possible data—recorded by automated machines, which can’t very plausibly be accused of racist bias; and classified so thoroughly that more than 12,000 out of 38,000 uncertain photographs were thrown out just to be safe—found. Black drivers were being stopped more frequently because they were speeding more frequently—period. Had the Public Service Research Institute not been ordered to collect this information after the fact, the claim that the New Jersey Police Department was practicing egregious racial profiling on the Turnpike would most likely have continued to stand without serious challenge.

 _______ ~.::[༒]::.~ _______

Similarly, in all the discussion of the disparate treatment of minorities by police, what almost every conversation systematically lacks is any contextual awareness of how much crime minorities are or aren’t responsible for in the first place. Now, as then, most analysis of the relationship between race and crime simply compares a group’s statistical outcomes to its representation in the population—which completely leaves aside the only relevant question: how does the group comprising that percentage of the population behave on average? Again, no one would ever fail for a moment to keep this question in mind if the topic were the crime or imprisonment rates of men in general.

And the truth is that African–Americans are (like men) responsible for an extremely disproportionate amount of crime in the United States in general. Contrary to a popular impression, we are not reliant solely on arrest rates themselves to determine the relative rates of criminal offending here—so police bias does not enter as a confounding factor into the calculation. A primary source of reliable data here is, in fact, reports from victims and witnesses themselves. Using the National Archive of Criminal Justice Data to analyze data from the National Crime Victimization Survey (NCVS), this report[1] finds that: “For the most recent report, the government surveyed 149,040 people about crimes of which they had been victims during 2003. They described the crimes in detail, including the race of the perpetrator, and whether they reported the crimes to the police. The survey sample, which is massive by polling standards, was carefully chosen to be representative of the entire US population. By comparing information about races of perpetrators with racial percentages in arrest data from the Uniform Crime Reports (UCR) we can determine if the proportion of criminals the police arrest who are black is equivalent to the proportion of criminals the victims say were black.

UCR and NCVS reports for the years 2001 through 2003 offer the most recent data on crimes suffered by victims, and arrests for those crimes. Needless to say, many crimes are not reported to the police, and the number of arrests the police make is smaller still. An extrapolation from NCVS data gives a good approximation of the actual number of crimes committed in the United States every year. The NCVS tells us that between 2001 and 2003, there were an estimated 1.8 million robberies, for example, of which 1.1 million were reported to the police. The UCR tell us that in the same period police made 229,000 arrests for robbery. Police cannot make an arrest if no one tells them about a crime, so the best way to see if police are biased is to compare the share of offenders who are black in crimes reported to the police, and the share of those arrested who are black. Figure 1 compares offender information to arrest information for all the crimes included in the NCVS. For example, 55 percent of offenders in all robberies were black, 55.4 percent of robbers in robberies reported to police were black, and 54.1 percent of arrested robbers were black.”

What this implies for the supposition that police disproportionately target minorities for arrest is surprising: “For most crimes, police are arresting fewer blacks than would be expected from the percentage of criminals the victims tell us are black (rape/sexual assault is the only exception). In the most extreme case, burglary, victims tell police that 45 percent of the perpetrators were black, but only 28 percent of the people arrested for that crime were black. If all the NCVS crimes are taken together, blacks who committed crimes that were reported to the police were 26 percent less likely to be arrested than people of other races who committed the same crimes. These figures lend no support to the charge that police arrest innocent blacks, or at least pursue them with excessive zeal. In fact, they suggest the opposite: that police are more determined to arrest non–black rather than black criminals.”
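The report does not spell out the formula behind its “26 percent less likely to be arrested” figure, but one plausible way to compute a number of that kind is to compare each group’s implied arrest likelihood: its share of arrestees divided by its share of reported offenders. The sketch below applies that logic to the burglary numbers quoted above; it is my own reconstruction of the reasoning, not the report’s published method.

```python
# One plausible reconstruction of a "less likely to be arrested" figure:
# compare the arrest likelihood implied for black offenders (share of
# arrestees / share of reported offenders) against the same quantity for
# everyone else. My own illustration of the logic, not necessarily the
# report's published formula.

def relative_arrest_likelihood(offender_share_black: float,
                               arrestee_share_black: float) -> float:
    black_likelihood = arrestee_share_black / offender_share_black
    other_likelihood = (1 - arrestee_share_black) / (1 - offender_share_black)
    return black_likelihood / other_likelihood

# Burglary, per the report: 45% of reported offenders were black, but
# only 28% of arrestees were.
r = relative_arrest_likelihood(0.45, 0.28)
print(f"Burglary: blacks {1 - r:.0%} less likely to be arrested")  # ~52%
```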

While it may appear to be a glitch in this analysis that “more crime victims report crimes to police when the criminal is black than when he is of another race”, the reason for this appears to be that: “NCVS victims are more likely to call the police about more serious crimes within the same category—for example, if a robber had a gun or a knife. According to NCVS victims, blacks are nearly three times more likely than criminals of other races to use a gun and more than twice as likely to use a knife. Therefore, even within the same crime categories, blacks are committing more serious offenses—which makes it even more striking that police are less likely to arrest them than criminals who are not black.”

To be clear, these facts hold regardless of the reason why they are true—no implication that blacks are ‘more dangerous’ when all else is held equal is necessary in order for this point to stand on its own irrefutable merits. One partial explanation of higher crime rates in black communities, for example, is surely that most crimes are committed across the peak ages of 15–25—and the African–American population skews closer towards this younger demographic than others (as of 2011, the median age of the African–American population was several years below that of the white population). Still, if—as I could accept—there is no independent “race factor” in the perpetration of crimes whatsoever, and the real explanation of “higher African–American criminality” lies in some incidental factor like the younger age structure of the African–American population, then it would follow that the explanation for disproportionate African–American encounters with police, too, lies in the younger age structure of the African–American population (or whatever other factor or combination of factors might be deemed most relevant)—and not in discrimination on the basis of race.

In closing, I note this study from 2013: No evidence of racial discrimination in criminal justice processing: Results from the National Longitudinal Study of Adolescent Health — “One of the most consistent findings … is that African American males are arrested, convicted, and incarcerated at rates that far exceed those of any other racial or ethnic group. This racial disparity is frequently interpreted as evidence that the criminal justice system is racist and biased against African American males. Much of the existing literature purportedly supporting this interpretation, however, fails to estimate properly specified statistical models that control for a range of individual-level factors… …This racial disparity … was completely accounted for after including covariates for self–reported lifetime violence and IQ.”

 _______ ~.::[༒]::.~ _______

A 2002 study in the American Journal of Public Health found that from 1988 to 1997, the death rate due to what the CDC calls “legal intervention” was three times higher for blacks than for whites: “Of the 5486 total deaths due to legal intervention during the 19–year period 1979 to 1997, 5330 decedents (97%) were male. Whites accounted for 3447 deaths (63%), Blacks for 1885 deaths (34%), and “others” for 154 deaths (3%). … mortality rates for both White and Black males were highest in the 20–to 24–year-old age group … [which] roughly parallels the age distribution of death rates for homicides due to all causes, which peaks at 15 to 24 years….” Of course, once again, no one thinks to take a study like this as evidence of systematic bias against all men; and few would doubt—much less consider it sexist—that differences between male and female behavior during interactions with police are a major factor in this statistic.

However, once again, when we control for the actually relevant data—violent crimes committed—as a proxy measurement for interactions with police (which is, as we have seen, well empirically supported) we find the following:

2012 Violent Crime Rate (per 100,000):
Non–Black: 122.7
Black: 465.7

2012 Deaths By “Legal Intervention” (per 100,000):
Non–Black: 0.15
Black: 0.32

(Sources: 1, 2)

Placing the two together, we can compute deaths by “legal intervention” per violent crime committed.

Non–Black: (122.7/.15) =
1 non–black death by “legal intervention” per 818 violent crimes committed.

Black: (465.7/.32) =
1 black death by “legal intervention” per 1,455 violent crimes committed.

Those two numbers answer the question, “how many violent crimes did it take to result in one death by ‘legal intervention’ for blacks and non–blacks?” And once again, once we control for the figure that is actually relevant—the number of violent crimes committed, which accounts for the number of encounters with police—we find that it is in fact non–blacks who are most likely to have a fatal encounter with police: 1.78 times as likely, in fact—almost twice as likely. In other words, any given black individual who commits a violent crime has a 0.0687% chance of dying in an encounter with police—while any given non–black individual who commits a violent crime has a 0.122% chance of dying in an encounter with police. The non–black risk of death by “legal intervention” is, in other words, 0.0535 percentage points higher. [See more] Again, the term “non–black” is used here because “white” and “Hispanic” appear to be lumped together in this data. But for the “white” risk of death by “legal intervention” to be lower than the black risk, the “Hispanic” risk would have to be dramatically larger than both the white and black risks in order for the numbers to balance out. Given that almost all statistics conflate “whites” and “Hispanics,” this is a difficult question to resolve, but there are a variety of reasons why it is extremely unlikely (one of these will be discussed below). [See more]
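For convenience, here is the whole chain of arithmetic consolidated into one place. This simply restates the calculation already worked through above, using the 2012 figures as given:

```python
# Deaths by "legal intervention" per violent crime committed, using the
# 2012 per-100,000 figures quoted above. This consolidates the arithmetic
# already worked through in the prose.

violent_crime_rate = {"non-black": 122.7, "black": 465.7}  # per 100,000
death_rate         = {"non-black": 0.15,  "black": 0.32}   # per 100,000

for group in ("non-black", "black"):
    crimes_per_death = violent_crime_rate[group] / death_rate[group]
    risk_per_crime = death_rate[group] / violent_crime_rate[group]
    print(f"{group}: 1 death per {crimes_per_death:,.0f} violent crimes "
          f"(risk per crime: {risk_per_crime:.4%})")

relative_risk = ((death_rate["non-black"] / violent_crime_rate["non-black"])
                 / (death_rate["black"] / violent_crime_rate["black"]))
print(f"Non-black risk per crime is {relative_risk:.2f}x the black risk")  # ~1.78x
```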

Similarly, extrapolating from data peculiar to New York, Heather MacDonald writes that: “…blacks in New York are less likely than whites to be killed by the police when their higher rates of using mortal force against the police are taken into account. In 2011, for example, New York officers fired at 41 suspects and killed nine of them — an astonishingly low number in light of New York’s population and the size of its police force. … Blacks were 22% of those fatalities; whites were 44% of them. Yet blacks were 67% of all suspects who fired at the police; no white suspect fired at the police. … This pattern holds nationally. The black percentage of suspects killed by the police, historically around 29%, is lower than one would expect based on the best available data on those who represent a mortal threat to the police … In 2013, for example, blacks made up 42% of all cop killers whose race was known, even though they are only 13% of the nation’s population.” Once again, measuring the actually relevant data not only reveals no indication that black suspects are treated unfairly in response to their behavior, but that if anything, this response is less than actual black rates of violence would render statistically justifiable.

Two other conclusions from the aforementioned study, “Trends in Mortality Due to Legal Intervention in the United States,” bear notice: (1) “Legal intervention is an uncommon external cause of death, accounting for roughly 1% to 2% of all homicides.” And: (2) “Absolute numbers of yearly deaths due to legal intervention, as well as rates of death for all age– and race–specific categories examined, decreased significantly from 1979 to 1988 and did not display statistically significant trends thereafter. This decline roughly parallels a concurrent decline in the overall homicide rate during this period.” In other words, as crime goes down, so does the number of criminal suspects who die in encounters with police—this should be obvious, but it puts the lie to any impression that instances of “police brutality” are either on the rise (for proper causes or not), or unrelated to the actions of violent criminals themselves.

One hardly irrelevant contributing factor to this declining homicide rate is, in fact, the incarceration rate—see the 2007 study, From the Asylum to the Prison: Rethinking the Incarceration Revolution, from Bernard E. Harcourt, Professor of Law at the University of Chicago. Page three includes a chart plotting aggregated institutionalization against homicide rates, and the study states: “Th[is] relationship between aggregated institutionalization and homicide rates … is remarkable…. Later…, I test and quantify the relationship and find that, … holding constant the leading structural covariates of homicide (poverty, demographic change, and unemployment), the relationship is large, statistically significant, and robust.”

Notice: because most violent crime—especially murder—takes place within, rather than between, racial groups, when black criminals are stopped it is their primarily black victims who gain the most (this point may be brought up again in a later discussion of New York’s stop–and–frisk policies). It is hard to see, in principle, why “racism” would explain police putting themselves in harm’s way to act for the benefit of violent black criminals’ primarily black victims. Wouldn’t the truly “racist” move be to go somewhere else and ‘avoid the ungrateful bastards’ entirely?

_______ ~.::[༒]::.~ _______

All of this data coincides, as well, with a relatively recent experimental study which tested, for the first time, how quick officers would be to pull the trigger on suspects of various racial demographics ‘in the field.’ This study is notable and important for two reasons: first, the officers in this study were told that they were being tested for shooting errors (which meant either shooting unarmed suspects, or failing to shoot armed suspects) and speed—and were given no reason whatsoever to believe they were being tested for racial bias (as the study authors further note, “there were no racially charged events or news stories in the area at the time.”)

Second, all previous findings have been based on what are known as Implicit–Association Tests (IATs). In short, these tests work by showing a screen with something like ‘White’ on the left–hand side and ‘Black’ on the right–hand side, then flashing positive or negative words in the middle and requiring participants to press either a left or right arrow as quickly as possible. The speed at which participants sort positive or negative words to one side or the other is supposed to demonstrate the degree to which they subconsciously view whites or blacks in a positive or negative way.
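For readers unfamiliar with the mechanics, here is a toy sketch of how an IAT–style score is derived from response times. It is deliberately simplified, with made–up latencies; the real scoring procedure (the “D–score”) adds error penalties and block–weighting details omitted here.

```python
# Toy sketch of IAT-style scoring: compare mean response times between a
# "compatible" pairing (e.g., White + positive sharing a key) and an
# "incompatible" pairing (White + negative), standardized by the spread
# of all trials pooled. Simplified relative to the real D-score, which
# adds error penalties and block-weighting rules; latencies are made up.

from statistics import mean, stdev

compatible_rts   = [642, 701, 688, 720, 655, 690]  # ms, hypothetical trials
incompatible_rts = [810, 795, 850, 902, 780, 835]  # ms, hypothetical trials

pooled_sd = stdev(compatible_rts + incompatible_rts)
d_score = (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# A positive score means slower responses under the incompatible pairing,
# the pattern conventionally read as an "implicit preference."
print(f"Simplified D-score: {d_score:.2f}")
```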

A variety of conceptual and empirical problems plague the question of how well the results of IATs actually apply to the real world: first, it is unclear whether IATs actually measure bias held personally by the participant himself—it could be just as likely that in split–second scenarios, forced to apply a word either to one category or the other, the participant is simply demonstrating knowledge of a cultural stereotype rather than any sort of ingrained personal conviction in its truth. I have no doubt, personally, that if I were given a screen with the word “Blonde” on one side and “Brunette” on the other and forced to slide “Dumb” to either the left or the right, I would select “Dumb Blonde” over “Dumb Brunette”—simply because I am aware of the former’s existence. I am extremely skeptical that this would suffice to show that I would unfairly judge the same woman exhibiting the same identical behavior as less intelligent if she were blonde rather than brunette. (Note: even though the stereotypical “dumb blonde” is attractive, and attractiveness is in fact statistically associated with higher IQ, there may nonetheless be an empirical basis behind the “dumb blonde” stereotype just as there is for the stereotype of the dumb athlete, even though athletic capability is statistically associated with higher IQ as well. These questions are more complex than either people who too readily dismiss, or too readily accept, common stereotypes usually realize.)

Next, what if these perceptions are actually justified? I can recognize—as I do—that it is rational to associate men more closely with violent crime than women because men commit a massive majority of violent crimes, even though a majority of men do not commit violent crimes and it therefore makes no sense to expect any given man to be more probably violent than not on the basis of this fact. In other words, I can recognize that it makes sense to associate men with more “violent” words without any implication following that if all else is held equal, I am going to be biased against a man exhibiting identical behavior to a woman in some given scenario—say, that I would expect a man knitting while wearing an egg–white face mask watching Days of Our Lives in a room full of cats to be intrinsically more prone to violence than a woman doing the same, or a female police officer with a mean, orcish face to be less violent than a similar–looking male cop. Likewise, it should be just as possible to associate criminality with African–Americans in exactly the same way without it following that I am biased to see any African–American exhibiting identical behavior in an identical scenario as more inherently prone to violent behavior than a Caucasian (say, expecting the economist Thomas Sowell to be more violent than Thomas Piketty). (See also the second paragraph of the first footnote, which starts: “Criminologists estimate that seventy percent of all crimes are committed by just seven percent of the offenders….”)

Third, even if implicit–attitude tests do measure personally held beliefs and not the mere awareness of stereotypes, and even if these beliefs are irrationally overextended by those who hold them to individuals in irrelevant situations rather than merely believed as true generalizations (which people full well understand carry exceptions), it is still true that people can very easily overcome these split–second biases—even within the confines of the IAT itself.

Quoting from Alfred Mele and Joshua Shepherd in Situationism and Agency:

“Xiaoqing Hu and colleagues (2012) had participants take an IAT and then take it again. On the second trial, they separated participants into four groups. Group 1 simply repeated the IAT to test for the influence of task repetition. Group 2 repeated the incompatible response block of the IAT three times to test for the influence of practice. Group 3 was explicitly instructed to speed up their responses in incompatible response situations. Group 4 was told the same thing as group 3, and they were also given more time to practice; they repeated the incompatible response block three times, just like group 2. … If a conscious intention to speed up responses is to be effective, one would expect group 3 to respond faster than group 1 in the incompatible response conditions. One would also expect group 4 to respond faster than group 2 in the incompatible response conditions. This is what happened (Hu et al. 2012, p. 3, Table 1). Group 3 improved response time by 168 ms (from 902 ms to 734 ms), while group 1 improved response time only by 45 ms (from 950 ms to 905 ms). Compared with group 2, group 4 significantly improved response time as well. Practice certainly seemed to help: group 2 improved response time by 80 ms (from 922 ms to 842 ms). But group 4 improved response time by 215 ms (from 858 ms to 643 ms). … That both a conscious intention and training in speeding up responses had large effects on behavior constitutes important evidence in favor of our optimism. Participants were, in effect, asked to control the influence of implicit attitudes on behavior at a very rapid time scale—less than a second. Participants informed about the influence of implicit attitudes on behavior were able to successfully control the influence of these implicit attitudes. This directly counters the common assumption that implicit attitudes influence behavior in ways not susceptible to conscious control. Knowledge about effects on agents that normally fly under the radar of agents’ consciousness can give people the power to weaken those effects. The fact that relevant knowledge can do this at such rapid time scales is striking, and it speaks against a pessimistic perspective on agential control.”

Thus, the findings of this study from 2012 put to the test, for the first time, the premise that previous tests of the racial biases of police officers had merely held as an unquestioned assumption: how does any of this apply in practice once we actually make our way out into the field? And the results: “In all three experiments using a more externally valid research method than previous studies, we found that participants took longer to shoot Black suspects than White or Hispanic suspects. In addition, where errors were made, participants across experiments were more likely to shoot unarmed White suspects than unarmed Black or Hispanic suspects, and were more likely to fail to shoot armed Black suspects than armed White or Hispanic suspects. In sum, this research found that participants displayed significant bias favoring Black suspects in their decisions to shoot.” The paper also references a 2007 study by Joshua Correll, ‘The influence of stereotypes on decisions to shoot,’ of which it says: “ … unlike civilian participants, the sample of police officers showed no significant racial bias in their errors (they did not mistakenly shoot unarmed Black suspects or fail to shoot armed White suspects disproportionately). Correll and his colleagues suggested that: “by virtue of their training or expertise, officers may exert control over their behavior, possibly overriding the influence of racial stereotypes….””

Where the 2012 study improved over the 2007 study was that it, for the first time, ran this experiment with an actual weapon, rather than asking participants to press a button that simply says “Shoot” or “Don’t Shoot” in a video game—as the authors write in explanation, “Firing a handgun is a complicated endeavor; at minimum, it involves un–holstering, bringing the weapon to a ready position, aligning sights with the target, and ultimately pulling the trigger. Pushing a button is a simple reflex, dramatically different to the complex process involved in shooting a firearm. Furthermore, there is no active difference between pressing a “shoot” and a “don’t shoot” button. The same action is required for a decision to shoot and a decision not to shoot, whereas in field encounters a decision not to shoot is marked by inaction.” This should very obviously recall the points just made regarding the speed at which participants in IAT experiments are found capable of adjusting and controlling their “biased” responses: the very existence of a handgun, in the real world, weakens any implications that are supposed to extend from those findings by inherently increasing the time available for a decision. Adding only the complexity of firing a handgun, rather than making a simple button press, was enough to change the findings of the experiment this substantially. How much more would it impact these dynamics to factor in extended interaction with the actual behavior of a real, live suspect?

In any case, the study found that “[active duty police] participants [in experiment 3] took significantly (1.34 s) longer to shoot Black suspects than White suspects … … we calculated that [active duty police] participants were 25 times less likely to shoot unarmed Black suspects than they were to shoot unarmed White suspects …. There was no significant difference between the likelihood of shooting unarmed Hispanic suspects and unarmed White suspects. … [active duty police] participants were equally likely to fail to shoot armed White, Black, and Hispanic suspects at each level of difficulty.” Critically, the hesitation towards Black suspects became greater—not lower—as uncertainty rose in the more difficult experiments: “There was … a significant interaction between suspect race/ethnicity and scenario difficulty; participants were most likely to shoot unarmed White suspects in journeyman [the highest difficulty–level] scenarios.” Not only does this, the best experimental data so far available, converge with the analysis of the actual empirical data presented above; it also adds strong support to the conclusion that the “Hispanic” death–per–crime rate is not illegitimately inflating the high “white” death–per–crime rate presented there—and implies that the explanation for this disparity in the data is that police simply do hesitate more to fire at armed black suspects in particular.

In conclusion: the best empirical evidence and the best experimental data agree. 

_______ ~.::[༒]::.~ _______

If police hesitate more to fire at armed black suspects, why might that be? The authors of the study quote a 1977 study (which already, then, found that police shootings of black suspects were proportionate to black suspects’ disproportionate shooting of police officers) giving the obvious answer: “ … police behave more cautiously with Blacks because of departmental policy or public sentiment concerning treatment of Blacks….”

In other words, when a white suspect is shot, the police department doesn’t face the prospect of accusations of racism—so there is quite simply less to worry about. And cases which have been widely covered in the media over the past handful of years demonstrate clearly just how powerful these accusations can be, even when the evidence for “racism” hangs on demonstrably slender threads. In more than one incident, the entire life of an individual who had merely defended himself against assault was changed in such a way as to render any return to normalcy impossible, due to death threats (and worse), because the suspect who assaulted him was black—and accusations of racism therefore entered the picture and permanently clouded all future evaluation of the actual evidence or facts of the case.

These instances show very clearly that the facts carry very little weight once accusations of racism are made. Meanwhile, a variety of cases of at least seeming police brutality against white victims went almost wholly ignored, except in small pockets of conservative media where they were used instrumentally as counter–points to the prevailing racial narrative—and even there, they weren’t being investigated out of any genuine intrinsic interest.

For the sake of brevity, I’ll restrict myself for now to a brief discussion of the facts in the case involving the eighteen–year–old Michael Brown and Officer Darren Wilson, which sparked the 2014 riots in Ferguson, Missouri and elsewhere. The original narrative would have it that events unfolded something like this: Wilson was a police officer going about an ordinary day just like any other when he was suddenly so incensed to see a well–educated, upstanding black man walking the street that he dove out of his vehicle in rage—at which point Michael Brown dropped to his knees and held his hands in the air in surrender—at which point Wilson, unmoved by the display, proceeded to shoot Brown in the back until he collapsed, and then continued firing rounds into the lifeless corpse solely to vent his unrelenting and baseless hatred of upstanding, college–bound African–Americans.

Reality couldn’t have been more different.

The incident started when Brown stole from a local convenience shop—cameras caught him strong–arming the clerk.

Next, Darren Wilson encountered Michael Brown jaywalking in the middle of a street, and stopped to ask him to move to the sidewalk. Forensic evidence confirmed that Brown’s response was to attack Wilson by reaching through the car window and trying to grab Wilson’s firearm: a shot was fired inside the vehicle, confirming Wilson’s account of a struggle; residue on Brown’s right hand confirmed that his hand had been within very close range of the shot (which could not have taken place later); and Brown’s DNA was found inside the vehicle. Once this attempt proved unsuccessful, Brown fled and Wilson pursued (as this was now, after all, a case of assault, if not attempted murder).

Finally, when Wilson came within range of Brown and ordered him to freeze, by Wilson’s account as well as that of the most credible witnesses, Brown paused, made some sort of gesture, and then charged back towards Wilson in something like a football tackle pose. The U.S. Department of Justice’s report notes that “Brown’s blood in the roadway [as well as the pattern of shell casings] demonstrates that Brown came forward at least 21.6 feet from the time he turned around toward Wilson”—once again, the forensic evidence conclusively supported Wilson’s account. Wilson wasn’t pursuing Brown when the fatal rounds were fired—he was backpedaling away in defense.

The witnesses who had claimed anything otherwise were all resoundingly discredited. Every witness whose statements were compatible with the irrefutable forensic evidence corroborated Wilson’s account of events. As the DOJ report concluded, “While credible witnesses gave varying accounts of exactly what Brown was doing with his hands as he moved toward Wilson … they all establish that Brown was moving toward Wilson when Wilson shot him. Although some witnesses state that Brown held his hands up at shoulder level with his palms facing outward for a brief moment, these same witnesses describe Brown then dropping his hands and “charging” at Wilson.”

_______ ~.::[༒]::.~ _______

Meanwhile, at least (for the sake of brevity) one significant case in which a white suspect was killed under questionable circumstances—by a black cop, no less!—received effectively no attention whatsoever: just two days after the shooting of Michael Brown, a black police officer was cleared of wrongdoing after shooting Dillon Taylor within mere seconds of a rushed encounter, when the officer suspected Taylor was reaching for a weapon as the latter moved his hands towards his waistband—most likely for the simple purpose of pulling his pants up. Taylor later turned out to be unarmed, and a body camera captured the entire event on film. No national outrage followed. No riots took place. No buildings were burned. There wasn’t a single case where black citizens were randomly attacked in retaliation, or where black protesters sympathetic to the white victim were lured into attacks by groups of whites, or smashed with hammers while wearing ‘Stop Killing White People’ t–shirts for suggesting not to destroy unrelated businesses.

Unlike the case of Michael Brown, Dillon Taylor hadn’t robbed any stores or strong–armed any clerks, and he neither charged in the officer’s direction nor attempted to take his weapon away from him. In the very similar case of Tamir Rice, officers were called to the scene in response to reports that a child was walking around carrying and pointing what looked to all outside appearances to be a real weapon—and the officer who arrived at the scene fired hastily when Rice very clearly reached for that perfectly visible and obvious object. Unlike in the case of Tamir Rice, Dillon Taylor was not carrying even a replica of a weapon—and unlike the case of Tamir Rice, there isn’t even a Wikipedia entry I can link to here for further details behind the case of Dillon Taylor. The “Justice for Dillon Taylor” Facebook page has around 5,000 followers. The “Justice for Tamir Rice” page has around 8,000. The “Justice for Michael Brown” page has almost 30,000. Only one member of this group was proven by overwhelming forensic evidence to have attacked their killer first—possibly in an attempt at murder—without provocation after committing an aggressive crime.

Yet, when the naive version of the story of the Michael Brown shooting was finally refuted once and for all, the attitude of many protesters was represented by this statement to one reporter: “Even if you don’t find that it’s true, it’s a valid rallying cry … It’s just a metaphor.” Such is the nature of recent national incidents in which race was claimed to play a major role: it simply doesn’t matter, individually, whether or not any particular claim is actually true.

But the truth is that there is more, and not less, outrage when a black suspect is victimized—not only when comparing situations which are similar (Tamir Rice vs. Dillon Taylor), but even when comparing cases where the black suspect is in fact a violent aggressor and the white suspect is not (Michael Brown vs. Dillon Taylor). In turn, this outrage leads to increased public awareness when a black suspect is victimized, and relative public ignorance when a white suspect is the victim. Disproportionate outrage towards the victimization of black suspects is fueled by the perception that black suspects are disproportionately victimized by police. The perception that black suspects are disproportionately victimized is created through nothing other than the very existence of that same disproportionate outrage.

The perception of racism in policing is like an ouroboros,
fueling its disproportionate outrage by consuming
the tail of its own disproportionate outrage.

_______ ~.::[༒]::.~ _______

[1] To be clear, the organization behind this report is American Renaissance. I rely on it here solely because it is one of the few discussions of the racial breakdown in victim reports I was able to find, and to reject this piece of data because of its source would be to throw the baby out with the bathwater. Notably, in his rebuttal to it, even Tim Wise says nothing about the report’s discussion of victim reports—he only attacks further extrapolations from the data derived from them, and I require none of those other points for my much more limited purposes here.

While Wise’s response has further problems of its own and I fully endorse neither the original report nor Tim Wise’s critique of it, Wise does rightly note that: “Criminologists estimate that seventy percent of all crimes are committed by just seven percent of the offenders: a small bunch of repeat offenders who commit the vast majority of crimes. Since blacks committed roughly 1.2 million violent crimes in 2002, if seventy percent of these were committed by seven percent of the black offenders, this would mean that at most there were perhaps 390,000 individual black offenders that year. In a population of 29.3 million over the age of twelve, this would represent no more than 1.3 percent of the black population that committed a violent crime in 2002. [If blacks committed 1.2 million violent crimes in 2002, and 70 percent of these were committed by 7 percent of the offenders, then 30 percent were committed by the remaining 93 percent of offenders. 30 percent of 1.2 million offenses is 360,000 offenses. 360,000 represents 93 percent of 387,000. If the remaining 70 percent of offenses (840,000) were committed by 7 percent of the population, this means that these crimes were committed by 27,000 hardcore offenders (7 percent of 387,000)].” This point is entirely valid—and also irrelevant to anything I have argued or need for the purposes of my argument here.  As I wrote previously, “I can recognize—as I do—that it is rational to associate men more closely with violent crime than women because men commit a massive majority of violent crimes, even though a majority of men do not commit violent crimes and it therefore makes no sense to expect any given man to be more probably violent than not on the basis of this fact.”
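For what it’s worth, the bracketed arithmetic in Wise’s passage checks out, and can be verified in a few lines (assuming, as the quote does, that each of the non–hardcore offenders committed exactly one offense):

```python
# Verifying the bracketed arithmetic in Wise's passage: 1.2 million
# violent crimes, 70% of them committed by 7% of offenders. Assumes, as
# the quote does, one offense apiece for the non-hardcore 93%.

total_offenses = 1_200_000
rest_offenses = 0.30 * total_offenses        # 360,000 offenses by the 93%
total_offenders = rest_offenses / 0.93       # ~387,000 offenders in all
hardcore_offenders = 0.07 * total_offenders  # ~27,000 hardcore offenders

black_population_over_12 = 29_300_000
print(f"Total offenders:    {total_offenders:,.0f}")     # ~387,097
print(f"Hardcore offenders: {hardcore_offenders:,.0f}")  # ~27,097
print(f"Share of black population over 12: "
      f"{total_offenders / black_population_over_12:.1%}")  # ~1.3%
```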