Consciousness (XIII) — The Epistemology of Death

_______ ~.::[༒]::.~ _______

Part 1.

The first fact about the near death experience worth considering is that the very existence of the experience is already incredibly unlikely on the assumption that the physical structure of the brain “is,” or “produces,” the subjective experiences of the mind. In other words, the very existence of the near death experience provides evidence against the very assumption used to rule out the possibility that near death experiences could represent something “real.” Forget the fact that these are “near death” experiences—the most basic and fundamental reason to find the near death experience intriguing is quite simply that it should be surprising to the materialist that it happens at all. The sheer fact that experiences of this type are even capable of happening at the time at which they occur itself provides reasonable probabilistic evidence against the hypothesis that first–person subjective, qualitative experience is produced by the otherwise blind motion of inert physical structures (in a brain or otherwise)—a hypothesis which, throughout this series, I have adamantly contended is (1) a philosophical hypothesis to begin with, not a scientific one; (2) not clearly rendered more probably true by any particular scientific facts; and (3) opposed by entirely plausible, strong philosophical arguments, while strikingly lacking support, given the fervency with which belief in it is so often held, from any particular strong philosophical arguments in its defense.

People who undergo Near Death Experiences describe them as feeling “more real than real.” And experiments confirm that memories of Near Death Experiences are indeed more vivid than memories of truly experienced events, and that, in brain scans, recall of them looks nothing like recall of imagined memories. Quite plainly, if subjective conscious experiences are without exception either the product of, or identical to, physical brain activity, then we should expect the subjective intensity of experience to correlate directly with the objective intensity of brain activity. Yet, in the Near Death Experience, this is categorically the opposite of what we actually see. “Cooper and Ring noted that [in ordinary waking life] a hallucination is accompanied by heightened brain activity. But their studies produced data showing that NDEs happened more often when neurophysiological activity was reduced, not increased. Sabom also found that NDEs were more likely when the person was unconscious for longer than 30 minutes; Ring found that the closer people were to physical death, the more extensive the NDE.” [1] And other research continues to confirm that NDEs tend to be deeper—even with more reports of “enhanced cognitive powers” (such as the “enhanced powers” of memory recall during the “life review”), no less—the closer the subject is to death.

As Sam Parnia and Peter Fenwick write, “The occurrence of lucid, well–structured thought processes together with reasoning, attention and memory recall of specific events during cardiac arrest (NDE) raise a number of interesting and perplexing questions regarding how such experiences could arise. These experiences appear to be occurring at a time when cerebral function can be described at best as severely impaired, and at worst absent.” Bruce Greyson concurs: “The paradoxical occurrence of heightened, lucid awareness and logical thought processes during a period of impaired cerebral perfusion raises particularly perplexing questions for our current understanding of consciousness and its relation to brain function. A clear sensorium and complex perceptual processes during a period of apparent clinical death challenge the concept that consciousness is localized exclusively in the brain.” That even the skeptics recognize that this is true is supported by the observation that one of the most common skeptical approaches is to argue that the near death experience actually doesn’t happen during clinical death, but is reconstructed at some other time (however implausible this suggestion may be—for reasons we will see, as well as one we already have: recall of memories of the near death experience looks nothing like recall of imagined memories, and these memories consistently contain more details than memories of either real or imagined events).

_______ ~.::[༒]::.~ _______


The usual skeptical approach to the near death experience is to outline physical factors that can produce experiences with vague similarities to certain aspects of the near death experience. The effects of a sufficient dose of DMT can be similar to the typical NDE, for example—so perhaps the brain releases DMT as it approaches death. Depriving the brain of oxygen can loosely replicate some features as well, as can electrical stimulation applied to the temporal lobe.

The problem, however, is twofold: first, none of these factors comes anywhere close to doing more than capture a vague resemblance to a small handful of the core characteristics of the near death experience; and second, near death experiences seem capable of happening in an extremely wide range of physical circumstances, so that any particular physical element proposed to play a role in producing the experience appears to be entirely absent in some significant percentage of cases.

Most fundamentally, any attempt to explain the near death experience through physiological features will be undermined by the fact that NDEs can occur simply because death appears to be imminent, without the subject’s being physically near death at all—and when NDEs occur in these circumstances, they carry all the prototypical features of NDEs that occur when a subject is actually physically close to death. And yet, researcher P. M. H. Atwater, pediatrician Melvin Morse, and ICU nurse Penny Sartori, among many others, have all documented the fact that NDEs happen in children under the age of five, and even in children as young as six months old—and in all cases, they carry all the same basic features as they do when they occur to adults. (For more on reports from children’s near death experiences, see: Bush, 1983; Gabbard & Twemlow, 1984; Herzog & Herrin, 1985; M. Morse, 1983, 1994; M. Morse, Conner, & Tyler, 1985; M. Morse et al., 1986; Serdahely, 1990.)

Dr. Jeffrey Long says of his own studies on near death experiences in young children: “ … their average age was 3–1/2 years old. These are children so young that to them, death is an abstraction. They don’t understand it. They can’t conceptualize it. They’ve almost never heard about near–death experiences; have no preconceived notions about that. They certainly have far less cultural influence, both in terms of religion or anything else that could even potentially modify the near-death experience at that tender young age. And yet looking at these same 33 elements of near–death experience that I did in other parts of this study, I found absolutely no statistical difference in their percentage of occurrence in very young children as compared to older children and adults.” (On a related note, NDEs also occur to people who are overtaken by death—say, through sudden cardiac arrest, or being hit by a vehicle they hadn’t realized was approaching—too quickly to have any concept of what is happening.) Facts like these would seem to render a physiological account a more plausible way of dismissively explaining the NDE than a psychological account. Yet, once again, any physiological feature which might bear some relation to the NDE will be missing from many accounts—and some accounts will lack all of them. (Similarly, while positive experiences might theoretically be explained by things like wish–fulfillment, there are both “hellish” experiences and people who simply experience the ordinary phenomenology of the near death experience as hellishly terrifying in its own right.)

Capacity to experience the NDE is not limited by personality type. In The Handbook of Near Death Experiences (2009), Bruce Greyson and Janice Holden conclude a survey of the evidence for personality–type factors in NDEs: “[R]esearch has not yet revealed a [personality] characteristic that either guarantees or prohibits the occurrence, incidence, nature or after–effects of a near death experience.” People who have had NDEs do not differ from those who have not in terms of “‘sociodemographic variables, social support, quality of life, acceptance of their illness, [or] cognitive function (as assessed using a standard instrument, the Mini–Mental State Exam)….” And there is no correlation with prior religion or religiosity, even though “a significant correlation was found between the depth of the NDE and a subsequent increase both in the importance of religion and in religious activity.” Any psychological explanation of the NDE must face the fact that the core structure of the near death experience is as consistent as it is despite its occurrence not being related in any way so far identified to the subject’s prior expectation.

_______ ~.::[༒]::.~ _______


Blood and cerebral levels of oxygen and other gases like carbon dioxide play a major role in skeptical counter–explanations of the NDE. But reduction of oxygen levels to the brain produces confusion, and leads to impairment in memory formation (see also)—yet, as already mentioned, near death experiences are almost always experienced vividly and remembered with striking clarity. Dr. Sam Parnia notes that people whose oxygen levels fall “become agitated and acutely confused … [and] develop ‘clouding of consciousness’ together with highly confused thought processes with little or no memory recall. … those who have NDEs have an excellent memory of the experience, which often stays with them for decades. … [they experience] the complete opposite of an acute confusional state.” Furthermore, “patients with low oxygen levels don’t report seeing a light, a tunnel, or any of the typical features of an NDE … this experience has never been reported by any other doctor or scientific study as a feature of a lack of oxygen.” Blood levels of both oxygen and carbon dioxide have been measured in NDE patients, and sometimes maintained by heart–lung machines—so we have good reason to believe NDEs have occurred in patients without abnormal levels. Admittedly, blood levels of carbon dioxide may not accurately reflect levels present in the brain, so it is possible that a role for carbon dioxide hasn’t been ruled out; still, “raised carbon dioxide was an extremely common problem in clinical practice, [but] we hardly ever saw anyone have an NDE–type event. Also, there [have] been many studies … on the effects of increased carbon dioxide and these [have] not shown that it [leads] to NDE–like states.”

More importantly, the authors of a review in Frontiers in Human Neuroscience write: “In a sudden severe acute brain damage event such as cardiac arrest, there is no time for an experience of tunnel vision from retinal dysfunction, given that the brain is notably much more sensitive to anoxia and ischemia than peripheral organs … Fainting due to arterial hypotension—a common event—does not seem to be associated with the tunnel visions described in NDEs. … NDEs are not reported by patients using opioids for severe pain, while their cerebral adverse effects display an entirely different phenomenology in comparison to NDEs (Mercadante et al., 2004; Vella-Brincat and Macleod, 2007). Morse also found that NDE occurrence in children is independent from drug administration, including opioids (Morse et al., 1986). … Evidence against simple mechanistic interpretations comes also from a well-known prospective study by van Lommel et al. (2001), which showed no influence of given medication even in patients who were in coma for weeks. Factors such as duration of cardiac arrest (the degree of anoxia), duration of unconsciousness, intubation, induced cardiac arrest, and the administered medication were found to be irrelevant in the occurrence of NDEs. Also, psychological factors did not affect the occurrence of the phenomenon: for instance, fear of death, prior knowledge of NDE, and religion were all found to be irrelevant.”

Quoting from page 376 of Irreducible Mind: “Experiences often differ sharply from the individual’s prior religious or personal beliefs and expectations about death (Abramovitch, 1988; Ring, 1984). People who had no prior knowledge about NDEs describe the same kinds of experiences and features as do people more familiar with the phenomenon (Greyson, 1991; Greyson & Stevenson, 1980; Ring, 1980; Sabom, 1982). … If NDEs are significantly shaped by cultural expectations, we might expect that experiences occurring after 1975, when Moody’s first book made NDEs such a well–known phenomenon, would conform more closely to Moody’s “model” than those that occurred before that date. This does not appear to be the case (Long & Long, 2003). Similarly, a study of 24 experiences in our collection that not only occurred but were reported before 1975 found no significant differences in the features reported, when compared to a matched sample of cases occurring after 1984, except that fewer “tunnel” experiences were reported in the pre–1975 group (Athappilly, Greyson, & Stevenson, 2006).”

However, despite the fact that fear of death and religion play no predictive role in whether or not someone will have an NDE, clear differences remain for years after the brush with death between those who have had them and those who have not. Writing in the 2011 book Neuroscience, Consciousness, and Spirituality, Pim van Lommel says that: “ … the infrequently noted fear of death does not affect the occurrence of a NDE either, … whether or not patients had heard or read anything about NDE in the past made no difference … [And] any kind of religious belief, or indeed its absence in non–religious people or atheists, was irrelevant ….” Yet, “Among the 74 patients who consented to be interviewed after 2 years, 13 of the total of 34 factors listed in the questionnaire turned out to be significantly different for people with or without an NDE. The second interviews showed that in people with NDE fear of death in particular had significantly decreased while belief in an afterlife had significantly increased. … [And] after 8 years … clear differences remained between people with and without NDE, … In particular, they were [still] less afraid of death and had a stronger belief in an afterlife.”

Temporal lobe seizures have been proposed to play a role on the basis that temporal lobe epileptic episodes sometimes have some superficial similarities with the NDE, but once again—temporal lobe seizures are associated with dramatic memory loss. Automatisms don’t occur in association with near death experiences, either. As neuroscientist Mario Beauregard writes, “Review of the literature on epilepsy … indicates that the classical features of NDEs are not associated with epileptic seizures located in the temporal lobes … [and] the experiences reported by participants in Persinger’s [transcranial magnetic stimulation] studies bear little resemblance with the typical features of NDEs.” The authors of Irreducible Mind: Towards a Psychology for the 21st Century write (p. 396): “[The] neurosurgeon Wilder Penfield … is widely reported as having produced … NDE–like phenomena in the course of stimulating various points in the exposed brains of awake epileptic patients being prepared for surgery. Only two out of his 1132 patients, however, reported anything that might be said to resemble an OBE: One patient said: ‘Oh God! I am leaving my body.’ Another patient said only: ‘I have a queer sensation as if I am not here… As though I were half here and half there.’ In later studies at the Montréal Neurological Institute…, only one of 29 patients with temporal lobe epilepsy reported “a ‘floating sensation’ which the patient likened at one time to the excitement felt when watching a football game and at another time to a startle” (Gloor et al., 1982, pages 131–132). Such experiences hardly qualify as phenomenologically equivalent to OBE.”

The authors of the earlier Frontiers review conclude: “Anesthesia can suppress consciousness by simply interrupting binding and integration between local brain areas without the need for suppressing EEG activity (Alkire and Miller, 2005; Alkire et al., 2008). This is the reason why, in clinical practice, general anesthesia can be associated with almost normal EEG with peak activity in the alpha band (Facco et al., 1992), while in deep, irreversible coma, consciousness can be lost even with a preserved alpha pattern activity (Facco, 1999; Kaplan et al., 1999). In short, loss of consciousness can occur with preserved EEG activity, while, in the case of a flat EEG, neither cortical activity nor binding can occur; furthermore, short latency somatosensory–evoked potentials, which explore the conduction through brain stem up to the sensory cortex and are more resistant to ischemia than EEG, have been reported to disappear during cardiac arrest (Yang et al., 1997). The whole of these data clearly disproves any speculation about residual undetected brain activity as a cause for some conscious experience during cardiac arrest.”

Bruce Greyson concurs:  “In our collection at the University of Virginia, 22% of our NDE cases occurred under anesthesia,  and they include the same features as other NDEs, … functional imaging studies that have looked at blood flow, glucose metabolism, or other indicators of cerebral activity under general anesthesia (Alkire, 1998; Alkire et al., 2000; Shulman et al., 2003; White & Alkire, 2003) … [confirm that] brain areas essential to the global workspace are consistently greatly reduced in activity individually and may be decoupled functionally, thereby providing considerable evidence against the possibility that the anesthetized brain could produce clear thinking, perception, or memory. …  [And] the situation is even more dramatic with regard to NDEs occurring during cardiac arrest … In cardiac arrest, even neuronal action–potentials, the ultimate physical basis for coordination of neural activity between widely separated brain regions, are rapidly abolished (Kelly et al., 2007). Moreover, cells in the hippocampus, the region thought to be essential for memory formation, are especially vulnerable to the effects of anoxia (Vriens et al., 1996). In short, it is not credible to suppose that NDEs occurring under conditions of general anesthesia, let alone cardiac arrest, can be accounted for in terms of some hypothetical residual capacity of the brain to process and store complex information under those conditions.”

Finally, Van Lommel (in Neuroscience, Consciousness, and Spirituality): “Through many studies with induced cardiac arrest in both human and animal models cerebral function has been shown to be severely compromised during cardiac arrest, with complete cessation of cerebral blood flow (Gopalan et al. 1999), causing sudden loss of consciousness and of all body reflexes, but also with the abolition of brain–stem activity with the loss of the gag reflex and of the corneal reflex, and fixed and dilated pupils are clinical findings in those patients. And also the function of the respiratory centre, located close to the brainstem, fails, resulting in apnoea (no breathing). The electrical activity in the cerebral cortex (but also in the deeper structures of the brain in animal studies) has been shown to be absent after 10–20 s (a flat-line EEG) (De Vries et al. 1998; Clute and Levy 1990; Losasso et al. 1992; Parnia and Fenwick 2002). … Moreover, although measurable EEG–activity in the brain can be recorded during deep sleep (no–REM phase) or during general anesthesia, no consciousness is experienced because there is no integration of information and no communication between the different neural networks (Massimini et al. 2005; Alkire and Miller 2005; Alkire et al. 2008). So even in circumstances where brain activity can be measured sometimes no consciousness is experienced. A functioning system for communication between neural networks with integration of information is essential for experiencing consciousness, and this does not occur during deep sleep or general anesthesia, let alone during cardiac arrest.”

_______ ~.::[༒]::.~ _______


A 2013 study on death from cardiac arrest in rats was interpreted by some skeptics as casting doubt on this when it found that EEG measurements recorded gamma waves (the highest–frequency brain waves) in the brains of rats dying of induced cardiac arrest. This was particularly compelling because, since the late ’80s, it has been proposed that the synchronized firing of neurons in the gamma range could be responsible for how subjective experience becomes “bound”—that is, how experience unifies multiple modes of sensory input in one unitary stream of experience, despite the fact that these processes are spread out in the brain without ever meeting together at any central point that might theoretically represent ‘the place’ in the brain from which we ‘see’ all of these inputs ‘together’. However, more recent studies confirm that gamma waves are not, in fact, direct correlates of conscious perception—“most [previous] studies manipulated conscious perception by altering the amount of sensory evidence, [so] it is possible that they reflect prerequisites or consequences of consciousness rather than the actual [neural correlate of it]. Here we directly address this issue … [and results contradict] the proposal that local gamma band responses in the higher–order visual cortex reflect conscious perception.” Other research shows that gamma waves measured by EEG can represent nothing more than “miniature saccades [eye motions] instead of cognitive or neuronal processes.” (A further review of that data can be found here.)

Sam Parnia notes that “After blood flow to the brain is stopped, there is an influx of calcium inside brain cells that eventually leads to cell damage and death … That would lead to measurable electroencephalography (EEG) activity, which could be what is being measured.” Previous research already existed confirming his suspicion, noting that EEG waves after decapitation, for example, can be “caused by membrane potential oscillations that occur after the cessation of activity of the sodium–potassium pumps has led to an excess of extracellular potassium. … this sudden depolarization leads to a wave in the EEG.” Another review explains: “The term spreading depolarization describes a wave in the gray matter of the central nervous system characterized by swelling of neurons, distortion of dendritic spines, a large change of the slow electrical potential and silencing of brain electrical activity (spreading depression) … Spreading depolarization is induced experimentally by various noxious conditions including chemicals such as potassium….” And the rats were, in fact, killed by an “intracardiac injection of potassium chloride.” Converging lines of evidence suggest that it is entirely probable that no subjective experiences were associated with these EEG waves at all; and in any case, gamma waves have never been measured in any human subjects (much less in subjects who weren’t injected with potassium chloride) in relation to any near death experience. This was yet another case of unfounded media hype, where anything that even remotely seems to support the reductionist case gets easy publicity (to be fair, poorly reasoned points that can be sensationalized tend to get easy publicity in general—but only in the case of claims interpreted as supporting reductionism do so many otherwise intelligent people get so easily suckered in).

As neuropsychiatrist Peter Fenwick and his wife Elizabeth write in a book reviewing more than 300 near death experiences, “While you may be able to find [skeptical explanations] for bits of the Near-Death Experience, I can’t find any explanation which covers the whole thing. You have to account for it as a package and skeptics … simply don’t do that. … They vastly underestimate the extent to which Near–Death Experiences are not just a set of random things happening, but a highly organized and detailed affair.” In short, for every proposal of a particular physiological basis for the near death experience, it remains extremely speculative to suppose that it actually plays any definite role. Substantial problems and difficulties face each individual suggestion; the skeptic skirts this by supposing we can simply combine any number of such factors ad lib to arrive at the NDE’s phenomenology. Of course, the skeptic can always say that there is no special burden to provide a specific justified explanation of the NDE, that any number of variables in any combination could conceivably be triggering the NDE in different circumstances, and that an explanation of this sort should stand as the epistemic default unless it can be categorically disproven by the realist. The playing field, on this approach, isn’t equal (and it renders skeptical counter–hypotheses unfalsifiable for the foreseeable future): the fact that we can’t positively rule out x is supposed to make it unreasonable to believe y; but if we can’t positively rule out y, this isn’t supposed to make it unreasonable to believe x. But why should this be the case?

This could only be asserted because of an assumption that presuming subjective conscious experience to be nothing more than the epiphenomenal byproduct of physical brain activity is an epistemic default due to “parsimony” in the first place—yet it is exactly this position which I have argued is not just epistemically unjustified given that nobody has a damned clue how blind physical processes could possibly “produce” subjective first–person experience (and such mechanisms, whatever they are, may hardly be “parsimonious”); it is falsified by the fact that it would entail that we could neither think nor talk about consciousness–per–se (and despite first appearances, panpsychism doesn’t solve the problem, either). Thus, my interest is not in the question of whether a “realist” interpretation of the NDE can be definitively demonstrated to be undoubtedly true on purely neutral philosophical grounds.

No skeptical counter–hypothesis can be definitively demonstrated to be anywhere near undoubtedly true, either; and the skeptic hardly proceeds from purely neutral philosophical grounds. Indeed, that he does not do so is probably the single most important point to take away from all this: skeptical hypotheses towards the NDE are not believed because of how compelling the independent evidence is in their favor; these hypotheses are believed because of the insistence, born of an a priori conviction in the truth of materialism, that some explanation of the sort simply must be true because materialism in general is. And yet, if anything could possibly count as evidence against materialism, it would be evidence like this—which is dismissed by the materialist because it isn’t compatible with materialism.

My own interest is in what one can reasonably believe. And having argued in detail that one can more than reasonably believe that consciousness is not reducible in principle to physical mechanism (but is, instead, a “bedrock” phenomenon in the world all in its own right), my conclusion extends to entail that one can reasonably believe that the near death experience could very well be just what it appears to be: an experience of the separation of consciousness from the body and brain. To the extent that there is simply no compelling justification (beyond prejudice) for confidence in the philosophical idea that qualitative, subjective experience is wholly and completely reducible to physical mechanism in the first place, there is no compelling justification (beyond prejudice) for confidence that any particular reductionist explanation of the near death experience is especially likely to be true. Any insistence otherwise plainly rests not on the independent plausibility of these reductionist explanations, but instead on the a priori conviction, borne solely of philosophical prejudice, that some reductionist explanation must be true. With this a priori conviction in place, the fact that it is conceivable that the patient near death has some residual brain activity we can’t currently measure—or that it can’t be definitively refuted that some complex combination of factors, none of which independently comes anywhere near explaining the whole experience, and each of which seems entirely lacking in at least some large number of cases, could combine in any number of ways (and no matter how combined still produce the archetypical NDE)—is, for the skeptic, enough. But for those of us who reject the claim that there is sufficient justification for such confidence in this a priori conviction in the first place, it isn’t.

Of course, much of this discussion of underlying neurophysiological correlates of the near death experience rather misses the point—for interpreting these as evidence against the reality of the experience itself merely presupposes the philosophical position in which subjective experience is solely ‘produced by’ the physical activity of the brain. What precisely do we think we’re disproving if we identify the causes of onset of a near death experience? It simply wouldn’t follow from the fact that the trigger of the event is physical that the entire experience is purely physical, any more than it follows from the fact that the trigger of a note sounding out of a piano is the motion of a hand against a key that the entire experience of sound is composed of nothing but hands and keys. A balloon is a separate ‘thing’ from the string tying it to the ground, but the balloon still can’t float away unless the string which holds it is cut—and it doesn’t follow from this that the event of a balloon floating into the air is just nothing other than an event produced by strings whenever, in general, they are cut. Nor would correlations between how high in the air the balloon has risen and how far towards the ground the string has fallen in the milliseconds following the cut prove that the state of the former was a direct function of the latter—even though such correlations will always be found.

Supposing the near death experience did involve perception of something real as a result of consciousness dissociating from the body, surely the mind–body connection is such that it is in response to actual death that consciousness dissociates from the dying brain—and surely there should be some combination of physical events which can be identified as the most proximate correlates of “death.” Hence, in order to sufficiently “debunk” the reality of the near death experience, the skeptic cannot just identify what physiological event corresponds with “death”, the point at which the experience occurs. Not even this goal has actually been empirically met—yet even if it ever should be, more would still be needed to establish that this was in fact anything more than the identification of the trigger which causes consciousness to separate from the brain and undergo the near death experience. Any confident dismissal of the reality of the near death experience based on less than this is, once again, simply unjustified philosophical prejudice—unless and until some compelling general proof of materialism as a whole is put forward.

_______ ~.::[༒]::.~ _______


I’ve argued already that the very existence of a near death experience is surprising on the assumption that subjective conscious experience is either identical to, or a secondary, epiphenomenal byproduct “produced” by, the objectively measurable physical activity of the brain—but the nature of the experience itself is remarkable, too. Consider the effects of psychoactive drugs, delirium, and other “hallucinatory”–type states: the subjective effects of psychedelic drugs like DMT and Ketamine vary tremendously between experiences. Some DMT or Ketamine experiences can resemble the near death experience in certain features, but there is remarkably little consistency between any two experiences with one of these drugs. DMT users encounter everything from “self–transforming machine elves” to “a multi–eyed, multi–serpent” to “an alien wasp” to “dolls in 1890s outfits, life–sized … women in corsets … red circles painted on their cheeks … big breasts and big butts and teeny skinny waists … all whirling around me on tiptoes. The men had top hats, riding on two–seater bicycles.” On Ketamine, John Lilly encountered “[the aliens] who manage Earth Coincidence Control, your local branch of Cosmic Coincidence Control.” Others watch “every other entity within this realm begin to connect to one another, to become one…” or see “one face … that seemed very large and its features were constantly distorting themselves … [it] screamed, at such a volume that is not possible for any earthly speaker….”

There are a handful of variations across cultures in how the details of various stages of the near death experience are ‘filled in’: in the West, NDErs are usually “sent back” while being told that they must return to life to finish carrying out their ‘purpose,’ whereas a number of Indian accounts apparently involve the subject being told there was a bureaucratic mistake and that they aren’t the person whose death was expected. But this is as dramatic as the variations between near death experiences get—and other than that, the core features are remarkably consistent across different times and places (and even here, they still fit the form of the subject being “sent back” within the vision prior to the experience of actually returning to their bodies). Why, if nothing produces the NDE besides a coincidence of converging chemicals, do they not become as varied as experiences with drugs like DMT or Ketamine? Why does no one ever find themselves at a circus watching dancing marionettes, talking to “multi–eyed serpents” or “alien wasps” or “self–transforming machine elves,” or getting screamed at by enormous distorting faces? This comparison is hardly irrelevant given that Ketamine (or a hypothetical Ketamine–like endogenous substance as yet unidentified) and DMT (which is in fact produced in some amounts endogenously within the brain) have both been proposed seriously by skeptics to play a direct role in producing near death experiences.

In light of facts like these, the striking similarities between near death experiences deserve explanation just as much as any dissimilarities do. Dr. Jeffrey Long notes that “The percentage of time that people encounter deceased relatives is extremely high. It was actually 96% in the NDERF study … [and] that’s actually corroborated by another major scholarly study … The important thing is that any other experience of altered consciousness that we experience on earth, dreams, hallucinations, drug experiences, you name it; all of these other types of experiences of altered consciousness, … You’re going to remember the banker that you did business with that day or your family member you said hi to as you were walking into the house. This is what’s in the forefront of consciousness.” It is intriguing, in this vein, to note that the “dreamlets” produced in fighter pilots during periods of unconsciousness induced by loss of cerebral oxygen under rapid acceleration in a centrifuge, studied in 1997 by Dr. James Whinnery, “frequently included living people, but never deceased people….” Would not at least some of the people rendered unconscious by rapid acceleration believe, in the heat of the confusion, that they had died? Would the same “expectations” proposed to explain the near death experience (despite the fact that fear of death, religion, and degree of religiosity have been found to have no predictive power over who will have an NDE) not show up here? (For that matter, it is striking that even amongst the incredibly intense variety of experiences reported by users of DMT, I have never heard a single one which actually paralleled the stages of the “real” near death experience directly. For all the interaction with ‘alien intelligences,’ for one thing, I’ve not once heard a single report of anyone apparently encountering a deceased relative.)

Once again, there is a compelling convergence of evidence: “[P]eople close to death are more likely to perceive deceased persons than do healthy people, who, when they have waking hallucinations, are more likely to perceive living persons (Osis & Haraldsson, 1977/1997). NDErs whose medical records show that they really were close to death also were more likely to perceive deceased persons than experiencers who were ill but not close to death, even though many of the latter thought they were dying (E. W. Kelly, 2001). … in one–third of the cases the deceased person was either someone with whom the experiencer had a distant or even poor relationship or someone whom the experiencer had never met, such as a relative who died long before the experiencer’s birth (E. W. Kelly, 2001).”

If the near death experience simply results from the lucky, surprising convergence of simultaneous chemical coincidences, then correlations like these—and the consistency of the form of the experience in general—are an absolutely astounding, unbelievable coincidence. Not only are we expected to believe that the experiencing subject enters a state of profoundly heightened awareness precisely when his brain activity becomes the most suppressed; not only are we expected to believe that the consistent form of the near death experience is always produced by this complex cocktail of factors despite the fact that the experience can occur in the apparent total absence of any of them and still retain exactly the same essential form, with no one ever reporting the disorganized or chaotic imagery of meeting DMT “machine elves” or the President of the United States or giant, distorted screaming faces or an environment like Blade Runner or the alien managers of “Earth Coincidence Control” or the planet Gallifrey after some particular factor changes; we are also expected to believe that correlations between the depth of the near death experience—even down to details such as how likely deceased persons were to be “encountered”—and the actual proximity to death exist by sheer coincidence. At some point, it just isn’t clear anymore whether the reductionist explanation really would be more “parsimonious” even supposing we could somehow start out with perfectly neutral philosophical presuppositions. The skeptic is left in the position of having to defend an increasingly wide range of utterly ad hoc theoretical factors which are supposed to mix and match ad lib to produce the experience and yet, no matter how they vary or even which of them are absent entirely, still produce almost exactly the same core experience every time (at least so long as drugs are not involved). This is quite simply a far cry from anyone actually having anything like a justified reductionist account of the NDE.

Admitting the possibility that the NDE could be just what it appears to those who experience it to be—that consciousness simply can have experiences while separate from the brain—is not less “parsimonious” than any possible materialist explanation of the experience, even if we were approaching the question from theoretically neutral grounds. “Parsimony” is a relevant consideration against admitting the existence of something ‘new’ when all else is equal; but the more mechanisms one has to add and the more ad lib combinations of them one has to defend in order to avoid admitting that that something ‘new’ is just what it appears to be, the less “equal” things actually are and the less force considerations of “parsimony” have. All else equal, admitting the existence of a new species is not “parsimonious.” Indeed, when the platypus was first discovered, early investigators believed it was a hoax: “It was plausible, [Dr. George] Shaw thought, that some punk had collected the bill of a duck and an otter or mole’s body, then shipped it off from Australia as a joke.” But the more ad hoc hypotheses these investigators had to add to the ‘hoax’ hypothesis to avoid the conclusion that the platypus was nothing other than just precisely what it seemed to be, the less plausible—and “parsimonious”—the ‘hoax’ hypothesis became. To be clear, I don’t claim that the near death experience is exactly like this; but I do claim that it is somewhere much closer to this than it is to, say, the claim that there are “fairies at the bottom of my garden” which have never been observed.

I am reminded of David Chalmers’ statement about interpreting quantum physics: “[P]hilosophers reject interactionism on largely physical grounds (it is incompatible with physical theory), while physicists reject an interactionist interpretation of quantum mechanics on largely philosophical grounds (it is dualistic).” Likewise here: skeptics reject realist interpretations of the near death experience—a “scientifically” observed event—simply because it is dualistic; and yet they reject dualism because it is “unscientific.” Yet, it is apparent that “science” in these statements does not mean “direct scientific observation,” but rather—and much differently—“how we prefer to interpret our scientific observations.” But exactly what are these preferences supposed to be justified by?

When circularity runs this deep, it is clear that something other than the points of the circle are doing all the work of actually holding the circle up. I recall, once again, John Searle’s admission (which I quoted here): “Acceptance of the current [physicalist] views [in philosophy of mind] is motivated not so much by an independent conviction of their truth as by a terror of what are apparently the only alternatives. That is, the choice we are tacitly presented with is between a “scientific” approach, as represented by one or another of the current versions of “materialism,” and an “unscientific” approach, as represented by Cartesianism or some other traditional religious conception of the mind.”

_______ ~.::[༒]::.~ _______

Part 2.

Suppose I know that my niece is in the hospital with a non–life–threatening condition, but I know that she is tied up with tubes that prevent her from leaving the hospital bed. My niece, Jane, has lots of friends; and I am aware as part of my background knowledge of my relationship with her that I don’t know who all of her friends are. Now, suppose that she gives me some piece of information about the hospital that she couldn’t have gotten herself, given that she has been strapped in place without moving: say, that there is a shoe sitting on a ledge outside the window on a different floor of the hospital. And suppose Jane tells me that she found this out because one of her friends, Joy, came by and told her about it.

No one has a direct record of Joy entering the hospital—but she might have simply made her way in without signing her name. I don’t know who Joy is, so I can’t independently verify (as yet) that she was in fact at the hospital that day—but I already realize I simply don’t know who all of Jane’s friends are in the first place, so I clearly can’t use this as grounds for ruling out her existence. Aren’t I justified in believing her? Unless (and until) I can independently prove the truth of some alternative means by which Jane actually came by this information, I think it is obvious that the answer is a clear “yes; of course.” Any ordinary individual would, without any hesitation, come to accept that a friend named Joy must have stopped by the hospital.

Suppose that rather than it being I who visited Jane, it was my brother Joe who visited her in the hospital and then relayed this story to me second–hand. Even then, don’t I still have adequate reason to accept on the basis of this information itself that Jane must have a friend named Joy, and that Joy must have come by the hospital and mentioned this random detail to my niece, whether I can independently verify these claims or not—unless I can independently refute them, or I find overwhelmingly good reason to conclude that my brother positively must be lying? Once again, I think it is obvious that the answer is a clear “yes; of course.” Most people would consider it flagrantly absurd if I were to insist that everyone involved must absolutely and positively disprove even the bare possibility that a worker at the hospital, or one of Jane’s friends whose names I already know, could even conceivably have relayed this information to Jane instead before I would simply accept that there must be a friend named Joy I haven’t met yet. Such strict standards would lead me to deny the existence of friends Jane actually does in fact have, and the occurrence of events which actually did in fact take place, on a regular basis.

The key factors in my evaluation of the truth of what I am told in this story all clearly relate to my background knowledge about the factors involved in the situation. Relevant background knowledge here includes my belief that Jane likely has a number of friends I don’t know about, my belief that Jane and Joe are generally honest people who have no reason to lie to me, and the belief that it is possible sometimes for people to visit a hospital without necessarily leaving an official record of their visit. Or that people sometimes go by nicknames that are not related to their legal names (so that “Joy’s” visit might have in fact been recorded, but under a different name—perhaps her real name is “Matilda,” and she goes by something different quite simply because she hates the name).

_______ ~.::[༒]::.~ _______


But now, suppose that rather than telling me that a friend named Joy came by the hospital and told her about the shoe sitting on a ledge on a different floor of the hospital, Jane tells me that she went out–of–body during a near–death experience during her operation in which she spotted the location of the shoe (or suppose that Joe relays to me second–hand that Jane made this claim). As before, I can neither positively prove nor positively refute the claim that this actually occurred. How, then, should I evaluate the likelihood that this is true, and what (if anything) makes this situation different from the one before?

Many investigators would hold this claim to a tremendously higher standard than they would the claim that someone named Joy had visited and relayed this information to Jane—they would say that if it’s even conceivable that Jane could have obtained this information some other way (or that Joe might be lying to me), I shouldn’t even consider believing it for a second, and that I would be absolutely foolish to do so. What (if anything) justifies that being the case here if it is not the case when Jane tells me that it was Joy (whom I have likewise never seen for myself) who told her about the shoe?

This isn’t a statement that the skeptics themselves will protest: skeptics quite simply do not put this possibility on equal grounds with the alternatives. When skeptics address near–death experiences, they generally don’t accept a need to prove that some particular alternative explanation is true—they only see the need to show that an explanation of some other kind is conceivable; whereas the proponent of the NDE explanation is expected to definitively prove that the NDE explanation is the only possible explanation for what took place. If this is justified, it can only be justified because relevant background considerations justify it. And what are these background considerations?

Once again, the background consideration here is the philosophical conviction that the blind motion of the physical processes of the brain produces subjective conscious experiences merely as a secondary, epiphenomenal byproduct. I have given my reasons for considering this conviction not only misguided but preposterous repeatedly. And yet, the only justification ever actually given in attempted support for it is the fact that, at least in ordinary circumstances, there are correlations between the objective, quantifiable state of the physical brain and the subjective, qualitative state of the conscious subject’s experience—and these are exactly the correlations which, we have seen, appear to fall apart in the case of the near death experience.

There is a correlation between the event of flipping a light switch and the event of a light bulb turning off and on—but it simply doesn’t follow that the existence of the light bulb—or the existence of light itself—is a product of (much less identical to) the motion of light switches. If we shatter a glass prism, the visible light spectrum will disappear; but it doesn’t follow that the structure of the prism is identical to the visible light spectrum—nor even that the prism “produces” it: the prism simply allows what is already present within the white light which enters it to become visible. Pressing the keys on an organ will occasion our hearing sounds, but the air which is actually responsible for these sounds is neither identical to the activity of nor strictly produced by the keys—the keys work by releasing air which is already present inside the air–chamber. The argument developed across thousands of words throughout this series has been that the idea that consciousness and the brain are identical is plainly false without either a radical eliminativist redefinition of consciousness (false for one set of reasons) or a radical panpsychist redefinition of matter (false for another set of reasons); and that the idea that consciousness is produced by the brain would have to entail epiphenomenalism (false yet again for its own set of reasons). To readers who remain unconvinced, I can only insist that these issues cannot reasonably be summarized, and that it would take careful consideration of the points discussed across this series to understand why I think this conclusion is unavoidable: not only are there plenty of viable alternatives which account for the interrelationship between physical states of the brain and subjective states of consciousness every bit as effectively as the “identity” or “productive” theories; the “identity” and “productive” theories are, in my view, definitively not even potentially viable accounts of that relationship at all.

In a lecture presented to Harvard University in 1898 (which I previously excerpted from here), William James said: “Suppose … that the whole universe of material things—the furniture of earth and choir of heaven—should turn out to be a mere surface–veil of phenomena, hiding and keeping back the world of genuine realities. … Suppose … that the dome, opaque enough at all times to the full super–solar blaze, could at certain times and places grow less so, and let certain beams pierce through into this sublunary world. … Only at particular times and places would it seem that, as a matter of fact, the veil of nature can grow thin and rupturable enough for such effects to occur. But in those places gleams, however finite and unsatisfying, of the absolute life of the universe, are from time to time vouchsafed. … Admit now that our brains are such thin and half–transparent places in the veil. What will happen? Why, as the white radiance comes through the dome, with all sorts of staining and distortion imprinted on it by the glass, or as the air now comes through my glottis determined and limited in its force and quality of its vibrations by the peculiarities of those vocal chords which form its gate of egress and shape it into my personal voice, even so the genuine matter of reality, the life of souls as it is in its fullness, will break through our several brains into this world in all sorts of restricted forms, and with all the imperfections and queernesses that characterize our finite individualities here below.

According to the state in which the brain finds itself, the barrier of its obstructiveness may also be supposed to rise or fall. It sinks so low, when the brain is in full activity, that a comparative flood of spiritual energy pours over. At other times, only such occasional waves of thought as heavy sleep permits get by. And when finally a brain stops acting altogether, or decays, that special stream of consciousness which it subserved will vanish entirely from this natural world. But the sphere of being that supplied the consciousness would still be intact; and in that more real world with which, even whilst here, it was continuous, the consciousness might, in ways unknown to us, continue still. You see that, on all these suppositions, our soul’s life, as we here know it, would none the less in literal strictness be the function of the brain. The brain would be the independent variable, the mind would vary dependently on it. But such dependence on the brain for this natural life would in no wise make life behind the veil impossible.”

One way or another, the experiences had by those who approach death are perfectly compatible with James’ picture.

_______ ~.::[༒]::.~ _______


One of the most intriguing elements of the near death experience is the body of directly corroborated reports that events like the one just described actually do happen. The story just told mirrors one directly confirmed in a book published in 1995 by a first–hand witness: the social worker Kimberly Clark, who—initially skeptical—decided to look for that shoe, so as to placate the patient, only to be surprised to find a blue shoe in exactly the condition which “Maria” had claimed it was in. (Her report of the event can be read here: according to her direct testimony, the shoe was not visible from the ground, and there was no way “Maria”—“literally plugged into the wall,” she writes—could have moved. And it seems horribly cynical to resort to arguing that Maria must have seen the shoe on the ride in, and saved the observation for exploitation later.) Later, Kimberly Clark became a co–founder of the Seattle division of the International Association for Near Death Studies (IANDS).

While it was true in the past that few cases of this kind were particularly well–corroborated, today there are multiple cases where first–hand witnesses have recorded their observations of such instances of “veridical perception” in print, describing the transformation of their skepticism and surprise into conviction; enough that, were this an ordinary event, we would long since have accepted its reality. The only remaining reason for skepticism is an a priori designation of the probability of such an event as incredibly low, on the basis of nothing other than the philosophical assumption that conscious experience can be only the epiphenomenal byproduct of physical brain activity and nothing more. In another case of cardiac arrest discussed by Pim van Lommel (here), a subject was discovered lying in a meadow at least a full half an hour prior to his arrival at the emergency room, in a state of coma and cyanosis. Yet, a CCU nurse reported that, days later, he was able to provide accurate descriptions of many of the specific, unexpected circumstances of his transfer to the hospital.

As van Lommel presents his report: “During night shift an ambulance brings in a 44–year old cyanotic, comatose man into the coronary care unit. He was found in coma about 30 minutes before in a meadow. When we go to intubate the patient, he turns out to have dentures in his mouth. I remove these upper dentures and put them onto the ‘crash cart.’ After about an hour and a half the patient has sufficient heart rhythm and blood pressure, but he is still ventilated and intubated, and he is still comatose. He is transferred to the intensive care unit to continue the necessary artificial respiration. Only after more than a week do I meet again with the patient, who is by now back on the cardiac ward. The moment he sees me he says: ‘O, that nurse knows where my dentures are.’ I am very, very surprised. Then the patient elucidates: ‘You were there when I was brought into hospital and you took my dentures out of my mouth and put them onto that cart, it had all these bottles on it and there was this sliding drawer underneath, and there you put my teeth.’ I was especially amazed because I remembered this happening while the man was in deep coma and in the process of CPR. It appeared that the man had seen himself lying in bed, that he had perceived from above how nurses and doctors had been busy with the CPR. He was also able to describe correctly and in detail the small room in which he had been resuscitated as well as the appearance of those present like myself.” (You can read the full interview here, and see a response to a skeptic’s criticisms here).

In yet another case reported in a video interview with cardiac surgeon Dr. Lloyd Rudy, a patient once again reported accurate and unusual details of events occurring prior to and during resuscitation efforts: “it was close to 20, 25 minutes that this man recorded no heartbeat, no blood pressure, and the echo showing no movement of the heart—just sitting. And all of a sudden we looked up, and this surgical assistant had just finished closing him, and we saw some electrical activity. Pretty soon, the electrical activity turned into a heartbeat. Very slow, 30 or 40 a minute … he recovered. And for the next ten days, two weeks, all of us went in and were talking to him about what he experienced, if anything. And he talked about the bright light … but the thing that astounded me was that he described that operating room, floating around, and saying ‘I saw you, and [the other doctor] in the doorway with your arms folded, talking; I didn’t know where the anesthesiologist was, but he came running back in; and I saw all of these post–its sitting on this TV screen’—and what these were, any call I got, the nurse would write down who called and the phone number, … and the next post–it would stick to that post–it … he described that. There’s no way he could have described that before the operation—because we didn’t have any calls.”

In addition to direct studies of recall of NDE memories, cases like these all go a long way to discredit the skeptical counterclaim that near death experiences don’t really happen during periods of clinical death, but are only reconstructed afterwards. Ring & Lawrence (1993) record three other cases of “veridical perception” which were corroborated by first–hand witnesses. Bruce Greyson investigated yet another case where a patient described one of the surgeons “flapping his arms as if trying to fly.” As he summarizes: “Both the surgeon and the cardiologist in this case confirmed that … to keep his hands from touching any surface between the time he “scrubs in” and the time he actually begins the surgery, he has developed the habit of holding his hands against his chest and pointing with his elbows to give instructions to other persons in the operating room. The cardiologist confirmed that Mr. Sullivan had described this unusual behavior to him shortly after regaining consciousness following the surgery.” [3]

One of the only attempts to study these reports directly was a study performed by Michael Sabom in 1982. Initially a skeptic inspired to investigate by Raymond Moody’s 1975 Life After Life, Sabom took 32 patients who reported out–of–body perceptions during near death experiences, and compared them to a control group of 25 patients who had had one or more episodes of cardiac arrest without a near death experience. He asked the NDE group to describe their out–of–body perceptions, and compared these accounts to the control group’s attempts to describe their resuscitations. If the NDE group were no more accurate in their descriptions than the control group, this would lend plausibility to the idea that these accounts could have been reconstructions produced after the fact, rather than truly veridical perceptions at the time supposed.

The results? Whereas 20 out of the 23 who attempted the task in the control group made at least one major error, no members of the NDE group did—and furthermore, 6 members of the NDE group accurately recorded specific unusual details, some of which were peculiar to that patient’s own personal case. For example, one man who developed ventricular fibrillation described how a nurse picked up “them shocker things” and “touched them together,” before “everybody moved back away from it.” As Sabom explains (p. 98), rubbing the paddles together to lubricate them and then standing back to avoid being shocked is a common procedure. Others talked about which family members were or weren’t in the waiting room, or the type of gurney that was used to wheel them into the hospital. A nurse, Penny Sartori, whose experiences over 17 years working in intensive care units inspired her to turn to research on the near death experience (for which she was awarded a Ph.D.), replicated his findings and recorded the results in her 2008 monograph, The Near–Death Experiences of Hospitalised Intensive Care Patients: A Five Year Clinical Study.

In a 2009 study recorded in The Handbook of Near Death Experiences, which she published with Bruce Greyson, Janice Miner Holden finds that of 93 cases of “veridical perception” reported in the literature on near death experiences, 40 could be verified as corroborated by an independent witness; 40 were reported by the experiencer to have been corroborated by an independent witness who was no longer available; and only 13 relied solely on the experiencer’s report. Furthermore, of all of these cases, 86 were found to be completely accurate, 6 were only partially corroborated or had some errors; and only the one remaining case was completely inaccurate.

There may not be the type of evidence here that counts as “proof” of the kind required to completely convince a skeptic who demands that absolutely no other conceivable explanation be even hypothetically possible before accepting that the realist interpretation of the near death experience could be a reasonable conclusion. (Nothing could actually meet this burden to begin with—as a last resort, a skeptic who is determined enough can simply dismiss the validity of every report, or every witness’ credibility or memory.) But there is as much evidence as we could possibly expect to have, given the extent to which the phenomenon has actually been capable of being studied at all—and it is certainly enough to shift things even farther in the direction of putting the skeptic in a “platypus is a hoax”–type position, as we continue to add more and more evidence which the skeptic must find some way to explain away despite the fact that the realist interpretation obviously unifies all of it in a single explanation.

_______ ~.::[༒]::.~ _______


Individuals who are blind from birth apparently do not have visual dreams. A 1999 review of 372 dreams in 15 individuals at the University of Hartford confirmed this, while finding that those who go blind before the age of 5 are mostly indistinguishable from the blind from birth, whereas those who lose their sight around the age of 7 or later “continue to experience at least some visual imagery, although its frequency and clarity often fade with time” (those who lose their sight between the ages of 5 and 7 can go either way).

Yet, in the book Mindsight, researchers Kenneth Ring and Sharon Cooper document their studies on experiences in the blind—who report near–death experiences exactly like those reported by the sighted, apparently with the same visual content. The authors quote from a recorded interview between one of their subjects—Vicki Umipeg—and another researcher, Greg Wilson: (GW: “Could you see anything?”) Vicki: “Nothing, never. No light, no shadows, no nothing, ever.” (GW: “So the optic nerve was destroyed to both eyes.”) Vicki: “Yes, and so I’ve never been able to understand even the concept of light.” As she described her experience: “I was pretty thin then. I was quite tall and thin at that point. And I recognized at first that it was a body, but I didn’t even know that it was mine initially. Then I perceived that I was up on the ceiling, and I thought, ‘Well, that’s kind of weird. What am I doing up here?’ I thought, ‘Well, this must be me. Am I dead?’ I just briefly saw this body, and … I knew that it was mine because I wasn’t in mine.”

She continued: “I think I was wearing the plain gold band on my right ring finger and my father’s wedding ring next to it. But my wedding ring I definitely saw … That was the one I noticed the most because it’s most unusual. It has orange blossoms on the corners of it. This was the only time I could ever relate to seeing and to what light was, because I experienced it.”

It seems strange to suppose that reductions in the intensity of brain activity might be accompanied at some times by a reduction in the intensity of subjective experience, and at other times by an increase; or that some damage to the eyes might damage or obliterate the subjective experience of sight, whereas death should restore it—but a dualistic interpretation of the mind–body relationship can accommodate correlations in both of these directions, whereas a physicalist interpretation requires that they be in only one direction at all times. Suppose I am sitting inside of a theater, with a screen interpreting visual data from outside the building I am in, the speakers interpreting auditory data from outside, and so on: some subtle damage to the machinery of the theater’s visual processing system might destroy my ability to “see what is outside” completely—and yet, bashing down one of the walls would, nonetheless, influence my capacity to “see what is outside” in the opposite direction.

In a 1997 publication in the Journal of Near-Death Studies, Cooper and Ring provide a more succinct presentation of their research: “Of our 21 NDErs, 15 claimed to have had some kind of sight, three were not sure whether they saw or not, and the remaining three did not appear to see at all. All but one of those who either denied or were unsure about being able to see came from those who were blind from birth, which means that only half of the NDErs in that category stated unequivocally that they had distinct visual impressions during their experience. Nevertheless, it is not clear by any means whether those respondents blind from birth who claimed not to have seen were in fact unable to, or simply failed to recognize what seeing was. For instance, one man whom we classified as a nonvisualizer told us that he could not explain how he had the perceptions he did because “I don’t know what you mean by ‘seeing.’”

As would be expected if these subjects were actually experiencing sight for the first time, even those who readily classified what they experienced as “sight” expressed bafflement—or even fear. Vicki Umipeg stated to interviewers: “I had a real difficult time relating to [sight] because I’ve never experienced it. And it was something very foreign to me. … Let’s see, how can I put it into words? It was like hearing words and not being able to understand them, but knowing that they were words. And before you’d never heard anything. But it was something new, something you’d not been able to previously attach any meaning to.” Ring notes that she later used the word “frightening” to describe the adjustment, and records that she described her ability to distinguish between “different shades of brightness,” and could only wonder if this was what sighted people mean by “color.”

Not only does the perceptual experience of sight occur during near death experiences in the blind; so, too, do cases of apparently veridical perception. They write: “[I]n at least some instances, we are able to offer some evidence, and in one case some very strong evidence, that these claims are in fact rooted in a direct and accurate, if baffling, perception of the situation.” After discussing another fascinating case that turned out to lack perfect verification (but which I think one could still reasonably believe—the witness who was located simply could not recall the pattern on a piece of clothing the patient had identified, and so could not confirm the report), they move on to the case of “Nancy” (see p. 22).

Nancy “underwent a biopsy in 1991 in connection with a possible cancerous chest tumor. During the procedure, the surgeon inadvertently cut her superior vena cava, then compounded his error by sewing it closed, causing a variety of medical catastrophes including blindness, a condition that was discovered only shortly after surgery when Nancy was examined in the recovery room. She remembers waking up at that time and screaming, “I’m blind, I’m blind!” Shortly afterward, she was rushed on a gurney down the corridor in order to have an angiogram. However, the attendants, in their haste, slammed her gurney into a closed elevator door, at which point the woman had an out–of–body experience. Nancy told us she floated above the gurney and could see her body below. However, she also said she could see down the hall where two men, the father of her son and her current lover, were both standing, looking shocked. She remembers being puzzled by the fact that they simply stood there agape and made no movement to approach her. Her memory of the scene stopped at that point.”

They continue: “In trying to corroborate her claims, we interviewed the two men. The father of her son could not recall the precise details of that particular incident, though his general account corroborated Nancy’s, but her lover, Leon, did recall it and independently confirmed all the essential facts of this event. …  It should be noted that this witness has been separated from our participant for several years and they had not even communicated for at least a year before we interviewed him. Furthermore, even if Nancy had not been totally blind at the time, the respirator on her face during this accident would have partially occluded her visual field and certainly would have prevented the kind of lateral vision necessary for her to view these men down the hall. But the fact is, according to indications in her medical records and other evidence we have garnered, she appeared already to have been completely blind when this event occurred. …”

And then, quoting from Leon’s account: “I was in the hallway by the surgery and she was coming out and I could tell it was her. They were kind of rushing her out. … I saw people wheeling a gurney. I saw about four or five people with her, and I looked and I said, ‘God, it looks like Nancy,’ but her face and her upper torso were really swollen about twice the size it should have been. I was still in a state of shock. I mean, it had been a long day for me. You’re expecting an hour procedure and here it is, approximately 10 hours later and you don’t have very many answers. … When I first saw her she was probably, maybe about 100 feet and then she went right by us. … somebody was, like, trying to get into the elevator at the same time and there was some sort of a ‘Oh, I can’t get in, let’s move this over a little bit,’ kind of adjusting before they could get her into the elevator. But it was very swift … She was just really swollen. She was totally unrecognizable. I mean, I knew it was her but—you know, I was a medic in Vietnam and it was just like seeing a body after a day after they get bloated. It was the same kind of look.”

They conclude the paper from pp.24–46 with a discussion of implications, asking whether apparent sight during NDEs in the blind could be accounted for by some other means, such as blindsight: “First of all, patients manifesting [blindsight] typically cannot verbally describe the object they are alleged to see, unlike our respondents who, as we have noted, were usually certain about what they saw and could describe it often without hesitation. In fact, a cortically blind patient, even when his or her object identification exceeds chance levels, believes that it is largely the result of pure guesswork. Such uncertainties were not characteristic of our respondents. … perhaps most crucially of all, blindsight patients, unlike our respondents, do not claim that they can ‘see’ in any sense. As Humphrey wrote: ‘Certainly the patient says he does not have visual sensation. …Rather he says, ‘I don’t know anything at all—but if you tell me I’m getting it right I have to take your word for it’’ (1993, p. 90). This kind of statement is simply not found in the testimony of our respondents who, on the contrary, are often convinced that they have somehow seen what they report. Thus, the blindsight phenomenon, however fascinating it may be in its own right, cannot explain our findings.”

In any case, whatever alternative mechanism one might possibly propose for these examples, all participants describe the experience as being radically unlike anything else they have ever experienced. Vicki, for example, explicitly says that there is “No similarity, no similarity at all” between the sight she experienced during her near death experience and her dreams (which she describes as containing no visual imagery). Whatever these mechanisms might be, why should they become active only when the blind patient is approaching death and their brain is in the most disrupted, disorganized state it can be besides actual, irreversible death? Once again, the skeptic can insist on finding loopholes—no matter what premise an individual wants to hold to, if he intends to hold to it against all odds, then in many cases no argument in principle will be capable of convincing him—he can simply modus tollens the premises of any argument meant to defeat that premise. Regardless of whether what we have in these cases is “proof” in the requisite sense, I think it is clear that we have still yet more evidence that renders belief in the reality of these experiences still yet more reasonable—for, once again, the skeptic must produce still yet more ad hoc hypotheses to explain away the platypus, whereas the conclusion that these experiences are simply what they appear to be can easily account for all of it at once. 
And on both that basis, as well as on the basis of the overwhelming philosophical problems facing any attempt—which so far has, even by the admission of materialists themselves [2], come nowhere near completion in the first place—to reduce consciousness to mechanism in principle, I can only conclude that belief in the reality of the near death experience is entirely justifiable and reasonable, whether or not every imaginable alternative explanation can be definitively proved categorically inconceivable—as this is simply an illegitimate standard to impose on the question of whether a conclusion can be considered justifiable and reasonable.

_______ ~.::[༒]::.~ _______


One final supplementary point: suppose the near death experience occurs precisely when it appears to. We have good reason to believe that it does in the existence of cases of veridical perception referenced above—and these would count as compelling evidence that the experience occurs at precisely the time it appears to even if the experience were to turn out to be a pure hallucination of some sort after all; for at the very least, the hallucination would be occurring, and somehow incorporating these perceptual details, at the time of clinical death and not after. Consider the way near death experiencers so widely report being deliberately “sent back” by the figures they encounter in the experience before it’s over. If the near death experience is simply the ‘hallucinatory phantasmagoria’ of a dying brain, how does it know to build this into the narrative of the vision from a state of severely impaired near–unconsciousness in advance of the actual resuscitation? Every single person reading this knows that even our dreams don’t typically end through any sequenced narrative marking our transitioning into wakefulness—they usually just end. At most, we might be familiar with falling asleep in a vehicle and watching our dream incorporate something like a face–first trip on a branch in the woods as we snap awake in response to riding over a particularly jarring bump. But few people ever have an experience of anything like the characters in their dream explaining to them in an elaborate narrative how it’s approaching time to wake up. And this, despite the fact that (1) our brains are not severely physiologically impaired during dreaming, and (2) the process of waking is usually more or less led and managed by the same brain conducting the dream—so it should be far more capable here than in the case of resuscitation from death of coordinating the contents of the dream with reality in advance. 
How, then, should the brain suddenly acquire the ability to synchronize its hallucinations with reality so far in advance—with no one reporting that they came to consciousness out of a near death experience before the figure could actually finish sending them back into their bodies, mid–sentence?

_______ ~.::[༒]::.~ _______

[1] Best Evidence: 2nd Edition by Michael Schmicker, which cites John C. Gibbs, “Moody’s versus Siegel’s interpretation of the near-death experience: An evaluation based on recent research.”

[2] Paul Churchland: “Consciousness is almost certainly a property of the physical brain. The major mystery, however, is how neurons achieve effects such as being aware of a toothache or the smell of cinnamon. Neuroscience has not reached the stage where we can satisfactorily answer these questions.” 

Francis Crick: “What remains is the sobering realization that our subjective world of qualia—what distinguishes us from zombies and fills our life with color, music, smells and other vivid sensations—is possibly caused by the activity of a small fraction of all of the neurons in the brain, located strategically between the inner and outer worlds. [But] how these act to produce the subjective world that is so dear to us is still a complete mystery.”

Even though these authors profess confidence (and quite definitely resounding confidence elsewhere in other writings) that consciousness is “produced” by the processes of blind physical mechanism in the brain, they confess—in more honest moments—that they have no idea “how” this could be the case. Leaving aside the arguments I’ve stood by throughout this series that this very concept is simply confused in principle, how does someone justify claiming to know that an empirical claim is true without having any idea “how”? And the analogy to the dualist’s claim that mind–body interaction takes place, despite our not knowing “how,” is unfounded for reasons I have explained.

[3] I need donations of my own here just to try to survive—but if you’re interested in finding out more about these cases, consider supporting the effort to translate Titus Rivas (et al.)’s work compiling more than 80 new verified cases of corroborated perception into English from Dutch at the International Association of Near Death Studies (IANDS).


Consciousness (XII) — From Chalmersian “Laws” to Transmigration

_______ ~.::[༒]::.~ _______


Questions about the ontological status of “laws” feature prominently in many debates between atheistic and theistic philosophers. In his 1859 Treatise on Theism, Francis Wharton (the author of Wharton’s Rule of Concert of Action which states that guilt of conspiracy to commit a crime requires more parties than are necessary to commit it) writes: “The existence of a comprehensive and beneficent system of law, in fact, is the strongest evidence of the existence of a Divine lawmaker… There is a vital distinction between a causal law, i.e. one that rules the genesis of events, and an empirical law, one that merely registers their occurrence. There is a vital distinction, for instance, between the time–tables issued from period to period by the officers of an extended railroad and the systematized observation of running by even a long and accurate series of travelers. The records of the latter are open to error… let a traveler rely on the latter, and he will find that though in a mere statistical point of view the results, like empirical laws in general, are interesting as helps to the memory, and useful as the base for business tables, they are in themselves of no permanent and absolute value as indications of the future. … The results of empirical observation are, therefore, incapable of becoming permanent laws for the future.”

Emanuel Haldeman–Julius (founder of Haldeman–Julius Publications) and Rev. Burris Jenkins debated the question in a 1930 debate titled “Is Theism a Logical Philosophy?” In the negative argument, Haldeman–Julius writes: “The fundamental error is found in the theist’s habit of confusing a human law with a natural “law.” A legislature passes a law saying that after a certain date it shall be illegal to behave in a certain way, to have liquor, for instance. If you break this law, and are not caught, nothing happens except the usual next morning headache. If you are caught, you may be sent to the penitentiary. Or let us say that the people make up their minds to break the law so flagrantly that enforcement falls down and the law is either ignored or repealed. That is a human law. That implies a lawmaker, of course. But it is treacherous logic to say the “laws” of nature are the result of the will of a lawmaker. The scientific use of the word “law” as applied to nature means only this: things in nature act in certain ways — their movements are Uniform — and when you use the word “law” you merely describe how things are observed to conduct themselves.”

In the modern day, the theistic philosopher Keith Ward writes: “The existence of laws of physics does not render God superfluous. On the contrary, it strongly implies that there is a God who formulates such laws and ensures that the physical realm conforms to them.” Bede Rundle, in an atheistic response in ‘Why There is Something Rather than Nothing,’ writes that: “[I]t is wrong to regard laws of Nature as basic. That status goes to whatever it is—the characteristics and behaviour of particles, gases, and so forth—that the laws codify. Indeed, the notion of a natural or physical law, or at least the use to which this is put, is often questionable. Not because there is no place for the notion, but because those who insist on the reality of such laws tend to model them on legal laws, as if the natural variety likewise enjoyed an independence of the actual behaviour of individuals, to the point even of antedating and dictating that behavior. … it is not as if God might rewrite the laws of Nature and inanimate things, being now differently governed, would thereupon proceed to behave differently—though just some such view was in no way foreign to the seventeenth–century conception of laws of Nature as divine commands. With legal laws there is an intelligible relation between the law and behaviour: understanding a law, and having a motive to act in accordance with it, we act. Substitute inanimate bodies for comprehending agents and we sever any such intelligible tie; the law is in no sense instrumental in bringing about accord with it. … What would God have to do to ensure that atoms, say, behave the way they do? Simply create them to be as they in fact are. Atoms having just those features which we currently appeal to in explaining the relevant behaviour, it does not in addition require that God formulate a law prescribing that behaviour.”
Again, the point is addressed by David Marshall Brooks, in The Necessity of Atheism: “A “law” of nature is not a statute drawn up by a legislator; it is the interpretation and the summation which we give to the observed facts. The phenomena which we observe do not act in a particular manner because there is a law; but we state the “law” because they act in that particular manner. [So] it cannot be said that the laws of nature are the result of a lawmaker….”

John Lennox writes in God’s Undertaker: Has Science Buried God? that: “We certainly expect to be able to formulate theories involving mathematical laws that describe natural phenomena, and we can often do this with astonishing degrees of precision. However, the laws that we find cannot themselves cause anything. Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” He even makes the interesting note that “the much maligned William Paley” recognized the point: “It is a perversion of language to assign any law, as the efficient, operative cause of any thing. A law presupposes an agent; for it is only the mode, according to which an agent proceeds: it implies a power; for it is the order, according to which that power acts. Without this agent, without this power, which are both distinct from itself, the law does nothing; is nothing.”

Likewise, C. S. Lewis writes in Chapter 1 of Mere Christianity that “When you say that falling stones always obey the law of gravitation, is not this much the same as saying that the law only means ‘what stones always do’? You do not really think that when a stone is let go, it suddenly remembers that it is under orders to fall to the ground. You only mean that, in fact, it does fall. In other words, you cannot be sure that there is anything over and above the facts themselves, any law about what ought to happen, as distinct from what does happen. The laws of nature, as applied to stones or trees, may only mean ‘what Nature, in fact, does.’” In the atheist Richard Carrier’s response to Victor Reppert’s presentation of one of C. S. Lewis’s arguments in C. S. Lewis’s Dangerous Idea, he writes: “The ‘law’ of gravity … ‘is’ in every place and time where the physical conditions that manifest gravity exist.” A thousand more examples await anyone who searches a phrase like “laws lawgiver atheism Christianity.” Keep this point in mind and try not to get lost in the chaos of changing topics—the reason for it will eventually become clear: “laws” are only our descriptions of what actually existing things do. By and large, the atheist’s only option is to say that the actually existing things whose behavior the laws describe are physical objects and forces themselves. The theist can (though does not necessarily) adopt a kind of idealism and say that the “agent” responsible for the law is in fact the ordering power of the mind of God—and not the intrinsic properties of physical objects and forces themselves—but to avoid this possibility, the atheist’s only option—again—is to contend that it is the intrinsic properties of physical objects and forces themselves which are actually directly responsible for the behaviors which we label, after the fact, in the terminology of “laws.”

_______ ~.::[༒]::.~ _______


This won’t, perhaps, be the most efficient way for me to make the following point, but I’d like to illustrate it this way for a reason. Consider, for a moment, the Kalām cosmological argument. Kalām attempts to proceed from the premise that the Universe began to exist (supported by empirical premises acquired from Big Bang cosmology, and philosophical premises regarding paradoxes implying that actual infinites—such as an actually infinite past—cannot possibly exist in reality) and the premise that anything that begins to exist must have a cause, to the conclusion (supported by a variety of further considerations) that the cause of the Universe must be “God” (a changeless, timeless, singular disembodied mind). What are the most plausible ways out of this argument for the atheist?

Some will simply reject the argument that everything that begins to exist must have a cause—usually, this is done by arguing that the initial coming into existence of the Universe is a special case because, as Gott, Gunn, Schramm, and Tinsley write: “…time [itself was] created in that event … [so] it is not meaningful to ask what happened before the big bang; it is somewhat like asking what is north of the North Pole.” For my part, I simply cannot bring myself to find this approach even slightly plausible. It is incoherent to ask what is north of the North Pole, but it is not incoherent to ask what is above the North Pole—there is something above it even if there isn’t something “north” of it, and if someone were to ask what was “north” of the North Pole, this would, in all probability, be what they actually meant. Notice that in this statement, Gott, Gunn, Schramm and Tinsley themselves cannot escape causal language: “time,” they write, “[was] created.” While it should go without saying that the ways in which we happen to use language don’t necessarily entail any particular philosophical truths, I think in this case it reflects the fact that we simply can’t coherently think in any other but causal and temporal terms—and I can only infer that this is because in this case the idea simply is as incoherent as it would be to say that the North Pole exists with nothing “above” it (not even space?). 
Critics of this approach make the further point that we can easily see that it isn’t a logical necessity that all causes precede their effects “within time”—for example, Kant famously asked his readers to imagine a Universe where a heavy ball sat on a cushion from the moment that Universe came to exist: the ball would be the cause of the depression in the cushion even though the ball (or the pressure from the ball) did not precede the cushion (or the depression within the cushion) in any temporal sequence of events within that Universe—so that even if it were meaningless to ask what came “before” the Big Bang, this still wouldn’t render it meaningless to ask what caused it.

More plausibly, then, the atheist can attack the premise that an infinite past is either logically impossible or scientifically ruled out by modern cosmology. For example, on some multiverse cosmologies, a quantum void of some sort could be the “heart” of all Existence, the changeless center–point from which every contingent, changing universe is born—through quantum physical mechanisms rather than the intentional conscious acts of a God. (Note as well that the very fact that these hypotheses exist already shows that it is not unmeaningful to ask what preceded the Big Bang. We do not know that nothing came before the Big Bang—there are competing hypotheses, and the very idea that the “Big Bang was the objective beginning” scenario should be preferred itself requires the assumption that the notion is coherent to begin with. Asking what happened before the Big Bang is more like asking what is above the highest building we can see than it is like asking what is “north” of the North Pole—if the answer is “nothing,” then that is surprising, and someone who gives this answer has as much of a burden to advance the truth of the claim with a proactive argument as anyone else. If the Big Bang is truly the first moment of “objective time,” then asking what came before it may be like asking what is “north” of the North Pole—but it would have to be demonstrated that we cannot, for that, nonetheless coherently ask the equivalent of whether there is nonetheless something “above” the North Pole. 
We do not know that the Big Bang is truly the first moment of “objective time”—indeed, it is hard to see what empirical discovery could ever qualify as confirming absolutely that we know we’ve found the first moment of Time Itself—and part of the very question of whether the inference to that interpretation of our historical–cosmological knowledge is viable requires the premise that a state of affairs where there is nothing “above” a given point in space—nothing “before” a given point in time—is a coherent notion in the first place. If we have a priori reason to think that it is not, then that a priori consideration provides a constraint against what an accurate description or explanation will have to look like in principle.) Let me emphasize that nothing peculiar rests on this being my personal perspective on Kalām—I simply want to use it for an analogy in a point I will tie in much later.

So in any case, look what has happened here: if we take this approach, then we have concluded that the physical universe itself must be eternal in order to escape the need to explain its coming–to–be in a way that might entail the need to account for it with reference to something non–physical—even though we cannot confirm its eternality (any more than we could confirm its finitude) ‘empirically.’ This is, in my view, a legitimate way of reasoning—and I think if you reject it, then you can’t escape the force of the theistic conclusions that would otherwise follow.

Once again, keep this point in mind for later as we now move on and try not to get lost in seeming chaos as the subject once again proceeds through another quite drastic change: the most plausible route for the atheist through the Kalām cosmological argument (in my estimation) is to insist that the beginning of time simply doesn’t need to be accounted for with some further explanation because it had no beginning—time is eternal, and the past is infinite. Even if my estimation of this argument is wrong, the analogy will still be relevant, because it at the very least could have come out that this would have been the most reasonable way to think about Kalām.

_______ ~.::[༒]::.~ _______


Perhaps the single biggest problem with Chalmers’ philosophy is that it entails epiphenomenalism. My arguments will support the conclusion that a much more substantialist and interactionist view of consciousness than Chalmers’ is the only way to avoid these implications. Chalmers’ approach to getting out of the threat that epiphenomenalism poses to the validity of his view is to argue that “what is a problem for all is a problem for none”—namely, to try to say that epiphenomenalism is a threat to the interactionist account as well, in just the same way, so it therefore poses no special problem for his view in particular. I’m going to take the liberty of stating that Chalmers is patently wrong on this point. Contra Chalmers, epiphenomenalism does pose a specific, peculiar threat to his view which it absolutely does not for an interactionist view. And every fundamental issue regarding the nature of human consciousness from this point forward intimately turns on this one issue about which Chalmers is unequivocally mistaken.

“What is a problem for all is a problem for none” is a valid approach, if the reasoning underlying it is actually valid—but in this case, Chalmers’ reasoning quite plainly is not. Ironically, he realizes his own mistake within the very paragraphs in which he presents this argument—and then steps back and repeats it anyway. We’ll begin to see how significant the consequences of correcting this mistake are soon. He writes: “…All versions of interactionist dualism have a conceptual problem that suggests that they are less successful in avoiding epiphenomenalism than they might seem….” Why? Because “ … even on these views, there is a sense in which the phenomenal is irrelevant.” And what sense is that? Experience is irrelevant even on interactionism, according to Chalmers, in the sense that we can always describe a sequence of events without including experience in that description: “We can always subtract the phenomenal component from any explanatory account, yielding a purely causal component.” Thus, consciousness on the interactionist’s account has “ … a sort of causal relevance but explanatory irrelevance.” This is the sole line of reasoning on which Chalmers’ decision to commit to epiphenomenalism despite its apparent problems rests: even if consciousness does in fact play a causal role in reality, we could talk as if it doesn’t—therefore, it doesn’t matter whether our theory allows that consciousness actually does play a causal role in reality or not. And so, “the denial of the causal closure of the physical therefore makes no significant difference in the avoidance of epiphenomenalism.”

Chalmers’ reasoning on this point is uncharacteristically sloppy, and mired in straightforward and inexcusable confusion. It doesn’t matter whether we can talk as if consciousness plays no causal part in reality. What matters is whether or not it actually does. On Chalmers’ view, it doesn’t. On an interactionist account, it does. With respect to the threat of epiphenomenalism, that is everything that matters (and it matters tremendously). We will see that correcting this error has deep consequences for where the lines of reasoning I’ve been defending and arguing for here (some points of which are borrowed from Chalmers or at least take Chalmers as their starting point) ultimately end up taking us.

The question posed by epiphenomenalism is whether consciousness actually is a causally relevant feature of reality—and Chalmers does not actually deny that consciousness is causally relevant on the interactionist picture of reality. He merely suggests that the “explanatory irrelevance” which he claims consciousness has on interactionism is somehow just as bad as the causal irrelevance which consciousness has on his view. But what does “explanatory irrelevance” actually mean, here? What concept is Chalmers actually using that phrase to express? It means here that we can create a story and leave a given feature out of our description. But from the fact that we can create such a story, it does not follow that this “story” actually describes reality as it actually is. I can create a story of World War II that does not mention Hitler, or anti–semitism. Does that give Hitler, or anti–semitism, “a sort of causal relevance but explanatory irrelevance” with respect to the events of World War II? (What the hell, Chalmers?)

If an “explanation” is something that actually describes reality, then consciousness cannot have “explanatory irrelevance” in spite of “causal relevance”—period. An “explanation” that does not explain why reality actually is as it actually is is no “explanation” at all. “Causal relevance” would mean that consciousness as such is, in fact, part of the reality that we want to describe. From the fact that we can create false descriptions of reality, nothing of any significance—nothing about reality as it actually is—follows. On interactionism, any true description—that is, any actual “explanation”—will in fact be the one which keeps consciousness intact. I can describe reality without referring to consciousness—but so what? I could also describe World War II without mention of Hitler, or anti–semitism. I could also describe the phenomenon we call “gravity” without referring to the structure of space–time—by simply restricting myself to talk about material objects, and saying for example that “objects undergo an intrinsic pull to move towards the largest object closest to them.” Does it follow from this that the physical structure of space–time itself is “irrelevant” to reality in any meaningful way on a viewpoint that takes the gravitational force as such to be one of reality’s fundamentals? Absolutely not; and the suggestion would be, quite frankly, idiotic.

On an interactionist view of consciousness, “can” I give an account of any sequence of events which leaves the causal contributions of consciousness out of the story? Sure. But will that story be true? No—and that is the only point that matters. Supposing for a moment it were true that a God created the Universe, I “could” in that case nonetheless give an account which leaves God’s irreducibly intentional conscious act of creation out of the story. Would that make God “explanatorily irrelevant” to the world’s creation, and would this “explanatory irrelevance” therefore be just as good as atheism? No—because my “story” would be just that—a “story”; and not a real description of why reality actually is as it actually is. If an “explanation” is supposed to actually explain why things actually are how they actually are, then assuming God created the Universe, the account which left God out would not be an “explanation” at all—so if God did in fact create the Universe, God would not be “explanatorily irrelevant” to how the Universe was created. But on Chalmers’ view, when I give those accounts of sequences of human behavior which leave consciousness out of the causal story, those accounts are true—they are accurate descriptions of why what happened actually happened. Consciousness–as–such, on Chalmers’ account, is therefore “explanatorily irrelevant” because it is causally irrelevant. And Chalmers cannot weasel out of that by simply pointing out that we could tell a story which leaves consciousness out even if consciousness were in fact part of the correct story—the fact that we “could” tell that story is irrelevant. I “can” tell a story in which unicorns replaced the role ordinarily played by either God or the Big Bang singularity in the story of the creation of the Universe. The fact that I “can” tell such a story entails absolutely nothing whatsoever about reality except that I am capable of saying something that is untrue about reality.
To suggest that that is in any way comparable to the scenario in which either God or a Big Bang singularity actually did in fact play no causal part in bringing the Universe as we know it into existence in reality is a shameless piece of confusion.

_______ ~.::[༒]::.~ _______


However, there is one legitimate basis—in Chalmers’ defense—for his having acquired this confusion: John Eccles, whom Chalmers quotes in this section as the substantialist–interactionist example, takes some steps in drawing his account which inadvertently lend unintended support to an intuition that rests on a misunderstanding of dualism. Though Eccles himself seems not to actually commit the mistake in his own mind, he speaks in a way that doesn’t always make this clear—and in doing so he does the public relations side of interactionism a disservice. Allow me to illustrate.

In 1989, John Eccles published a paper titled “A unitary hypothesis of mind–brain interaction in the cerebral cortex.” Much of the paper consisted of a legitimate argument showing that quantum physics allows viable room (or did, given the most up–to–date knowledge of the time) for irreducibly conscious causation. But the way Eccles spoke of conscious causation is unfortunate and extremely amenable to a common conceptual confusion. He writes: “The hypothesis has been proposed that all mental events and experiences, in fact the whole of the outer and inner sensory experiences, are a composite of elemental or unitary mental experiences at all levels of intensity. Each of these mental units is reciprocally linked in some unitary manner to a dendron … Appropriately we name these proposed mental units ‘psychons.’ … It may seem that in this intimate linkage of dendrons and psychons the new unitary hypothesis of dualist interactionism is merely a further refinement of the materialist identity hypothesis … [but] this is a mistake. Independence of existence is accorded to psychons….” This way of speaking suggests a model on which psychons are analogous to dendrons at least in that they are discrete, quantifiably measurable units. And this naturally predisposes anyone trying to visualize Eccles’ “psychon–dendron interaction” to picture it as on a par with the mechanical type of interaction that takes place between dendrons and dendrons themselves, or between billiard balls on the very Newtonian picture of reality whose accuracy the rest of the paper denies.

On the one hand, Eccles seems to realize the hazards in his way of speaking—for he clarifies: “Psychons are not perceptual paths to experiences. They are the experiences in all their diversity and uniqueness.”

Yet, on that same page, Eccles draws a picture of the connection between “dendrons” and “psychons.” And this is unfortunate, for it suggests yet again, at least implicitly, that consciousness is the kind of thing that could possibly be drawn. But anything that could be drawn would be, by definition, a physical–relational structure—and the arguments for dualism proceed largely precisely through illustrating the very fundamental insight that qualitative conscious experiences and intentionality can’t be analyzed or understood through descriptions of mechanistic operations between physical–relational structures in the first place. This makes even accidentally construing consciousness as something analyzable in terms of the causal properties of discretely quantified units incredibly unhelpful—the fact that consciousness is not the kind of thing that could possibly be “graphed” in principle is exactly the point that the dualist needs to insist on until it ‘clicks’ in the mind of his opponent. While Eccles seems to have realized it, it doesn’t pervade his way of speaking or of representing the principles he tries to talk about—and that lack spills over into Chalmers’ comprehension of what an interactionist consciousness would be like as well. The fact that we cannot, in principle, visualize interaction between subjective/qualitative consciousness and objective/quantitative physical structure is exactly the point Descartes emphasized to his critics all the way back then: “[the interaction problem] arise[s] merely from [critics’] wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” Eccles simply should not have encouraged this already all–too–easy psychological tendency by trying to “imagine” (i.e., represent with images) the process of interaction, even with caveats.

All Eccles actually wanted to communicate in the section in which this drawing appears is that there is a causal link of some kind between the qualitative properties of a conscious experience and the physical states of the brain—which no one has failed to understand must necessarily be true in some way from the moment it was observed that blows to the head or intake of alcohol or bad food could alter the state of one’s consciousness. Very little is contributed to that point by speaking of “psychons” or implying, even inadvertently, that the “psychons” composing our qualitative conscious experiences and intentionalistic conscious thoughts could actually be represented by a diagrammatic drawing of physical structures—but an opportunity to emphasize something extremely crucial to the interactionist idea is lost.

Qualitatively subjective and intentionalistic consciousness is the very medium in which all our thoughts exist, and these qualitative, subjective, and intentionalistic properties are without exception the sole and exclusive mode in which they have their existence. Yet, in all cases except thinking about consciousness itself, when we turn our attention to causal relationships, we are overwhelmingly used to thinking of mechanical interactions between structures. This is exactly why getting an intuitive grasp on the mind–body problem is so hard—universal habit has ingrained in us, in every case except this one, the tendency to think of causation in terms of mechanistic procedures mediating structurally depictable forces and objects through space. However much Eccles may have tried to caveat his illustration with the emphasis that it is not a depiction of mind–brain “identity,” Chalmers’ confusion is all the confirmation needed to show that Eccles—and, in general, those who defend the idea that consciousness as such is an irreducible and causally active component of reality in its own right—should go much farther to guard against these overwhelming psychological tendencies. To properly understand what interactionist dualism entails, we must guard at all times against the tendency to revert in habit back to depicting irreducibly qualitative, subjective, and intentionalistic consciousness’ interactions with the structures of the physical world by such close analogy with mechanical interactions between physical structures within the physical world.
And as Eccles unfortunately slips just far enough back into this habit to make his explanations easily amenable to this very rampant confusion, the confusion is carried over by Chalmers, who now proceeds to take the structures Eccles has used to represent the fundamentally non–structural and point out that we could have these same structures perform their structural work without their needing to be conscious at all—and this is how Chalmers ends up with the confused and mistaken conclusion that dualism does not truly provide an “out” from the threat of epiphenomenalism created by combining the premise of causal closure of the structurally depictable physical dimensions of reality with the premise of consciousness’ irreducibility to those physical structures.

Chalmers writes: “Imagine (with Eccles) that ‘psychons’ in the nonphysical mind push around physical processes in the brain, and that psychons are the seat of experience.” Already, Chalmers supposes that “psychons” are first and foremost structural entities which “push” in virtue of their structural properties—and happen to be “the seat of experience” only as a secondary coincidence. The picture Chalmers is getting—not unreasonably, given the unfortunate way Eccles chose to represent it—is that a “psychon” is really a structural sort of thing that has causal dispositions in virtue of its structure, with conscious properties somehow tagging along as an “extra” to the mechanical processes it physically engages in. Indeed, on this picture, a “dualism” of this kind would have no virtues whatsoever over materialism: why not tack those conscious properties onto material structures instead of whatever these extra ghostly structures are? But this is unequivocally the wrong way to think about interactionist dualism. Interactionism is not the idea that there is some special kind of “stuff” that does what it does in virtue of the structural kind of “stuff” that it is, but which then just happens to have conscious experiences tacked onto it as a secondary coincidence—where it would have otherwise been the same basic kind of “stuff” had these secondary, coincidental experiential properties been removed.
Interactionism is the idea that conscious experience itself is the “special stuff,” and that the “stuff” that is conscious experience itself interacts with the rest of the physical world in a distinct way that is nonetheless every bit as unanalyzable and basic as mechanical causation between purely structural entities themselves—remove the properties of experientiality from consciousness, and you don’t have a structural kind of “stuff” left minus a certain extra tacked–on property—nothing is left at all, because consciousness just is the phenomenon of subjective experience. And while Eccles appears to understand that this is not the way to think about it, and warns against taking his language as implying the suggestion that we should, his language is nonetheless incredibly hard to take as adding anything new to the picture except that very suggestion. The point that should be emphasized is that consciousness is not analyzable in terms of structure; and that it is consciousness itself—defined by the essential, irreducible properties of qualitative subjectivity and intentionality—which is causally relevant.

This is again exactly the core point addressed in the previous entry—“The Nature of Scientific ‘Explanation’ and the Interaction ‘Problem’”: Descartes, in addressing his opponents’ objections to the idea that consciousness and the (rest of the) physical world could possibly interact, in principle, if they were different in anything like the way Descartes suggested they were, responded that the problem “arise[s] merely from [critics’] wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” If consciousness itself is a “fundamental” entity unanalyzable in any other more fundamental terms, then conscious–physical interaction is just as ultimately unanalyzable in any but its own category—in principle—as physical–to–physical causation is at the bottom line. As James Moreland was also quoted as saying in that essay, “One can ask how turning the key starts a car because there is an intermediate electrical system between the key and the car’s running engine that is the means by which turning the key causes the engine to start. The “how” question is a request to describe that intermediate mechanism. But the interaction between [consciousness] and [the brain] may be … direct and immediate. [And if] there is no intervening mechanism, [then] a “how” question describing that mechanism does not even arise”—just as it would not arise—and could not be answered even in principle—were we to ask something like “How does pushing the gas pedal cause it to move?” The relationship between pushing and being pushed is simply one of the most basic and fundamental terms in which physical causation takes place—and this relationship is simply unanalyzable in terms of anything more basic than itself.
Yet notice that when we speak of the “intervening mechanism[s]” of the electrical system structurally mediating the structural relationship between the key and the car’s running engine, every single one of our most ultimate terms will involve direct and immediate, unmediated, interactions at each step—each ingredient of our “explanation” of the intermediate steps of causation between the key and the car’s running engine will, individually, be as unanalyzable themselves as the question of “how pushing the gas pedal causes it to move.” At the core of every sort of “explanation” that we are actually capable of are terms which themselves simply cannot in principle be “explained.” The interactionist suggestion is therefore simply that mental–to–physical causation is at the bottom line simply one of the “bedrock and thus unexplainable” kinds of events that take place in the world, rather than one of the kinds which are “secondary and derivative from other bedrock terms and thus explainable through reduction to those other, ultimate terms”.

The interactionist dualist suggests that interaction between irreducible consciousness and physical structure is every bit as “basic” and direct—and therefore as unanalyzable in terms of anything more fundamental—as the most basic ingredients of the terms of explanation of mechanical interactions between physical structures. Our inability to analyze or “explain” the nature of irreducibly conscious interaction is not like an inability to explain how a key causally connects to a car engine—it is analogous to our inability to analyze or “explain” the most basic and irreducible terms of physical causation itself, such as how the mass of an object of a given size and density causes the surrounding spacetime fabric to curve, or how an object’s velocity at a given moment causes its velocity in the very next. The only answer that is possible even in principle for these questions is to simply accept that the very terms themselves are primitive and irreducible. The dualist suggestion is thus that my irreducibly conscious intention to move my hand causes the neurological process which results in my hand’s movement in just as basic and unanalyzable a way as the other examples just given here—and this is how dualism escapes the threat of epiphenomenalism. Eccles, by speaking as if consciousness interacts with the brain through intermediating mechanisms that can be diagrammatically visualized as physical structures—even if that is not what he meant—obscures the incredibly overwhelming significance of this basic and core fundamental point, and it is understandable why Chalmers ends up confused.
In other words, Eccles rather inadvertently pushes the problem back a step: rather than consciousness–as–such interacting directly with dendrons, if this interaction is mediated through psychons that can be depicted in any structural sort of way, now we just push the issue back to consciousness–as–such interacting directly with psychons to achieve the effects which psychons are capable of producing on dendrons. Why include “psychons” in the middle of the picture at all, except to get around the fact that we can’t directly visualize mental–to–physical interaction in principle (which is precisely the crucial point the dualist needs to emphasize in the first place)?!

Thus, Chalmers writes that “We can tell a story about the causal relations between psychons and physical processes, and a story about the causal dynamics among psychons, without ever invoking the fact that psychons have phenomenal properties. … It follows that the fact that psychons are the seat of experience plays no essential role in a causal explanation, and that even in this picture experience is explanatorily irrelevant.” No. We could only tell a story about “psychons”—and actually be describing “psychons”—if “psychons” were in fact performing their causal relations with physical processes in virtue of their structural properties. (And while I have objected above that that is the impression that Eccles indirectly went a long way to feed, it is not what he was actually trying to say.) But if the phenomenal properties themselves are causally active, then our so–called “story” simply is not a description of reality itself—and it is therefore no “explanation” in any meaningful sense of the word. Eccles’ unfortunate way of speaking implicitly lends itself to Chalmers’ faulty interpretation. Correcting it to throw out Eccles’ useless and misleading neologism: the entire point of the dualistic position is to say that consciousness is not a fundamentally structural phenomenon which simply happens to possess “phenomenal properties” as if by accident—“consciousness” itself fundamentally is those very “properties.” Consciousness itself is just exactly the very basic phenomenon of experientiality itself. Consciousness—experience itself—is what is playing the causal role, if dualism is right.

That avoids epiphenomenalism in a way so obvious that it is inexcusably purblind not to see it. The premise that the structural and mathematically and spatially definable (i.e., “physical”) aspects of reality are “causally closed,” combined with the premise that phenomenal and intentionalistic consciousness is neither identical to nor “composed of” mathematical and spatial structures, specifically leads to epiphenomenalism by modus ponens. Furthermore, this poses an absolutely insuperable problem for Chalmers’ view of consciousness for a reason Chalmers himself explicitly acknowledges. And it is unbearably obvious, when looking at that reason, why eliminating the premise of causal closure of the “physical” eliminates the entailed conclusion of epiphenomenalism. Again, against Chalmers’ attempt to use a meaningless notion of “explanatory irrelevance” to escape the claim that his view peculiarly winds up in epiphenomenalism: epiphenomenalism does specifically threaten Chalmers’ view in particular, because it follows by modus ponens from the combination of the conclusion that consciousness as such is irreducible to physical structure with the premise that interaction between physical structures is “causally closed.” And this problem ends up absolutely slicing the legs off of all of Chalmers’ further proposals from here. The only valid option that Chalmers (or anyone who follows his reasoning up to here) is left with is either to go back on all of the antireductionist arguments that got us here in the first place and become reductionists or eliminativists of some kind, or else to drop “causal closure.”

In The Conscious Mind: In Search of a Fundamental Theory, Chalmers writes (pp.216–217): “[There are] constraints … in generating a theory of consciousness. The most obvious is the principle we rely on whenever we take someone’s verbal report as an indicator of their conscious experience: that people’s reports concerning their experiences by and large accurately reflect the contents of their experiences. … If the principle turned out to be entirely false, all bets would be off: in that case, the world would simply be an unreasonable place, and a theory of consciousness would be beyond us. In developing any sort of theory, we assume that the world is a reasonable place, where planets do not suddenly pop into existence with fossil records fully formed, and where complex laws are not jury–rigged to reproduce the predictions of simpler ones. Otherwise, anything goes.”

But now recall the central core of my argument against epiphenomenalism in entry (IV) of this series, which was precisely the fact that it would render us incapable of ever talking about the qualitative properties of consciousness as such, in principle!

Quote: “In Jaegwon’s words, what the principle states is that: “if we trace the causal ancestry of a physical event we need never go outside the physical domain.” What Jaegwon Kim realized was that if we combine this claim with the realization that subjective experience can’t be reduced to or accounted for in terms of physical mechanism, then we end up with a description of reality known as epiphenomenalism, on which—roughly—experiences more or less dangle off the edges of the world before simply falling off (I’ll explain this more in a minute). Jaegwon’s description of the state of play was thus that the choices are to either claim that subjective experience can be reduced to physical description (which he had, by then, seen the same compelling reasons to reject that I am outlining here), reject the principle of causal closure, or else accept epiphenomenalism….

… One of the easiest ways to explain an epiphenomenalist relationship is by example. If you stand in front of a mirror and jump up and down, your reflection is an epiphenomenon of your actual body. What this means is that your body’s jump is what causes your reflection to appear to jump—your body’s jump is what causes your real body to fall—and your body’s fall is what causes your reflection to appear to fall. It may seem to be the case that your reflection’s apparent jump is what causes your reflection to appear to fall, but this is purely an illusion: your reflection doesn’t cause anything in this story; not even its own future states. …

If epiphenomenalism were true, no one would ever be able to write about it. In fact: no one would ever be able to write—nor think—about consciousness in general. No one would ever once in the history of the universe have had a single thought about a single one of the questions posed by philosophy of mind. Not a single philosophical position on the nature of consciousness, epiphenomenalist or otherwise, would ever have been defined, believed, or defended by anyone. No one would even be able to think about the fact that conscious experiences exist.

And the reason for that, in retrospect, is quite plain to see: on epiphenomenalism, our thoughts are produced by our physical brains. But our physical brains, in and of themselves, are just machines—our conscious experiences exist, as it were in effect, within another realm, where they are blocked off from having any causal influence on anything whatsoever (even including the other mental states existing within their realm, because it is some physical state which determines every single one of those). But this means that our conscious experiences can never make any sort of causal contact with the brains which produce all our conscious thoughts in the first place. And thus, our brains would have absolutely no capacity to formulate any conception whatsoever of their existence—and since all conscious thoughts are created by brains, we would never experience any conscious thoughts about consciousness. For another diagram, if we represent causality with arrows, causal closure with parentheses, physical events with the letter P and experiences with the letter e, the world would look something like this:

… e1 ⇠ (((P⇆P))) ⇢ e2 …

Everything that happens within the physical world—illustrated by (((P⇆P)))—would be wholly and fully kept and contained within the physical world, where conscious experiences as such do not reside; the physical world is Thomas Huxley’s train which moves whether the whistle on top blows steam or not. And e1 and e2 float off of the physical world—for whatever reason—and then merely dissipate into nothingness like steam, with no capacity in principle for making any causal inroads back into the physical dimension of reality whatsoever. This follows straightforwardly as an inescapable conclusion of the very premises which epiphenomenalism defines itself by. But since the very brains which produce all our experienced thoughts are contained within (((P⇆P))), in order to have any experienced thought about conscious experience itself, these (per epiphenomenalism) would have to be the epiphenomenal byproducts of a brain state that is somehow reflective or indicative of conscious experience. But brain states, again because per epiphenomenalism they belong to the self–contained world inside (((P⇆P))) where no experiences as such exist, are absolutely incapable in principle of doing this.

To refer back to our original analogy whereby epiphenomenalism was described by the illustration of a person jumping up and down in front of a mirror, then: it would be as if the mirror our brains were jumping up and down in front of were shielded inside of a black hole in a hidden dimension we couldn’t see. Our real bodies [by analogy, our physical brains] would never be able to see anything happening inside that mirror. And therefore, they would never be able to think about it or talk about it. And therefore, we would never see our reflections [by analogy, our consciously experienced minds] thinking or talking about the existence of reflections, because our reflections could only do that if our real bodies were doing that, and there would be absolutely no way in principle that our real bodies ever could.

The fact that we do this, then—the fact that we do think about consciousness as such, and the fact that we write volumes and volumes and volumes and volumes philosophizing about it, and the very fact that we produce theories (including epiphenomenalism itself) about its relation to the physical world in the first place—proves absolutely that whatever the mechanism may be, conscious experiences somehow most absolutely do in fact have causal influence over the world. What we have here is a rare example of a refutation that proceeds solely from the premises of the position itself, and demonstrates an internal inconsistency.

But Jaegwon Kim has identified the possible options for us: either experiences and physical events are just literally identical (which even Kim himself rejects, for good reasons we have outlined here), or else epiphenomenalism is true (which Jaegwon Kim accepts, but which the simple argument outlined just now renders completely inadmissible)—or else the postulate of the causal closure of the physical domain is false—and conscious experience is both irreducible to and incapable of being explained in terms of blind physical mechanisms, and possesses unique causal efficacy over reality all in its own right.”

It should be too obvious to need stating that supposing that consciousness itself is a causally active phenomenon in the world, irreducible to any others, avoids the conclusion that consciousness is causally inactive (which is exactly all that is meant by the term “epiphenomenalism”). Yet, Chalmers wants to retain the principle of causal closure: “[On] the dualist view I advocate … causal closure of the physical is preserved; physics, chemistry, neuroscience, and cognitive science can proceed as usual. In their own domains, the physical sciences are entirely successful. They explain physical phenomena admirably; they simply fail to explain conscious experience.” In other words, explanations of what happens in the world—according to Chalmers—are absolutely complete without any reference to conscious experience. This can only be true if consciousness is causally inactive and therefore explanatorily irrelevant.

To avoid this implication, Chalmers wants to say that epiphenomenalism isn’t a special problem for his view in particular because the dualist faces it too. But to defend this suggestion, he says something absolutely witless: that consciousness is “explanatorily irrelevant” on the dualist’s account as well, because we can make up a story of things that doesn’t make reference to consciousness. But the difference is whether that story is correct, or not. On the dualist’s account, any such story will be false, because on the dualist’s account, consciousness–as–such is a phenomenon that does play a direct role in “what happens.” But on Chalmers’ account, the point is that the account that makes no reference to consciousness will be true. And that is the only difference that means anything. If by “explanation” we mean an account that actually explains why things actually do what they actually do, then if consciousness is—per interactionism—causally relevant, then it is not “explanatorily irrelevant”—whereas, for Chalmers, so long as he continues to hold on (as I contend he shouldn’t) to the principle of causal closure of the physical and insists (as I contend he should) that consciousness is irreducible to any explanation in terms of physical structure, epiphenomenalism follows by strict modus ponens.

Epiphenomenalism is an absolute non–starter—not only for the independent reason explained above (that it is refuted by the fact that we demonstrably do think and talk about consciousness–as–such), but specifically against Chalmers’ own view for a specific reason he identifies: “The most obvious” assumption needed for Chalmers’ proposal for a theory of consciousness to be even slightly imaginable “is the principle we rely on whenever we take someone’s verbal report as an indicator of their conscious experience: that people’s reports concerning their experiences by and large accurately reflect the contents of their experiences.” If epiphenomenalism is true, then no one is ever capable of giving a single accurate report of their conscious experiences as such. Chalmers is committed to epiphenomenalism so long as he commits to both the irreducibility of consciousness to the “physical” properties of reality, and to the premise that those “physical” properties are “causally closed” with respect to each other. And interactionism is the way out, unless Chalmers wants to take back his arguments (which I consider resoundingly successful) against “reductive” accounts of conscious experience which try to “identify” it (one way or another) with something other than conscious experience itself—but these are the very arguments he has made his name by (and again, I whole–heartedly endorse them).

In Chapter 5 of The Conscious Mind, Chalmers actually takes these issues on directly. There are, he writes, four relevant premises to consider: “1. Conscious experience exists. 2. Conscious experience is not logically supervenient on the physical (e.g. “identical to” or “emergent from” “physical” facts about spatial–structural relationships). 3. If there are phenomena that are not logically supervenient on the physical facts, then materialism is false. 4. The physical domain is causally closed.” He explains that his own view results from the acceptance of all four premises. We have already seen that this is the combination of premises which results in epiphenomenalism by modus ponens. In this section, however, he presents a subtly more developed argument for why the dualist fares no better than the materialist. He repeats himself: “The deepest reason to reject [interactionist dualism as an approach to resolving the conflict between causal closure of the physical and the irreducibility of consciousness to the physical] is that [it] ultimately suffer[s] from the same problem as a more standard physics: the phenomenal component can be coherently subtracted from the causal component. On the interactionist view, we have seen that even if the nonphysical entities have a phenomenal aspect, we can coherently imagine subtracting the phenomenal component, leaving a purely causal/dynamic story characterizing the interaction and behavior of the relevant entities.” Of course, we have also seen why this is wrong: the ability to create “a story” doesn’t entail that that story would actually be correct.

This time, however, Chalmers addresses this type of response, and proposes a counter–argument (which I have partially addressed already, but will now address in more detail): “Various moves can be made in reply, but each of these moves can also be made on the standard physical story. For example, perhaps the abstract dynamics misses the fact that the nonphysical stuff in the interactionist story is intrinsically phenomenal, so that phenomenal properties are deeply involved in the causal network. But equally, perhaps the abstract dynamics of physics misses the fact that its basic entities are intrinsically phenomenal (physics characterizes them only extrinsically, after all), and the upshot would be the same. Either way, we have the same kind of explanatory irrelevance of the intrinsic phenomenal properties to the causal/dynamic story. The move to interactionism … therefore does not solve any problems inherent in the property dualism I advocate.” This would be a viable argument—if panprotopsychism were otherwise a viable choice. But it isn’t—for entirely different reasons.

  _______ ~.::[༒]::.~ _______


In Chalmers’ exploration of the options, after rejecting the reductionist approaches altogether (with arguments I wholeheartedly endorse), he reaches the conclusion that “the best options for a nonreductionist are type–D dualism, type–E dualism, or type–F monism: that is, interactionism, epiphenomenalism, or panprotopsychism.”

We’ve seen already why epiphenomenalism must, necessarily, fail. (I’ll be deducing additional reasons for this failure using Chalmers’ own premises against each other shortly—and proceeding from there to apply the same argument to an even more radical and significant conclusion.) Panprotopsychism, on the other hand, is the idea that consciousness as we experience it in the human mind emerges not from the geometric–structural and spatial–relational mechanically–causally disposing properties of physical entities, but rather from some other intrinsic properties of these entities. Now, the “pan” in “panprotopsychism” means “everywhere.” The “psychism” means “consciousness.” So “panpsychism” means “consciousness everywhere.” But “proto” means something like “precursor.” Thus, there is a distinction in theory between panpsychism and panprotopsychism. Where panpsychism says that consciousness itself must be everywhere, panprotopsychism says that merely the “precursors” of consciousness must be everywhere.

But what are the options for what these “precursor” properties might be? To return yet again to my essay (IV) from this series, “Now, the plainest thing in the world to see is that the question of whether something is an experience or not is absolutely binary: the answer is either “yes” or “no,” and there are absolutely no steps in–between the two. The question of when a pile of sand goes from being a “heap” of sand to becoming a “mountain,” for example, is one that has rough edges: at exactly which point in the process of removing singular grains of sand from a “mountain” has it devolved into a “heap?” At exactly which point in the process of adding singular grains of sand to a “heap” does it become a “mountain?” Reasonable people could disagree, and there is no objective way to determine the answer. Some questions are like this: the question of when a new “species” has evolved has rough edges, and evolution can address the transition from one species to another through the small, gradual steps that are involved without needing to bridge any fundamental gap of absolute difference between an original “species” and a second. But the question of conscious experience is not like this—the difference between something being a subjective experience and something not being a subjective experience is as absolute as absolute can get. There may be various degrees of complexity or sensitivity or detail between experiences, but either something is an experience or it isn’t.

There is no middle ground between the two—but this also means there is no ground that can be covered in any gradual steps as a means of bridging the gaps between the two. And there is, therefore, no way to proceed gradually in steps from non–experience to experience. The move from non–experience to experience, if it happens, could only happen as an extraordinary leap across galaxies which happens all in one sudden and dramatic inexplicable move. Leibniz first and most clearly described the problem inherent in this in 1714: “It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception.” But the “pieces which push one another” that describe Leibniz’ mill are just exactly what describe the essence of the physical entities accepted as the (and the only) basic building blocks of the universe by physicalists—and gradual, almost imperceptible additions of singular (and mechanical) grains of sand at a time are exactly the way evolutionary accounts perform their explanatory work (and the only way that they can).”

The only option for what these “intrinsic properties” might consist of is therefore consciousness itself. Thus, there is no valid way to formulate any working theory of what “panprotopsychism” could possibly mean except for it to boil straight down into full–blown panpsychism, on which conscious experiences that are actually, literally experienced actually, literally exist everywhere. Yet if we take the panpsychist approach to consciousness, then as I explain in my entry (VII) to this series, we simply end up running the same process of reasoning all over again—this time against the relationship between the subjective mental properties and the objective structural properties of the microphysical entities proposed to have conscious experiences on panpsychism, just as we first ran it against the relationship between the subjective mental properties of the human mind and the objective structural properties of the human brain to arrive at panpsychism as a potential solution to begin with. And this time panpsychism doesn’t exist as a way out of the problems—only dualism does.

As I explain in entry (VII), the panpsychist cannot say that the subjective mental properties of microphysical entities are “identical to” the objective structural properties of the microphysical entities, for reasons the panpsychist himself already accepts. But even if he did, he would precisely undermine his own reasons for rejecting the ordinary mind–brain “identity theory” in the first place—and the panpsychist suggests panpsychism precisely as a solution to that failure. So the ultimate nature of everything must be either physical or mental (with the other standing in some extraneous relationship to the first). But the panpsychist cannot say that everything at root is mental—because this simply recreates the “Hard Problem of Consciousness” in reverse: namely, this would now leave us having to ask: how do we get physical properties from the mental? And this is every bit as incoherent as trying to imagine how we could get the subjectively experienced qualitative taste of strawberries from blind senseless particles bouncing around in some complex combination. But even if the panpsychist bites this bullet, then once again he would lose any reason to propose panpsychism as a solution to the materialist Hard Problem of Consciousness in the first place. Finally, the panpsychist can adopt the kind of panpsychism which Chalmers flirts with and suppose that the world is composed of universally “physical” entities which possess “mental” properties “on the side,” but this leads to epiphenomenalism. And it turns out that there is no way to be a panpsychist without committing to a premise that one would precisely need to reject in order not to be a reductionist of some kind in the first place. Hence, “panprotopsychism” is out too.

Of Chalmers’ three most promising options—dualism, epiphenomenalism, and panprotopsychism—only dualism is left. And interactionism actually does avoid the problems that arise on panpsychism (the only possibly viable physicalist competitor to interactionism, and the position which the defender of “the standard physical story” necessarily ends up in, given the strict, sharp binary line between something’s either being an experience of some kind or not—a line which admits of no sensible steps in–between): it has no need to incoherently try to derive the physical–structural from the qualitative–experiential (as the idealist panpsychist would). It has no need to incoherently suppose that the objective/structural and subjective/experiential properties of reality are literally identical with each other (as either the “identity theorist” or the “identity theorist”–panpsychist would). And it does not modus ponens itself into epiphenomenalism—as the “property dualist” and the “property dualist”–panpsychist do. Panpsychism still ends up demonstrating why the objective structural and subjective experiential properties of reality (a) cannot be supposed to be either identical to or composed of each other and (b) must necessarily interact in a direct, unmediated, and “basic” (although mysterious) way. So Chalmers’ attempted counter–argument—that interactionist dualism would hold no virtues over panpsychism, and so cannot avoid epiphenomenalism in any way that someone sticking to “the standard physical story” can’t—fails, because panpsychism fails for entirely separate reasons (although these turn out, surprisingly, upon reflection to be exactly the same reasons why the materialist perspectives to which panpsychism proposed itself as a solution failed in the first place).
Interactionism remains the only way out of this otherwise inescapable logical quagmire, and we have arrived at it by what I called in an earlier entry “a logical, rational, piecemeal divide–and–conquer process of elimination.”

With how all of these arguments interrelate now established, I can move on to the more important point of this entry. First, I will adapt the points made in the first section of this entry to show yet another reason why Chalmers’ particular way of trying to make dualism “naturalistic” fails—and yet another reason why interactionist dualism solves the problems inherent in Chalmers’ attempt at an account. Then, I will apply those same points to a slightly separate issue and draw a much more radical and surprising conclusion from relatively simple premises.

_______ ~.::[༒]::.~ _______


Chalmers attempts to make his dualism “naturalistic” by talking about “laws.” On Chalmers’ account, “psychophysical” laws exist in the same way that “physical laws” do. The conclusion of arguments establishing that consciousness cannot be reduced to unconscious processes is that consciousness itself is a fundamental bedrock ingredient of the Universe. Up to this point, I agree—but there is a deep and significant problem with how Chalmers tries to characterize this.

Consciousness, as both Chalmers and I agree, is fundamental. What Chalmers says next is that: “Where we have new fundamental properties, we also have new fundamental laws. Here the fundamental laws will be psychophysical laws, specifying how phenomenal properties depend on physical properties. These laws will not interfere with physical laws; physical laws already form a closed system. Instead, they will be supervenience laws, telling us how experience arises from physical processes. We have seen that the dependence of experience on the physical cannot be derived from physical laws, so any final theory must include laws of this variety….” We’ve already addressed the fact that supposing that these laws “will not interfere with physical laws” locks consciousness–as–such out of the causal nexus—and any premise which entails this result simply refutes itself. But that is not the issue I am concerned with here—the issue I am concerned with here revolves instead around how we are supposed to characterize these “laws”.

Before getting into that, however, let’s take a closer look at what Chalmers says about them in The Conscious Mind. Describing the epistemological process by which we come to understand the world, he writes: “At first, I have only facts about my conscious experience. From here, I infer facts about middle–sized objects in the world, and eventually microphysical facts. From regularities in these facts, I infer physical laws, and therefore further physical facts. From regularities between my conscious experience and physical facts, I infer psychophysical laws, and therefore facts about conscious experience in others. I seem to have taken the abductive process as far as it can go, so I hypothesize: that’s all.” I find no problem in this paragraph. “At first, I have only facts about my conscious experience”—absolutely. And the microphysical facts supposed by the materialist to constitute everything are an inference from this primary fact, not something ever primarily known—“From here, I infer facts about … objects … and eventually microphysical facts…”—absolutely. But is inferring from physical regularities to the existence of “physical laws” the same thing as inferring from regularities in conscious experiences to the existence of “psychophysical laws?” That depends—as we will soon see—on what it means to call something a “law.” It is time to recall our earlier discussion in section 1.

Chalmers’ goal is to prove that “Even if consciousness cannot be reductively explained, there can still be a theory of consciousness. We simply need to move to a nonreductive theory instead.” His unique idea is that “We can give up on the project of trying to explain the existence of consciousness wholly in terms of something more basic, and instead admit it as fundamental, giving an account of how it relates to everything else in the world.” And this project will be “naturalistic” because “Such a theory will be similar in kind to the theories that physics gives us of matter, of motion, or of space and time.” How so? Because “Physical theories do not derive the existence of these features from anything more basic, but they still give substantial, detailed accounts of these features and of how they interrelate, with the result that we have satisfying explanations of many specific phenomena involving mass, space, and time.” So far, so good—and this description so far applies to the approach I have defended in this series. But the following sentence is where the central problem begins to show itself: Chalmers believes that “they do this by giving a simple, powerful set of laws involving the various features, from which all sorts of specific phenomena follow as a consequence.”

Thus, “By analogy, the cornerstone of a theory of consciousness will be a set of psychophysical laws governing the relationship between consciousness and physical systems. … Given the physical facts about a system, such laws will enable us to infer what sort of conscious experience will be associated with the system, if any. These laws will be on a par with the laws of physics as part of the basic furniture of the universe. It follows that while this theory will not explain the existence of consciousness in the sense of telling us “why consciousness exists,” it will be able to explain specific instances of consciousness, in terms of the underlying physical structure and the psychophysical laws. Again, this is analogous to explanation in physics, which accounts for why specific instances of matter or motion have the character they do by invoking general underlying principles in combination with certain local properties. … There need be nothing especially supernatural about these laws. They are part of the basic furniture of nature, just as the laws of physics are. There will be something “brute” about them, it is true. At some level, the laws will have to be taken as true and not further explained. But the same holds in physics: the ultimate laws of nature will always at some point seem arbitrary. It is this that makes them laws of nature rather than laws of logic.” Finally, “ … For a final theory, we need a set of psychophysical laws analogous to fundamental laws in physics. These fundamental (or basic) laws will be cast at a level connecting basic properties of experience with simple features of the physical world. … When combined with the physical facts about a system, they should enable us to perfectly predict the phenomenal facts about the system. … Once we have a set of fundamental physical and psychophysical laws, we may in a certain sense understand the basic structure of the universe.”

It should be obvious how closely many of these statements correspond to what I have said in my own recent discussion of the “conceptual” interaction problem in entry (X) of this series—but I think there is something tremendously significant that Chalmers has handled here in a very poor and unclear way. This brings us now to the core of the question Chalmers has handled inadequately: would “psychophysical laws” truly be analogous to the physical laws whose existence we all accept? That depends on how we characterize what it means for something to be a “law”—on what it means for a “law” to be “part of the basic furniture of nature.” In what sense do we accept that these other “laws” exist to begin with? Note what Chalmers says: “[Psychophysical laws] are part of the basic furniture of nature, just as the laws of physics are.” Is he right? The statement that physical “laws” are “part of the basic furniture of nature” is not even explicitly defended here, but it is an absolutely crucial assumption underlying the analogy. What does it mean to say that “physical laws” are part of the basic furniture of nature?

Recall what we saw in the first section of this entry: theists are well–known for making the argument that if certain kinds of “laws” exist, there would have to be a “lawmaker”—God—to account for their existence. The “vital distinction” is between “a causal law, i.e. one that rules the genesis of events, and an empirical law, one that merely registers their occurrence.” The theist argues that the “laws” of nature are more than our mere recordings of what things do—that they somehow instead “rule the genesis of events” in their own right. And what is the atheist response? The atheist doesn’t deny that “laws” of this sort would entail a “lawmaker.” The atheist rather denies that “laws” of this sort are the kinds of “laws” that our world actually has—and says that when we derive “the laws of nature,” we aren’t discovering some independent thing called a “law” that is actually “governing” the behavior of the universe; we are, instead, merely creating descriptions of what things in the world do in and of themselves, in virtue of their own intrinsic traits. So as we saw in, for example, Bede Rundle’s critique of the argument, “‘the law’ is in no sense instrumental in bringing about accord with it. … What would God have to do to ensure that atoms, say, behave the way they do? Simply create them to be as they in fact are. Atoms having just those features which we currently appeal to in explaining the relevant behaviour, it does not in addition require that God formulate a law prescribing that behaviour.” But contrast this with how Chalmers himself characterizes the nature of his “psychophysical laws!” “We can use Kripke’s image here [to illustrate what the situation is like, if dualism is true]. When God created the world, after ensuring that the physical facts held, he had more work to do! He had to ensure that facts about consciousness held. The possibility of zombie worlds or inverted worlds shows that he had a choice.
The world might have lacked experience, or it might have contained different experiences, even if all the physical facts had been the same. To ensure that the facts about consciousness are as they are, further features had to be included in the world.”

If we accept Chalmers’ arguments that consciousness is irreducible to anything other than itself and is therefore fundamental in its own right, then we are precisely postulating the existence of Chalmersian “psychophysical laws” because we need the “law” to do something that the actual things in the world do not inherently do in and of themselves by virtue of their own intrinsic traits—but it is exactly for this very reason that the existence of any such “law” is necessarily impossible! Not, at least, so long as we reject the “panprotopsychist” approach, in which the emergence of consciousness would be explained by the “intrinsic properties” of “physical” objects rather than their structural/relational properties, in which case the “laws” could be construed as describing what these actual properties of actual entities do in and of themselves—but I have explained above why I think that option likewise absolutely fails.

These are the kinds of “laws” that would exist as “causal laws … [which] rule the genesis of events…” and would therefore require an active power behind the physical world itself which “formulates such laws and ensures that the physical realm conforms to them….” Notice what Chalmers says in discussing “[the] worry … about how consciousness might have evolved on a dualist framework….” The worry, he writes, is that “a new element pop[s] into nature, as if by magic….” However, “ … this is not a problem.” Why? Because “Like the fundamental laws of physics, psychophysical laws are eternal, having existed since the beginning of time.” And he wraps his explanation up: “It may be that in the early stages of the universe there was nothing that satisfied the physical antecedents of the laws, and so no consciousness … In any case, as the universe developed, it came about that certain physical systems evolved that satisfied the relevant conditions. When these systems came into existence, conscious experience automatically accompanied them by virtue of the laws in question. Given that psychophysical laws exist and are timeless, as naturalistic dualism holds, the evolution of consciousness poses no special problem.”

But “the fundamental laws of physics” are not “eternal”—and this is not a way that a naturalist is epistemically allowed to think about the nature of “laws!” The “law of gravity” “exists” in the absolutely derivative sense in which “it” can be said to “exist” only just so long as the actual thing we call “space–time” exists with the actual properties residing within that actual thing by virtue of which it demonstrates the patterns of behavior which we label “the law of gravity.” But there is no “law of gravity” whenever a spacetime with those actual properties is not around—cue our quotation from Richard Carrier earlier: “The ‘law’ of gravity … ‘is’ in every place and time where the physical conditions that manifest gravity exist.”—the “law” of gravity is not an actual thing, but merely our label—applied after the fact—for the actual thing. So it does not “exist” when the actual thing which intrinsically causes those behaviors to manifest does not exist. And if we suppose that it does, we have to face the argument that a “law” of this sort could only be imposed upon the universe from outside. So likewise, “psychophysical laws” in the purely descriptive–recording sense (the only sense in which the naturalist can accept that any kinds of “laws” exist at all) cannot exist so long as consciousness is not already around for us to describe. Not unless these are the kinds of “laws” which are fundamentally and categorically unlike the other “laws” of nature in that they represent a “governing power” all in their own right!

To explore this, for a moment, even further, I quote from John Foster’s “Regularities, Laws of Nature, and the Existence of God.” Foster writes of a characterization of what it means for something to be a “law” which arguably does not apply to ordinary physical phenomena in general. But where Foster writes of the law of gravitation (which could be construed as holding in virtue of the intrinsic properties which the actually existing structure of the actually existing spacetime fabric in our world actually has), I will substitute talk of psychophysical laws—and it will be clear that Foster’s theistic construal of the nature of laws which arguably does not apply to “laws” of nature in general does clearly apply to Chalmers’ formulation of the psychophysical laws: “A [psychophysical] law of nature is a fact of natural necessity—the necessity of [psychophysical relationships] being regular in a certain way. But in exactly what sense does the relevant regularity count as necessary? Well, the first thing that needs to be stressed is that the necessity involved is not a form of strict or absolute necessity. The claim that it is a [psychophysical] law of nature that … [say, the chemical properties of opiates bring about the subjective experience of qualitative “happiness”] … does not imply the absolute impossibility of cases in which this regularity fails: it does not imply that there are no possible situations, of any kind [e.g. in Chalmers’ “possible worlds”], in which [the “psychophysical law”–like relationship between consciousness and physical states of the brain] does not behave this way… the [psychophysical] laws of nature (assuming they exist) are themselves only contingent. The law [which specifies that conscious experiences appear under x physical conditions] holds, let us assume, in the actual world.
But we can certainly envisage worlds in which it does not; and, in being able to envisage such worlds, we can also envisage worlds in which, in the absence of this law, the associated regularity does not obtain—worlds in which there are the same intrinsic types of matter as those which feature in the actual world, but in which [conscious experiences do not appear under these physical conditions]. … Being only contingent, [psychophysical] laws of nature are not forms of strict necessity. So in what sense are they forms of necessity at all? We want to say that [consciousness appears under x physical conditions because it has] to. But how are we to construe its having to if there are [possible worlds] where … [it doesn’t]?”

Foster writes here that giving a ‘naturalistic’ account of such “laws” is problematic because they are contingent—they “could have” been different. For a case like gravity, this argument can be undermined by saying that once the intrinsic nature of matter as it actually is in our actual world is what it actually is, and once the intrinsic nature of the fabric of spacetime in our actual world is what it actually is, there is nothing for a “law” to add to the picture. Therefore, no constraint is imposed on the world from outside—the “law” of gravity is fully explained by features contained fully within the actual entities within the actual world—and can therefore be wholly accounted for by giving an explanation of how matter and spacetime came to possess the actual intrinsic properties which they actually do within the world, with no additional explanation necessary of how this “law” is imposed upon that world from without. But the kinds of “laws” which Chalmers proposes here would require additional explanation after all the actual properties of all the actual things in the actual world have been described (and that is exactly the reason why Chalmers thinks he needs to posit them!) It is unbelievably ironic, in this context, that Chalmers borrows from Kripke the vocabulary of speaking about God having more work to do. He writes, for instance: “In general, if B–properties [first–person subjective facts about qualitative experiences] are merely naturally supervenient on A–properties [third–person objective facts about physical states] in our world, then there could have been a world in which our A–facts held without the B–facts.
As we saw before, once God fixed all the [facts about physics], in order to fix the [facts about qualitative conscious experiences] he had more work to do.” Indeed, these actually are exactly the kinds of “laws” that could only be “fixed” by an outside force demanding that the universe behave in this way and not that despite the entities already existing within the universe simply doing so in virtue of their own intrinsic traits—exactly the kind which have always led to the inference to a “lawmaker”—God—in traditional theistic thought.

_______ ~.::[༒]::.~ _______


But suppose you accept everything said up to here. And suppose, to try to come to terms with this scenario and remain as “naturalistic” as possible, you come to accept some kind of multiverse scenario on which a mechanism is responsible for bringing the contingent laws of nature about (or perhaps only the “psychophysical” laws since “laws of nature” in this sense still don’t exist at all)—with the particular form these laws end up taking being explained in terms of the details of whatever this mechanism is. Thus, we can have “laws” imposed on the world from outside—but this “outside” is, itself, another ultimately “physical” blind mechanism. Without considering the whole further extensive background of issues underlying this approach (even at 13,000+ words, I am striving for as much brevity as possible to get to the point!), allow me to assume for the sake of argument here that such an approach is otherwise plausible. The problem then would simply become pushed back yet another step—all the way to the origins of the universe itself.

Suppose we program a computer to blindly and randomly generate mathematical functions. With x’s and y’s and numerical values as the available ingredients on the left–hand side of the equations, we’ll get plenty of x’s and y’s and numerical values as “outputs” on the right–hand side of the equations. But once again, no equation that results from this kind of mechanistic process will ever produce anything like “x • y – 9(x + y) = {the subjective sensation of qualitative blue}”. How could it? The ingredients simply aren’t there in blind mechanism for specifying anything other than more blind mechanisms—and this is exactly the starting point which the “naturalistic dualist” took for his starting position to begin with, so it’s not a premise he can consistently just up and suddenly do away with here.
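The thought experiment can be put in the form of an actual toy program (entirely my own illustration; the function names are hypothetical, not anything from Chalmers). The point it makes concrete is that the generator’s “alphabet” of ingredients fixes, in advance, everything its outputs can ever contain:

```python
import random

# A blind, random generator of equations. Its only ingredients are
# variables, digits, and arithmetic operators -- so every output, no
# matter how many we produce, is just another arrangement of those
# same mechanical ingredients.
INGREDIENTS = ["x", "y"] + [str(n) for n in range(10)]
OPERATORS = ["+", "-", "*"]

def random_expression(depth=2):
    """Blindly build a random expression from the available ingredients."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(INGREDIENTS)
    left = random_expression(depth - 1)
    right = random_expression(depth - 1)
    return f"({left} {random.choice(OPERATORS)} {right})"

def random_equation():
    """Pair two random expressions into an equation."""
    return f"{random_expression()} = {random_expression()}"

# Run it as long as you like: the right-hand side can only ever contain
# x's, y's, digits, and operators. A term like "{the subjective sensation
# of qualitative blue}" can never appear, because that ingredient was
# never in the alphabet to begin with.
for _ in range(3):
    print(random_equation())
```

The sketch is trivial, but that is the point: no amount of iteration enriches the output vocabulary beyond the input vocabulary, which is what the essay claims about a blind “law–generating” mechanism and consciousness.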

Per the “naturalistic dualist’s” own starting premises, this still can’t be a way out. Whatever mechanism of quantum fluctuations at the core of the multiverse generates universes, per the very premises the “naturalistic dualist” started out by accepting, it would have to be capable of knowing what consciousness is in order to specify consciousness within its mechanisms—in other words, it would necessarily have to be conscious (if the anti–reductionist arguments succeed, then you can’t even infer the existence of experiential facts from any number of “physical” facts)—it would have to be more like God than a blind, unconscious quantum Singularity. Hence, even if we could accept a naturalism on which “laws” are truly “ordering powers” in their own right, imposed upon the Universe from without (in some quantum void at the center of the multiverse, say), a blind mechanism for the generation of these “laws” still couldn’t account for a “law” which specifies a relationship between otherwise blind mechanisms and something that is neither identical to nor composed of blind mechanisms at all. To make dualism properly “naturalistic” will require some other approach.

The naturalist can’t have “laws” of this sort at all; and yet, even if he could, he still couldn’t account for how the laws would specify consciousness in particular within their equations if the “lawmakers” were blind mechanisms lacking consciousness—for exactly the same reasons the anti–reductionist arguments pushed him to hypothesize the existence of “psychophysical laws” in the first place. Only God could account for the existence of “laws” of this kind in the first place; but even if some sort of multiverse–generator could account for them, the “multiverse–generator” would still once again have to be something more like God than a mechanism to account for how “laws” of this general kind could possibly specify consciousness within their equations specifically.

_______ ~.::[༒]::.~ _______


There is, however, one incredibly ironic but simple approach which I think obviously works.

And that is to suppose that “psychophysical laws” are “empirical laws” which merely consist of us “registering the occurrence” of actual events in the world between actual entities, and not “causal laws” which “rule the genesis of events” and are actually causally “instrumental in bringing about accord with themselves.” How?

By supposing a substantialist view on which, rather than the psychophysical law bringing it about that under x physical conditions, y phenomenal property is contributed to a conscious experience, “the psychophysical laws” are just our descriptions of what things do in and of themselves in virtue of their own intrinsic properties and traits. This necessarily entails that consciousness itself is already a fundamental thing within the world: the so–called “psychophysical laws” are not actual “things” in the furniture of the world—consciousness itself is. The so–called “laws” are just our after–the–fact descriptions of what we observe of its behaviors in relation to the world.

Again, Bede Rundle: “Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” Likewise, psychophysical laws can describe how consciousness (the subjective stream of intentionalistic thought and qualitative experiences) relates to the rest of the world, but these “laws” are merely our descriptions of consciousness itself—just as the “law” of gravity is merely our description of how the fabric of space–time itself actually behaves, not in virtue of ‘orders’ from some external law, but in virtue of the intrinsic, tangibly existent properties of the actually existent fabric of space–time itself. This clarification of the nature of “laws” can—ironically—save us from the entailment of inescapably theistic conclusions in the case of consciousness just as much as it can in the case of something like gravity. The only reasonable position can be that the “laws” are our description of what the thing which we call “consciousness” does—rather than the reverse, implied by Chalmers’ equivocations around the idea of “law,” that “consciousness” is our label for what the “laws” somehow force to happen.

_______ ~.::[༒]::.~ _______


At least, that could work for what we might call the “day to day” operations of consciousness. Once a given stream of consciousness is already in place, we can say that a “law” is merely our description after the fact of how “the conscious self and its brain” themselves interact in virtue of what they themselves intrinsically are. But this brings us back to why the problem Chalmers responded to earlier—“how [might] consciousness [have] evolved on a dualist framework…[?]”—was problematic to begin with: “did a new element suddenly pop into nature, as if by magic?” The problem should now be somewhat more obvious: if the Hard Problem is real, then consciousness is neither “identical to” nor “composed of” (a relationship which could also be labeled with phrases like “emergent from” or “reducible to”—phrases which appear to describe a different kind of relationship, but do not, and merely label the same relationship somewhat differently) mechanical procedures which lack consciousness. If that is so, then nothing about the evolution of these mechanical procedures themselves can account for why the entire stream of consciousness suddenly “pops” into being as a “new element … as if by magic.” Nothing happens, in any physical terms, during the series of blind mechanical events between a blind, unconscious sperm making contact with a blind, unconscious egg and carrying out a mechanical procedure by which DNA mechanically directs the program of building a physical input–output system which could possibly explain why subjective streams of qualitative experience suddenly appear—if Chalmers is correct up to here (and I think he is, and defend that position elsewhere), nothing about the mechanical programming of DNA as a structural, physical entity can explain why that mechanism in virtue of its mechanistic properties brings about a conscious being instead of a human zombie who is just as unconscious as the original sperm and egg, or the microparticles making these up, themselves—and
that’s exactly why we thought we ought to consider postulating a new type of extra, additional “laws” to account for why consciousness does appear to begin with. Again, this follows from precisely the same anti–reductionist arguments which we took as our very starting point in the first place: just as the blind mechanism of particles coursing through space cannot be literally identical to or constitutively compose a first–person–subjective qualitative experience, so it follows from this very same exact starting premise that they cannot possibly be said to bring the entire stream of first–person–subjective qualitative experiences into being in virtue of what they intrinsically are, in and of themselves—not unless, contra the core starting point of Chalmers’ entire position, conscious experiences and intentionality can be given a “deflationary” explanation and there is no “Hard Problem” after all.

But any “law” of the kind necessary to bring about subjective streams of qualitative experience (and intentionality) in this situation would, once again, be equivalent to a law that says “every time the 8 ball falls into the left corner pocket, an orange angel is born in a newly created Heaven”—it would propose to “connect” events which by all admission simply have no intrinsic connection in virtue of what the entities related by this “connection” actually are in and of themselves—by the very same exact premises that got us here in the first place. Recall Bede Rundle once again: “Newton’s laws can describe the motion of a billiard ball, but it is the cue wielded by the billiard player that sets the ball moving, not the laws. The laws help us map the trajectory of the ball’s movement in the future (provided nothing external interferes), but they are powerless to move the ball, let alone bring it into existence.” Or, if such a “law” could do that, then it can’t be the kind of “law” whose existence the “Naturalist’s” worldview can with any plausibility allow him to accept.

So it goes, likewise, for consciousness: we couldn’t possibly have that sort of “law” exist in such a way that it itself, as a “law,” as an actual thing “ruling the genesis of events” as an “operative power” in its own right, could suddenly bring streams of consciousness into existence under particular conditions which, in virtue of what they are in and of themselves, have no intrinsic power to do such a thing at all (and this is precisely the reason we saw the need to posit this kind of “law” to begin with)—unless a conscious “lawmaker” who “formulates such laws and ensures that the [world] conforms to them” were our explanation for the origins of such a “law.”

However — recall the atheist response to the Kalām cosmological argument which I endorsed as by far the most plausible: what was our most effective option for avoiding the argument that an explanation for the origins of the physical universe would require God? We did it by denying that it is necessary for time to have had an origin—by denying that either cosmological science or philosophical paradoxes render the idea of an infinite past—whose existence we, notably, cannot “empirically” confirm!—invalid or implausible. The same approach is available here.

The same approach which the atheist plausibly takes in the context of the cosmological argument—making the inference to the empirically unconfirmable conclusion that the Universe’s temporal past (defining “Universe” in such a way that it may include much more than our four–dimensional region of it which appears to have formed at the moment of the Big Bang) must be eternal—can be used here to avoid the kinds of “laws” that only God could account for as postulates to account for the coming–into–existence of uniquely individual streams of experience.

What would the approach entail in this context? It would entail that when we speak of the “psychophysical laws,” in particular, which specify the conditions in which a unique stream of consciousness comes to be—even in these cases we still are simply creating descriptions and labels after the fact about how consciousness itself—in virtue of the actually existing traits of the actually existing phenomenon of consciousness itself—inherently behaves.

In other words, it would have to follow that we are not identifying a “law” which, by its own “ruling” power, brings that conscious stream into being. Rather, for any actually–existing consciousness to interact with the rest of the physical universe at the moment at which a given physical organism and stream of consciousness begin the process of interaction, the stream of consciousness itself would have to already pre–exist as an actually–existent phenomenon, in order that it could be a thing whose behavior our “laws”—which are not antecedent powers but merely consequent descriptions of phenomena themselves—consequently describe. And notice, too, that the same implication would follow with respect to any “psychophysical law” which described the cessation of a stream of consciousness upon biological death of the brain: nothing in these physical events themselves could possibly intrinsically account for the cessation of that stream, any more than the 8 ball falling into the right corner pocket could intrinsically account for a blue angel in an alternate dimension dying. Thus, we could not possibly have the kind of law which would specify that the stream of consciousness must in fact cease at death, either—not without a “lawmaking” God.

If (1) laws which truly “govern” rather than merely describe the behavior of Nature—particularly when such laws are supposed to include subjective consciousness within their equations—cannot exist without a conscious “Lawmaker,” and (2) consciousness (first–person subjective qualitative experience and intentionalistic thought) cannot be reduced to blind physical mechanism (and panpsychism is not a way around the need to get consciousness from blind physical mechanism), then it follows from these two simple premises that either (A) God exists, or else (B) the stream of consciousness is eternal. For my part, I cannot find any way to plausibly or coherently reject either (1) or (2). This leaves me in the rather awkward position of having to either take an option I find literally incoherent, or else face up to accepting either (A) or (B). Yet, ironically, it seems to me that (B) is where the only salvageable attempt to make dualism “naturalistic” inevitably ends up. This turns out to be exactly the worldview mentioned inadvertently in some of the opening stages of this series—that held by Samkhya–type schools of Hinduism, or by the somewhat more personalist varieties of Buddhism, in which there is a kind of mind–body dualism which allows for the possibility of reincarnation—without theism. Is it possible these are actually plausible worldviews?

Now that the groundwork has been laid, get ready for this series to finally start getting truly “crazy.”

Consciousness (X) — The Nature of Scientific “Explanation” and the “Problem” of Interaction, pt.1

[Note: Also a crude rough draft that eventually needs more refinement than this.]

When we say that scientific analysis of a given physical phenomenon “explains” it, what is it that we mean? Let’s take the analysis of the behavior of “water” in terms of chemistry as a paradigmatic example. When we speak of the behavior of “water”, we have in mind such phenomena as the fact that an object of sufficient density placed on the surface of water, for example, will “sink.” Now, to give a scientific “explanation” of this phenomenon is to say that molecules of H2O bond together quite loosely, so that when collections of molecules more tightly bound come into contact with it, these more tightly bound molecules are capable of slipping through the gaps of space between molecules of H2O. [1]

Now, how then are the properties of H2O to be explained? Once again, the behavior of H2O must be “explained” in terms of the structural properties of the composing entities one level down: in this case, atoms. And how are the properties of atoms to be explained? Once again, … you get the picture. Physical explanations work, in essence, by ‘zooming in’ on any given phenomenon and going one level of reality ‘down’. Eventually, however, it stands to reason that we’re going to have to find the “rock bottom”—the unit which is, finally, indivisible into anything other than itself. For some time, we thought that these would be atoms—“atoms” were given the name, in fact, precisely because they were believed, at first, to be indivisible. However, we later discovered that they were divisible into the particles we call protons, neutrons, and electrons—and with the advent of particle physics, we discovered that protons and neutrons are themselves composed of quarks. The most basic particles currently known are classified into fermions (a category which includes quarks and electrons) and bosons. These are regarded as “elementary particles,” which means that we don’t know yet whether they represent “rock bottom” or are composed of any more basic particles—but for now, until we find something more basic, we can only assume that they are, in fact, rock bottom.

But suppose, as we investigated the properties of water, it had turned out that atoms themselves were simply the “rock bottom”—that atoms themselves had in fact been the ultimate, most indivisible thing in existence. Would that be ‘weird?’ Suppose it turned out that it was H2O. Would we feel like there was some mystery left over that we simply hadn’t, and couldn’t yet, explain about the world? How long would that continue to ‘bug us’ before we would give up and accept that H2O is ‘as far down as it goes?’ We can go even further with this thought experiment: what if it had turned out that the “rock bottom” thing was simply water itself? To say that water itself was the “rock bottom” thing underlying the behavior of water would simply be to say that no matter how much you try to divide water up, all you would get are still yet more units of water. Suppose that had been what we discovered. Would we be unsatisfied? Would we feel that something was fundamentally missing in our ability to use scientific investigation to understand the roots of the properties of water? I think it’s obvious that we would. We would be missing the only kind of “explanation” that science as we know it allows! We would just have to take the way water acts completely for granted, and there would be nothing else left for us to say about it to elucidate anything about “why” it behaves as it does. That would leave us in a particular kind of state of ignorance.

Yet the point we have to realize is that a world in which our ultimate “explanation” of the behavior of water simply did in fact have to stop with water itself is ultimately no different from the world we are in. The notion that the fact that water divides into units of something other than “water” (molecules), and that these units (molecules) divide into units of something other than “molecules” (atoms), and that these units (atoms) divide into units of something other than “atoms” (protons, neutrons, electrons)—and so forth—makes any of it less ultimately inexplicable is simply an illusion. We don’t understand the ultimate nature of our world any more than we would have if water itself had simply turned out to be the most indivisible thing composing “water”—and at least so far as scientific investigation can take us, we never will. Science can explain the relationship between the properties that a given physical entity has and the properties of the component entities that make that first entity up—but whatever the ultimate properties are, they can’t, by definition, be “explained” in this same way. And this is just as true whether the most basic entities are quarks, atoms, or even “water” itself. In the world where the most indivisible units of water turn out to be water, “Why does ‘water’ do what it does?” remains a mystery. In the world where it turns out to be the case that water divides into more primitive units, which we call molecules, “Why does ‘water’ do what it does?” acquires an answer—but only at the expense of leaving us with the question, “Why do molecules do what they do?” which now remains every bit as unanswerable as the question “Why does ‘water’ do what it does?” was in the last world.
And in the world where it turns out to be the case that molecules are divisible into more primitive units, which we call atoms, “Why do molecules do what they do?” acquires an answer—but only at the expense of leaving us with the question, “Why do atoms do what they do?” which now remains every bit as unanswerable as the question “Why do molecules do what they do?” was in the last world, and as “Why does ‘water’ do what it does?” was in the world before it.

The point here is much deeper than the mere fact that we can keep asking “why?” indefinitely. The point is that we naively assume that scientific explanation gets us further into “understanding” reality than it actually does. The ultimate situation we are in with regards to understanding reality in any of these scenarios ends up staying essentially exactly the same. If something turns out to be composed of divisible parts that have different properties from the composed whole, then we can “explain” how the properties of the pieces at the lower level necessarily result in the properties that we see at the higher level (e.g., we can learn that the properties of H2O molecules dispose them towards forming weak molecular bonds, and we can explain that these bonds necessarily result in a substance that things can “sink into” as groups of other molecules slip between these gaps even though nothing “sinks into” individual molecules of H2O). And we satisfy ourselves that this counts as some meaningful clarification of the nature of reality itself.

But what we fail to take account of is that if we postulate that all “explanation” must be of the properties of some whole in terms of the differing properties of some underlying compositional parts, then we eventually reach the “rock bottom” behind which there is no underlying compositional part—no matter what. And no matter when we reach this point, the situation is ultimately no different than it would have been had water itself simply been the thing that could not be divided into any more basic units than “water.” I expect that most of us generally feel that if we had been in the world in which “water” was not divisible into anything more basic than “water”—where these most basic parts were clear and wet, allowed other objects to “sink” into them, and looked, felt, and behaved exactly like the “water” we observe—so that the only thing left for “science” to do was to catalogue the behavioral properties of the same “water” anyone can see with his naked eye, with no room for “science” to clarify anything else—then this would be a world in which we “understood” less about the physical reality around us.

But why should the fact that water just so happens to divide into parts with different properties from water itself render our “understanding” of the nature of physical reality any deeper? We still can’t explain the behavior of those parts. If we think that what explanation consists of is reducing the terms of one event seen at one level of “zoom” into the underlying terms of another level, then eventually we reach the level that is supposed to be the one explaining all the rest—and once we get to that one, we’re simply going to have no idea why it behaves the way that it does.

Scientific accounts can allow us to explain a in terms of b, and b in terms of c, and c in terms of d, … but once we get down to z, we have to stop there and take the behavior of z for granted and accept that z must necessarily remain absolutely unaccounted for. This is really not fundamentally any different from the situation in which a is explained in terms of b, and b in terms of c, and c in terms of d … but at d, we have to stop there and take the behavior of d for granted and accept that d must necessarily remain absolutely unaccounted for. And it isn’t really fundamentally any different from the situation in which we simply have to take the behavior of a for granted and accept that a must necessarily remain absolutely unaccounted for. The only real differences between these scenarios are “How big are the most indivisible pieces of reality?”, or “How many times can we zoom in and find parts that individually have different properties from the thing we’re zooming in on?” But no matter how big the indivisible pieces are, and no matter how many times we can zoom in, the best answer “science” can give about the ultimate nature of physical reality is:

John R. Ross, in his 1967 Constraints on Variables in Syntax, tells a version of a story in which: “After a lecture on cosmology and the structure of the solar system, William James was accosted by a little old lady. “Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it’s wrong. I’ve got a better theory,” said the little old lady. “And what is that, madam?” Inquired James politely. “That we live on a crust of earth which is on the back of a giant turtle.” Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position. “If your theory is correct, madam,” he asked, “what does this turtle stand on?” “You’re a very clever man, Mr. James, and that’s a very good question,” replied the little old lady, “but I have an answer to it. And it is this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him.” “But what does this second turtle stand on?” persisted James patiently. To this the little old lady crowed triumphantly. “It’s no use, Mr. James—it’s turtles all the way down.””

Scientific “explanation” of the nature of the physical world is “turtles all the way down”—right up until we suddenly reach the bottom turtle, and find that this one—the one on which all of the others are resting—is simply floating in mid–air, bearing the weight of all the others with the help and support of nothing. This is, quite plainly, no less mysterious than the case in which the only thing we ever had to begin with was a single levitating turtle. In fact, finding more turtles resting on that bottom turtle doesn’t make the question of the levitating turtle less mysterious—it makes it more so, because it not only keeps the question of how the turtle is levitating intact, it adds the question of how all these other turtles could be resting on a levitating turtle with nothing holding up the whole lot of them.

Now, the philosophical position I’ve been striving to give comprehensive defense to throughout this series has been that consciousness—the first–person subjective stream of qualitative experience and intentionalistic thought which we all experience the world exclusively in and through—can neither be eliminated, nor called “identical to” anything other than itself, nor “built into” the physical world along the lines of panpsychism, nor considered causally “epiphenomenal” with respect to the external world. This process of elimination entails that consciousness is therefore both something as fundamentally different from other physical phenomena as gravitational forces are from strong nuclear forces, and yet something which nonetheless somehow causally interacts with at least some of those other forces.

In physics, gravity, electromagnetism, and the strong and weak nuclear forces are called “fundamental forces” because they appear not to be reducible either to each other or to any other more basic kinds of force. In other words, when we come to these four kinds of forces, we have to accept that this is simply the point where we must be resigned to say: “shit do what it do”—because there is simply in principle nothing else that can be said. And as we saw above, even if two or more of these kinds of force are reducible to something else, then this can only be in terms of some other kind of force which will now have to be the “basic” one which we cannot in principle provide any account of, beyond simply cataloging what it does and then taking this catalog for granted as a primitive observation about how the world operates. In other words, no matter what the details are, at least something has to be “basic” in the sense that it is both irreducible and ultimately inexplicable (because there is nothing more basic which it can be explained in terms of—that is, “reduced to”).

Furthermore, given our current state of knowledge, we think that there are at least three or four such “basic” kinds of irreducible forces operating in the world. Even if this state of knowledge should eventually be overturned, and all four of these forces unified into a description in terms of some other singular underlying force, this still suffices to show that it is not inconceivable that there should be more than one “basic” kind of irreducible force, different in fundamental ways and yet still causally interacting with each other within the same singular universe.[2]

Now, the “interaction problem”—the question of how a “nonphysical” mind could interact with a “physical” world—is often taken to be the most disastrous, fatal problem for this kind of position. There are more subtle and complicated forms of the “interaction problem” which I may discuss later, but in its most basic form the “interaction problem” aims to reject dualism by simply expressing incredulity that things that seem to be defined by such different kinds of properties could possibly interact with each other, in principle.

Part of the problem here is simply linguistic. Up to the point in history when we discovered that electromagnetic forces were not reducible to atomic interactions, but instead constituted a whole new fundamental category of things in the world in their own right alongside atomic forces themselves, our definition of what it meant to be “physical” was encompassed by the properties we had noted by observing interactions between atomic particles. When we included electromagnetism into our account, however, we didn’t call electromagnetic forces “nonphysical”—even though it is plain to see that there is in fact a sense of the word “physical” in which electromagnetic forces are “nonphysical!”—rather, we expanded our definition of what it means for a thing to be “physical.” Why shouldn’t we do something similar here? The very word “dualism” itself reinforces the notion that this is off–limits, of course, suggesting as it does a “du–ality”—an etymological root which implies a specifically two–fold distinction—between “things” which are “physical” and “things” which are “non–physical.” But why think of this in terms of “du–ality?” Why not think of it in terms of plurality? In other words, why not think of the phenomenon of consciousness and consciousness’ capacity to interact with other parts of the world standing right alongside the phenomenon of gravity and gravitational forces, right alongside atoms and strong and weak nuclear forces, and so on, all as equally irreducible categories of ‘kinds of things the world turns out to contain’ in their own right? Why not think of a multitude of phenomena existing and interacting within the world, each containing different fundamental properties, some overlapping and some not?

The problem of providing a clear definition of what it means for a thing to be “physical” without creating a straw notion of physicalism is often taken as a hurdle for attempts to refute the philosophical position of physicalism—but it is just as much a hurdle for attempts to establish physicalism as true over and against alternative positions which could reasonably be called “dualistic”: a definition of “physical” must suffice to rule out the possibility of dualism turning out to be true in a non–question–begging way every bit as much as it must suffice to rule out the possibility of physicalism turning out to be true, and physicalists rarely if ever perform better at this task than dualists. If the physicalist asks the dualist how to define “the physical” such that consciousness couldn’t be a “physical” process without begging the question, then the dualist has every right to turn exactly the same question around and ask the physicalist how to define “physical” such that irreducible consciousness couldn’t possibly qualify as “physical” and therefore be allowed to exist, in the way that the dualist believes that it does, without begging the question.

But it should be clear to see, in any case, how thinking of the phenomenon of consciousness in this way renders the “problem” posed by standard forms of the “interaction problem” moot. When it comes to any “basic” irreducible force or phenomenon in the world, the question of “how” it does what it does is always, in principle, necessarily mysterious. How does the gravitational force cause objects to gravitate towards large bodies in space? If your answer is that large bodies in space create curvature in the fabric of spacetime, then the question simply becomes: How does a large body in space cause the fabric of spacetime to curve? Again, the point is not merely that we can keep asking “Why?” forever. The point is that there necessarily must be some actual point at which the only question left to ask literally has no conceivable answer other than “that’s just how we observe that phenomena in the world behave”—at which point the Why–asking must stop because it cannot, in principle, receive an answer—and the only open empirical question left is simply, “When have we reached that point?”—“Is this the inexplicable rock bottom, or does rock bottom lie somewhere further down?” And we reach this point necessarily any time we talk about the most basic actions and behaviors and properties of the most basic kinds of forces and entities in the world—whatever they may turn out to be.

In other words, the kind of account which the physicalist demands the dualist give to justify the claim that interaction between consciousness and the physical world could occur is one that no one can give for any phenomenon in the world whatsoever—yet, if consciousness is indeed itself just such a “basic” phenomenon, this is exactly the situation that should be expected. As James Moreland writes, “One can ask how turning the key starts a car because there is an intermediate electrical system between the key and the car’s running engine that is the means by which turning the key causes the engine to start. The ‘how’ question is a request to describe that intermediate mechanism. But the interaction between [consciousness] and [the brain] may be … direct and immediate. [And if] there is no intervening mechanism, [then] a ‘how’ question describing that mechanism does not even arise.”

The problem with this “intuitive” version of the “interaction problem,” I think, quite simply results from the fact that we can’t take a third–person view on someone else’s consciousness and visualize their subjective intentions playing a causal role in their ensuing physical behavior, in the way that we can at least visualize one billiard ball bouncing into another from the third–person point of view. Yet, I think Descartes himself adequately addressed this version of the problem all the way back in 1641: “At no place do you bring an objection to my arguments; you only set forth the doubts which you think follow from my conclusions, though they arise merely from your wishing to subject to the scrutiny of the imagination matters which, by their own nature, do not fall under it.” In other words, we can’t visualize conscious streams of experience existing in any way from the third–person point of view. And the key issue underlying this fact is one that is universal to all positions in philosophy of mind whatsoever—it is, in fact, exactly what makes consciousness seem mysterious in general, no matter what metaphysical view we take towards its ultimate nature: namely, physical properties as such and conscious experiences as such seem wildly unrelated no matter what “theory” we suppose for understanding their relationship.

If the world, at root, is a causally closed process of physical properties following patterns of inert cause and effect on other physical properties, then why the hell should experiences even squirt out of that epiphenomenally? How the hell does anyone ever get the idea into their heads that this doesn’t face an “interaction problem” of its own? It just postulates that the interaction goes in a single direction: from the physical to the experiential. But either interactions between the physical and the experiential can happen, or they can’t. If they can’t, then epiphenomenalism is ruled out as a conceivably true option every bit as much as dualism. And if they can, then there is no reason in principle why dualism couldn’t be true. So anyone who thinks it is even conceivable that epiphenomenalism could be true—and many physicalists are willing to grant that it could be, as a last resort—has no valid recourse to this “intuitive” version of the “interaction problem.” Whether we can only walk across it from left–to–right or we can walk in either direction we choose, a bridge is a bridge. And if we can walk across a bridge from left–to–right, then there can’t be any reason in principle why we couldn’t conceivably walk across it from right–to–left. It’s as if the physicalist who considers the possibility that epiphenomenalism might be true is an atheist who finally goes so far as to say, “Alright, God exists. But God can look into our world from His—we can’t ever travel over to His, in principle! So there still can’t possibly be a Heaven or a Hell!” Would anyone ever consider calling this “Atheism, Or Something Near Enough”? [3] Wouldn’t we think it demonstrated something about the inherent weakness of atheism itself if atheists were, in any significant numbers, finding themselves compelled to retreat to this kind of position?

What we’re actually dealing with here is simple bafflement upon trying to imagine the two phenomena interacting. But is the idea of your subjective experience of the qualitative taste of a strawberry in and of itself playing a causal role in your later description of what the strawberry tasted like any more difficult or bizarre to imagine than the idea of a series of blind physical particles moving in space literally composing a subjective experience of the qualitative taste of a strawberry? I don’t think so. The intuitive weirdness of interaction can’t be any reason to weigh the scales against dualism if every picture we could possibly imagine of how consciousness and the physical relate is overwhelmingly weird to intuition. We especially can’t do so if we arrived at the hypothesis of dualism by a process of elimination composed of a series of arguments in which we found deductive reasons to reject alternative attempts—which is how I arrived at it. (So even if my reasoning in those steps turns out to be flawed at some point, those are the steps around which the whole issue pivots—and the “interaction problem” simply contributes nothing new that is important to the question.)

When I observe the properties of gravity, I take it for granted as a brute fact that a given equation describes the relationship between the mass of an object and the gravitational force it exerts—and how gravity causes an object to move remains inexplicable in principle. If I later discover that this works by objects influencing the curvature of space in proportion to their mass, then I take it for granted as a brute fact that an object of a given mass influences the curvature of space in a given degree—how an object of a given mass causes space to curve remains inexplicable in principle. It is, again, not simply that we can keep asking “Why?” indefinitely and eventually have to stop in order to move on to doing something with our knowledge—it is that knowledge itself necessarily reaches a brute stopping point in principle as soon as it arrives at the most basic behaviors and properties of the most basic entities that there are, and what those entities and properties are simply remains an empirical detail to be fleshed out.

Suppose I suggest that the first–person subjective phenomenon of consciousness is itself one of them. And suppose I suggest that it is a brute fact about consciousness that it possesses the property of intentionality—of being intrinsically “reflective of,” or capable of “representing to itself” or “directing itself towards,” an external world—and that, as one of the “basic” actions of consciousness, subjectively created and experienced intentions can influence my objective physical behavior. Then, so long as I do not try to visualize this in terms that would only be appropriate for mechanical interactions between blind physical particles in the first place, there is simply no real conceptual problem here: the fact that intentions influence behavior should be treated as a basic datum of first–person awareness in exactly the same way that the fact that the mass of a physical object creates gravitational forces by influencing the curvature of the surrounding space should, and I no more need “an account” of how the former happens in order to be fully justified in believing that it does than I do in the case of the latter.

And herein lies one significant dialectical difference between interactionism and “identity theories,” etc.: the fact that there is a relationship between physical states and experiences—from the looks of things, in both directions—is a direct datum of experience. The claim that the physical brain, composed of parts which are non–qualitative, non–experiential, and non–intentionalistic, generates or is identical to consciousness itself (qualitative experience and intentionalistic thought) is not. So first, we do not have the same prima facie justification for believing that the production of consciousness by the brain happens (or that consciousness is “identical to” the physical brain) that we have for the fact of interaction in the first place. But second, the physicalist does not have the option of making these “brute fact” posits in the way that the dualist does—the very definition of the physicalist claim rules this option out: quite simply, consciousness is not a “basic” entity within the physicalist’s scheme, and it therefore cannot be said to possess “basic” properties. So the physicalist actually holds a burden of “explaining” the appearance of consciousness in a way that the dualist does not—because according to physicalism, consciousness is a secondary, derivative phenomenon, which therefore, according to the physicalist scheme itself, must in fact have an explanation in some other terms—hence the Hard Problem. Thus, there is simply no special burden on the dualist to provide an account of interaction—just as there is no special burden on someone who proposes that gravitational forces are the curvature of spacetime to explain how an object’s mass “causes” spacetime to curve. There is a burden to motivate dualism—just as there is a burden to motivate the claim that gravity does in fact work by causing curvatures in the fabric of spacetime—and I think that this burden can be met (as it can for gravity).

Notice that this is exactly why the gravitational force itself is currently considered to be a “basic” fundamental force in the universe, and why science places the burden squarely on whomever wants to propose a “theory of everything” that reduces gravity to some other, more basic force: the claim that gravity is reducible needs to be demonstrated before we can be justified in believing it. And until then, there is simply no a priori reason to suppose that it has to be reducible—no a priori reason that gravity can’t just be the “basic” phenomenon itself—so we can’t assume that a “theory of everything” will necessarily succeed until someone actually spells out the details of how the forces we know and currently consider fundamental reduce to some other. This is the most reasonable way to think—yet it is a rule that we violate flagrantly when it comes to consciousness, usually with appeals to some notion of “parsimony.” But—rightly—no one reasons in the same way when it comes to gravity. No one says that it is more “parsimonious” to assume that a “theory of everything” reducing gravitational forces to some other, more basic force must be true. And certainly no one does so merely because the notion of discrete matter interacting with space interacting with time is too weird to accept. The burden is squarely upon the “reductionist” to actually perform the work of demonstrating how gravity “reduces.” And until then, we rightly assume that it doesn’t (pending further notice, in the form of an actual demonstration of how it does).

When the physicalist supposes that consciousness is not in fact a basic phenomenon—that instead of being analogous to the brute existence of gravity as a basic, fundamental force, it is analogous to the behavioral properties of water, which exist solely in virtue of the very different behavioral properties of molecules of H2O (and which water could be described equally as either “identical with,” “emergent from,” or “reducing to”)—he thereby does in fact create a burden of providing an explanation of “how” consciousness appears in the course of this process: because the physicalist himself is the one making the supposition that consciousness appears through some intermediary process. The mechanism of its appearance must therefore be explained—because the physicalist himself, if he is not an outright eliminativist (or panpsychist), is the one supposing that there is a mechanism mediating the process whereby ingredients which are not conscious somehow become conscious, and that mechanism should, therefore, be explicable. And it is immediately unclear how these unspecified mechanisms (which somehow produce something radically unlike themselves in kind, per any definition of the physical and the experiential besides that of the panpsychist, who supposes that consciousness utterly pervades the physical world, or that of the eliminativist, who supposes that no one actually has any subjective experiences or intrinsically intentional states at all) are supposed to be more “parsimonious” than the posit that consciousness itself is simply fundamental—just as it is clear that a “theory of everything” is not a priori more “parsimonious” than the posit that gravity itself is simply fundamental. Especially so when we have no idea what the supposed mechanisms are, how they work, or how they could even conceivably relate items so radically different in kind (per everyone’s definition but the panpsychist’s or the eliminativist’s).

But this opens up a further, even stronger argumentative possibility that does not exist against the posit that consciousness itself is a “basic” phenomenon in the world: namely, that we might pose a successful argument to defeat the claim that such a mechanism could ever in principle succeed in doing what is claimed for it, for all the reasons summarized here and explained in more detail in essays (IV)–(VII) of this series—in short, because if the physicalist posits that all that the world contains at root is mechanism, then this supposition is incapable in principle of predicting anything other than further mechanisms—and a description of the qualitative nature of subjective experience and the intentionalistic nature of conscious thought simply can’t be built up to through descriptions of the mechanisms that happen to accompany experience and thought (a summary this brief can’t come anywhere close to doing the argument justice), any more than there is some special way of drawing lines on a flat two–dimensional canvas that can build up to a fully–fleshed three–dimensional object. In just the same way that the very nature of a three–dimensional object includes a category—the third dimension—that can’t be “reached” in principle through the two dimensions provided by the canvas’ surface, so consciousness includes categories (subjectivity and intentionality) that can’t be “reached” in principle through the mechanisms provided by the physicalist’s “objective” blind mechanical processes.
By comparison, there is no reason in principle why interaction between consciousness and the physical world cannot occur—the question commits a category error in its very asking, essentially similar to asking how an object’s velocity at one moment causes it to keep moving through space in the next moment—and it draws its intuitive force simply by asking us to imagine interaction in an inappropriate way: from the third–person perspective, which is exactly where the dualist suggests that consciousness cannot be seen in the first place.

Relocating the act of imagining to the first–person perspective, there is no intuitive problem: I set an intention to move my hands, and they move. This is just as direct a piece of data in my immediate awareness as any data about any external phenomenon like gravity could ever possibly be. Justifying the claim that conscious experiences and physical particles are “identical” would take a lot more than simply insisting that we can’t visualize that sort of interaction taking place from the third–person stance—when the whole core of the dualist insight to begin with is precisely to see that the entire phenomenon of consciousness can’t be found “from the third–person stance” in the first place. If the inability to visualize a process were enough to defeat a position in philosophy of mind, then my arguments against physicalist views would become even stronger, because as they stand they’re based on positive arguments that we can’t visualize mind–brain “identity” because the claim is incoherent for principled reasons. If all it takes for a successful argument is difficulty with visualization, then these other arguments go well beyond the burden required of them, and physicalist views would be all the more disqualified at exactly the same time by the same stroke.

The second part of this post will discuss a refined version of the argument against the possibility of interaction which presents a much greater threat—and indeed, may be the one and only real empirical threat that dualism has ever actually faced. In discussing his view of the problem interactionism poses, Dennett devotes just two sentences to this aspect of it, though I consider it by far the most significant: “A fundamental principle of physics is that any change in the trajectory of any physical entity is an acceleration requiring the expenditure of energy, and where is this energy to come from? It is this principle of the conservation of energy that accounts for the physical impossibility of ‘perpetual motion machines,’ and the same principle is apparently violated by dualism.” This line of argument actually attempts to take specific principles which seem well justified by our scientific observations of the world and argue that interactionism requires that we haphazardly violate them. A premise like this could actually provide well–motivated grounds for offering specific empirical reasons why a “naturalistic” approach to understanding the nature of human consciousness and the relationship between the “mind” and the brain cannot be “dualistic”—reasons which go beyond mere verbal sleight–of–hand through question–begging definitions of terms like “natural” and “physical.”

We’ll see in a future post how this much more troubling version of the argument fares.

  _______ ~.::[༒]::.~ _______

[1] This is a simplified example, since only one illustration is needed for the purpose of demonstrating the underlying point we’re discussing here.

[2] See here for an overview of scientific attempts to achieve this. There is no currently plausible grand unified theory—grand unified theories attempt to unite electromagnetic with strong and weak nuclear forces, while leaving out gravity—the goal of a “theory of everything” which incorporates gravity into the analysis presently seems even more implausible to achieve. It may eventually happen; but again, there is simply no a priori reason to assume that it necessarily must.

[3] The materialist–turned–epiphenomenalist Jaegwon Kim’s book is titled “Physicalism, Or Something Near Enough.”

New EP: “In Appreciation of Limits”

A lot of this recording is rough, as I’ve been too rushed to really sit down and perfect any of these songs, so I allowed a fuck of a lot of small errors to stay in, and this is basically a way–more–lazy–than–I’d–prefer “first take.”

Honestly, once the first track starts getting weird and too quiet, … just skip to the third.

The first track starts with a brief 40–second sample illustrating my basic style of song–writing: simple guitar parts that overlap in interesting ways to create something that feels more intricate than the sum of its parts. I like the parts to be simple enough that I can hear them all individually, but mesh together in a way that makes listening to all of them at once a more involved, meditative sort of act. Eventually (if the urge strikes) I may expand that basic template into a more developed song. I really, really love the feel of it and I’m probably going to expand on it whenever I get a decent chance to sit down and really work it out. (For now, I’m sending my recorder across the country ahead of me before I take off on my own journey to where it’s headed.) The rest of the track after that 40 seconds verges into an alternate–tuning transcription on acoustic guitar of The Weeknd’s “The Town” (actually, when I recorded the brief demo, I didn’t realize this was on the rest of the track. So that’s why the volume is screwed up—and now I’m out of CD–Rs. I’m sorry. You can hear it well with headphones, though.)

The next two tracks are a two–part series that starts with an … industrial beatbox? intro and then verges into a creepy, slightly Azam–Ali–inspired vocal–backed spoken word drawn from something I found I had written a long time ago and couldn’t for the life of me remember anything about (I’m guessing I was probably ridiculously high when I wrote it down). The second part takes the template of the rhythm from the spoken word and transforms it into an acoustic instrumental. (At which point it sucks way less)

I’ll be posting more “EP–style” collections of a few short songs like this at a time in the future.

Listen here: In Appreciation of Limits

Is the War on Drugs Racist? The Surprising Truth Behind the Black Curtain of History

What about the drug war? The notion that the drug war in particular is especially racist is one that is widely accepted across the whole political spectrum. Michelle Alexander, author of ‘The New Jim Crow: Mass Incarceration in the Age of Colorblindness’ writes that “The drug war was motivated by racial politics, not drug crime … [it] was launched as a way of trying to appeal to poor and working class white voters, as a way to say, “We’re going to get tough on them, put them back in their place”—and ‘them’ was not so subtly defined as African–Americans.” Articles in Time Magazine tell us that “black youth are arrested for drug crimes at a rate ten times higher than that of whites. But new research shows that young African Americans are actually less likely to use drugs and less likely to develop substance use disorders, compared to whites….” Even Ron Paul, known for a series of newsletters containing statements like “Carjacking … It’s the hip-hop thing to do among the urban youth who play unsuspecting whites like pianos,” believes it. Quote: “ … minorities are punished unfairly in the war on drugs. … blacks make up 14% of those who use drugs, yet 36% of those arrested are Blacks and it ends up that 63% of those who finally end up in prison are Blacks. This has to change. … We need to repeal the whole war on drugs.”  [1]

Yet, we saw in the last post that there is very good reason to believe that, whatever the ultimate cause, African–Americans are not arrested disproportionately to their numbers in other crimes. Contrary to popular impression, drug policies are responsible for very little—relatively speaking—of the disparity in incarceration rates between whites and blacks. The fact that incarceration rates for crimes involving drugs correspond so closely to these general incarceration rates should, in and of itself, immediately make us skeptical of the claim that African–Americans are imprisoned so disproportionately here. (It will turn out that a hell of a lot is simply wrong with the data underlying this claim.) Stephan and Abigail Thernstrom, in their 1997 America in Black and White: One Nation, Indivisible – Race in Modern America, provide us with some of the relevant numbers: “In 1980, before the antidrug crackdown, African Americans were 34.4 percent of the inmates in federal prisons, and 46.6 percent of those in state penitentiaries. By 1994, their share of the federal prison population had risen only slightly, to 35.4 percent. Among state prisoners, the black proportion rose three points between 1980 and 1993 to 49.7 percent.”

They continue: “The black prison population would be smaller, but not much smaller, if the drug laws were different—if, for example, crack and powder cocaine were treated identically. But a calculation made on data for prisoners newly admitted to penitentiaries in thirty–eight states in 1992 indicates that if the percentage of black men serving drug sentences had been reduced to the figure for white men, the black proportion of the total would have fallen from 50 percent to 46 percent. Again, not a trivial difference, but hardly a monumental one. A similar calculation done for prisoners in federal facilities yields even less of a difference; we could have made the proportion of black males sent to federal prisons for drug offenses in 1994 identical to the proportion among white males by setting free just 855 African–American men, a mere 3 percent of those sent to a federal penitentiary that year.”

The charge that the criminal justice system as a whole is racist, then, simply cannot hang on the rate of arrests for drug use—at worst, drug arrests are an exception to an overall pattern of racially disproportionate arrests which are justified by the racial proportions of crime. But critics who argue that the criminal justice system as a whole is racist almost always extend this argument to the racial breakdown of arrests for crime in general as well, and as we saw in the previous post, this claim is undermined by the fact that victims and witnesses—who have every possible reason not to lie—report a larger racial disproportion in rates of crime than is suggested by police arrest rates. The fact that critics who make charges of racism are wrong in the case of all other crimes should make us skeptical of the notion that this is the sole case in which they suddenly have it right, and the fact that the racial proportion of arrests for drug crimes so closely matches the corroborated proportion of arrests for other crimes should instantly make us skeptical that this is the sole case in which the same rates of arrest are suddenly so wildly disproportionate.

  _______ ~.::[༒]::.~ _______

But before we get into the statistics, a timeline of relevant history is in order.

In 1956, the white Reverend Norman C. Eddy of the East Harlem Protestant Parish “opened a store–front drop–in center and a clinic where addicts received a physician’s services, referrals to hospitals, assistance in job searches, psychological counseling, and legal assistance for those facing criminal charges.”

Quickly, the EHPP became overbooked: “with a staff of three … the EHPP storefront recorded visits from 2,175 individual users (or 5 percent of the nation’s addicts, according to FBN statistics).” Faced with these prospects, the EHPP joined with five other programs to become the New York Neighborhoods’ Council on Narcotics Addiction in 1959—and after successfully lobbying state government for additional hospital beds and after–care programs for detoxified users, went on to help pass the Metcalf–Volker Narcotic Addict Commitment Act, signed by Governor Nelson Rockefeller, which allowed addicts arrested for possession to choose in–patient treatment in a state hospital rather than jail. [2] Yet, despite their efforts, “A study of a single block (100th Street between First and Second Avenues) that followed residents over four years found that one–third of the sixty teenagers interviewed in 1965 had become heroin users by 1968.” [2] A staggering half of the entire nation’s addicts lived in New York State by the end of 1963—almost 23,000 of the nation’s 48,000. [3]

Over time, it became more and more apparent that these social programs weren’t resolving the problem. As black citizens continued to take the brunt of the impact of social dysfunction from rampant drug abuse, the tone—driven by black backlash against what was considered to be a failed liberal approach to addiction and crime, and against police apathy towards the plight of black victims of crime and drug abuse—became increasingly militant. As one of my primary sources here—particularly for citations of newspaper clippings that can’t be found online—I draw from Michael Javen Fortner, the African–American Assistant Professor and Academic Director of Urban Studies at the Murphy Institute at CUNY SPS. Many of these citations will be taken from his The Carceral State and the Crucible of Black Politics: An Urban History of the Rockefeller Drug Laws, which “examines how African American mobilization for greater public safety in Harlem shaped the evolution of narcotics–control policies in New York State from 1960 until 1973,” objecting to “prevailing theories [of the origins of the drug war which overemphasize] … the exploitation of white fears … or the political strategies of Republican political elites,” and “ignore the ‘invisible black victim.’”

In 1961, Mark T. Southall, a member of the Urban League and NAACP, told a Democratic Party hearing that “[Harlem] is slowly and surely becoming a cesspool of the dreadful narcotics racket … Churches are constantly being robbed by addicts … Ministers and other citizens of the community are being mugged, beaten, and robbed by addicts, who also are guilty of rapes, pickpocketing and many other crimes, daily and nightly.” [4] In 1962, the Reverend Oberia Dempsey led a seven–week drive to “Urge the president to mobilize all law enforcement agencies to unleash their collective fangs on dope pushers and smugglers … urge Governor Rockefeller to also push a similar crackdown … [and] spur Mayor Wagner and Police Commissioner Michael Murphy to turn loose the city’s police … [on] narcotics dealers.” [5] In 1968, Dempsey (who “always carried a .32–caliber pistol … even in church”) recruited “volunteers from among retired policemen, guards and others who had been trained and held pistol permits” to take immediate action to repel “pushers.” The theory that activists like Dempsey were ‘Uncle Toms,’ Fortner writes in “‘Must Jesus Bear the Cross Alone?’: Rev. Oberia Dempsey and His Citizen’s War on Drugs,” cannot explain his movement’s “grassroots character … the petitions signed by thousands, the marches and rallies, the letters to editors, appearances at hearings, town halls, and emergency meetings.”

An article published by Ebony in 1970 discusses State Senator Waldaba Stewart’s support for “groups in Harlem … known as Black Citizen Patrols. The no–nonsense groups have served notice that “we’re going to have to keep the heat on every spot that’s well–known as a dope drop. … we document an area as a drug drop. Then we turn our report over to the police. If nothing happens, … we barricade the place … Our last step is to have citizen arrests made by our members who are off–duty black policemen.” Similar articles from this period include “Harlem Vigilantes Move On ‘Pushers,’” published in the Chicago Daily Defender on June 23, 1965, and “Addicts’ Victims Turn Vigilante,” published in the New York Times in 1969. [6] The Black Liberation Army ran a campaign to “Deal with the Dealer” by identifying the “hangouts” of prominent drug dealers and manufacturers and raiding them. [7] In some cases, drug dealers were killed—both Assata Shakur and Hubert Gerold Brown, chairman of the Student Nonviolent Coordinating Committee, were involved in trials related to underground attacks on drug activity in black communities. [8] A New York Times report warned in 1968 that Harlem “could become a community of gunfighters, reminiscent of the Old West, if the law failed to protect black citizens from outlaws.”

 _______ ~.::[༒]::.~ _______

In 1970, the Congressional Black Caucus took one of its first formal actions when the 12 black members of the U.S. House of Representatives met with President Nixon under that name, presenting him with a document outlining 61 recommendations they requested the President consider—an opportunity the members had sought since the previous year, when the organization was first founded as the “Democratic Select Committee,” and which they obtained only after taking the dramatic and unprecedented step of boycotting the President’s State of the Union Address. In the document, they wrote: “We strongly urge that drug abuse and addiction be declared a major national crisis. … Since organized crime is the principal distributive mechanism of hard narcotics, we urge that Justice Department manpower for investigation and prosecution in that area be substantially increased.”

 That same year, Congress passed the Comprehensive Drug Abuse Prevention and Control Act, containing the Controlled Substances Act—“the legal foundation of the government’s fight against the abuse of drugs and other substances… regulating the manufacture and distribution of narcotics, stimulants, depressants, hallucinogens, anabolic steroids, and chemicals used in the illicit production of controlled substances.” Of the ten African–American representatives in Congress at that time, three of the five who voted (Robert N. C. Nix, Sr.; George W. Collins; and Shirley Chisholm) voted ‘Yea.’ (John Conyers and Bill Clay voted ‘Nay.’ William Dawson, Adam Clayton Powell, Jr., Charles Diggs, Gus Hawkins, and Louis Stokes abstained.)

Ironically, it was in 1972 that a group composed mainly of white conservatives recommended the legalization of marijuana. Governor Raymond P. Shafer, chairman of Nixon’s National Commission on Marijuana and Drug Abuse (created by the Controlled Substances Act) wrote in the final report that “Neither the marihuana user nor the drug itself can be said to constitute a danger to public safety … [T]he criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use. … The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance.” Public support for legalization remained under 20% for the most part until 1993, and so far as I can tell there was no significant African–American advocacy either within or outside of Congress for the measure.

However, it was in 1973 that a very different law was established in the State of New York—one which did receive widespread and notable support from a large chunk of African–Americans, leaders and public alike—and it marked a significant turning point in the history of the war on drugs. In 1962, it had been New York Governor Nelson Rockefeller who signed the Metcalf–Volker Act in response to the petitions of the Reverend Norman C. Eddy and others, allowing arrested addicts to choose in–patient treatment in a state hospital over jail. Rockefeller had taken office expressing staunch opposition to “[conservative] extremists [who] feed on fear, hate and terror … [and] have no program for America—no program for the Republican Party … no solution for our problems of chronic unemployment, of education, … or racial injustice….”

But in December of 1965, Rockefeller had begun holding meetings with “Harlem officials and a follow–up closed session with an influential group of Negro leaders” to discuss the rising drug problem. In a unity of middle– and lower–class black interests, these “influential group[s]” included members of the St. Philip’s Episcopal Church whose “members were considered ‘the better element of colored people’” [10] as well as members of Salem Methodist, which was described as refusing to cater to “the tastes of the black bourgeoisie.” [11] Less than a month later, when Rockefeller delivered his message to the opening session of the legislature in the new year, his tone had changed: now he spoke of the need “to act decisively in removing pushers from the streets and placing addicts in new and expanded state facilities for effective treatment, rehabilitation, and after care.” [12]

So in 1966, he signed the Narcotic Addict Rehabilitation Act, which allowed, for the first time, addicts accused of a crime to be compulsorily treated (and allowed magistrates to compel treatment in a civic center even if they had not been)—but even a year after this bill had appropriated $75 million to the state for the creation of rehabilitation centers, the complaints and problems continued. In 1967, residents sought meetings with Police Commissioner Howard R. Leary, complaining that drug–related crime “forced merchants to close their shops early and brought armed civilian patrols into the streets”—while very clearly blaming “addicts for the purse snatchings, the muggings, the burglaries and the beatings.” [13] Still, in 1968, the pastor of Harlem’s Second Friendship Baptist Church estimated that “90 percent of the people refuse to come out at night … even on Sunday…” in fear of drug–related violence. [14]

After analyzing homicide data from 1950–1980, Charles Murray writes that “it was much more dangerous to be black in 1972 than it was in 1965, whereas it was not much more dangerous to be white.”  Then, as now, the majority of victims of minority acts of violence were minorities themselves: “In New York City, seventy percent of the victims of homicides, muggings, and narcotics pushers were African Americans and Puerto Ricans.” [9] In 1970, “Thirty three percent of nonwhites identified drugs and crime as major issues while only 18 percent of the entire sample [skewed by that 33% of nonwhites] mentioned either drugs or crime”. [15] And in 1973, 71% of blacks favored life sentences without parole for “pushers.” [16]

One of Rockefeller’s closest aides and speechwriters, Joseph Persico, tells the story on pp.142–144 of The Imperial Rockefeller: A Biography of Nelson A. Rockefeller of how, in 1972, Rockefeller encountered William Fine—the president of a department store and chairman of a rehabilitation program, whose own son struggled with addiction—and asked Fine to visit Japan to learn why the nation had one of the world’s lowest addiction rates. In his response to Rockefeller, Fine wrote: “The thing that impressed me most of all is the single minded conviction they have that public interest is above human rights when it comes to an evil. … the human rights of those who get involved in narcotics, or push narcotics, are brushed aside—quickly, aggressively, and with little or no recourse… It is incredible to me that they have had such success, but then, it really all comes down to what people are willing to give up to get, and the Japanese, obviously, were willing to give up the soap box movement on human rights in order to rid the public of the evil abuses of drugs.”

And so it came to pass that in 1973, Governor Nelson Rockefeller’s drug laws were passed in New York, marking a dramatic change in the history of the “war on drugs”—they were the first to promote harsh penalties and mandatory minimum sentences for possession. As this insightful paper notes, “Governor Nelson Rockefeller did not root his campaign for harsh new drug laws in the politics of white racial backlash. Instead, he championed the laws by publicizing their endorsement by several African American community leaders from Harlem”—such as those covered by the articles in Ebony magazine in 1970 and the Chicago Daily Defender in 1965. Whereas the paper notes that “… liberals and Democrats were equal partners in embracing and promoting law and order in the 1960s and 1970s and creating the laws that led to mass incarceration,” the Wikipedia entry lists libertarian economist Murray Rothbard, conservative public intellectual William F. Buckley, and “many in law enforcement” (along with civil rights activists) as some of the most notable opposition.

In fact, when William F. Buckley debated the war on drugs in 1991, his opponent was Charles Rangel—a black Democrat representing Harlem. In the debate, Rangel asks Buckley: “Why is it that when we talk about this drug problem … you put on blinders, and you find … one of the things that is not working … Why do you just say ‘legalize?’ Why don’t you talk about education? If we were not making progress in the Middle East, because the Army was not moving forward, but the Air Force was actually doing a tremendous job, would you say ‘eliminate the Army?’” Two years earlier, in 1989, Rangel had been profiled by Ebony magazine, which called him a “front–line general in the war on drugs”—and the article quotes him as condemning what he called Nixon’s “lackadaisical attitude” towards drugs.

The response of Glester Hinds, the head of Harlem’s People’s Civic and Welfare Association, to the new law? “I don’t think the governor went far enough … his bill [should include] capital punishment because these murderers need to be gotten rid of completely. Yet because of the bleeding hearts that we have, the legislators try to be pacifistic in having laws that do not work.” When NYU Law School Professor and former NAACP staff attorney Leroy D. Clark spoke against the measure, he acknowledged that he spoke against a large percentage of the black community: “…[We] must be vigilant and keep our eye on what may be someone’s hidden agenda … I ask for a restraint, which our communities now do not feel because they feel the community is being immobilized by the addict.” [17]

As Michael Javen Fortner writes in Invisibility and Imprecision in the Historiography of Mass Incarceration, “Although blacks constituted 14% of New York City’s population in 1960 and around 19% in 1970, they constituted a disproportionate share of deaths due to drugs, representing anywhere from 50% to 60% of all such deaths from 1960 until 1973. In fact, this rate dips below 50% only after the passage of the drug laws in 1973.”

_______ ~.::[༒]::.~ _______

A key thread running through much of black sentiment across these periods was the view that society demonstrated its racism by not giving enough of a damn to stop the epidemic—by not caring enough to help black victims—exactly the opposite of how the situation is viewed today. In 1970, an Ebony magazine piece covering grassroots efforts to fight drug use in black communities notes that Mothers Against Drugs—“which urges community people to record the names, addresses, and license plate numbers of known traffickers, suppliers, and pushers”—sends this information directly “to the district attorney’s office,” deliberately skipping over local police because its members resentfully believe that police “simply don’t care about drugs in black communities.”

The piece opens with this quote from a grieving mother: “You know the best way to deal with the dope problem? Get as many white kids on it as possible! The best news I’ve heard in a long time is that more white kids are getting hooked on heroin. If I had the money I’d buy it and give it to them free!”

The Knapp Commission provided some empirical support for this perception when, in 1972, it investigated police corruption and concluded that the biggest problem was with the “overwhelming majority … who accept gratuities and solicit five– and ten– and twenty–dollar payments … but do not aggressively pursue corruption payments.” The report noted: “At the time of the investigation certain precincts in Harlem … comprised what police officers called ‘the Gold Coast’ because they contained so many payoff–prone activities, numbers and narcotics being the biggest.”

 _______ ~.::[༒]::.~ _______

The story still wasn’t over.

The late 1970s and early 1980s saw the rise of crack cocaine, and a rise in drug–related crime once again came with it.

Alfred Blumstein and Joel Wallman write in their 2006 volume, The Crime Drop in America, that “A focus on New York City is easily justified by its bellwether role in national drug and violence trends and its hugely disproportionate numeric weight in those trends.” In chapter 6, “The rise and decline of drugs, drug markets, and violence in New York City” (pp.164–206), the authors document that the epidemic in New York City “peaked” between 1987 and 1989—when 70% of all arrestees tested positive for either crack or powder cocaine in urinalysis. Across the fifteen years between 1960 and 1975, there were an average of just 1,066 murders per year. But from 1975 to 1986, when the Anti–Drug Abuse Act was passed, there were an average of 1,941 murders per year—almost twice as many. “U.S. Sentencing Commission statistics show that 29 percent of all crack cases from October 1, 2008, through September 30, 2009, involved a weapon, compared to 16 percent for powder cocaine;” and it is plausible, especially in light of all the other facts listed here, that this association was also true in the past.

In 1982, the Congressional Black Caucus released the “Black Leadership Family Plan for the Unity, Survival and Progress of Black People.” The document, penned by civil rights icon and DC representative Walter Fauntroy—who led the prayer at Dr. Martin Luther King, Jr.’s funeral—includes criticism that “diminished drug enforcement increases [black youth’s] vulnerability to drug abuse” and warns that the “incidence of crime in black communities is increasing because of intentional and unintentional failure on the part of law enforcement agencies to provide adequate protection”—finally urging police, once again, to “increase drug enforcement efforts.”

Ta–Nehisi Coates, in The Beautiful Struggle: A Father, Two Sons, and an Unlikely Road to Manhood, discusses (on pp.29–30) his own recollection of the time period: “When crack hit Baltimore, civilization fell. Dad told me how it used to be. In his time, the beefs were petty and stemmed from casual crimes. … The bad end of a beef was loose teeth and stitches, rarely shock trauma and “Blessed Assurance” ringing the roof of the storefront funeral home. … The world was filled with great causes … But we died for sneakers stitched by serfs, coats that gave props to teams we didn’t own, hats embroidered with the names of Confederate states. I could feel the falling, all around. The flood of guns wrecked the natural order.” In 1987, two veteran civil rights activists, Reverend Hosea Williams and comedian Dick Gregory, began a 40–day fast, camping alternately outside the White House, U.S. Capitol, and New York Stock Exchange to protest drug abuse and “send a telegram to President Reagan asking him to commit more Federal money to the fight against drug abuse.” In a 1986 speech, Fauntroy had declared that “Drugs—and now ‘crack’—are indeed the source of threat to all civilized society and each of us must accept 100% of the responsibility for eliminating this threat in our midst….” And it was in 1986 that the first major piece of federal drug war legislation, The Anti–Drug Abuse Act, created the well–known 100–to–1 crack–cocaine sentencing disparity.

Returning to America in Black and White, the Thernstroms write: “Critics of the war on drugs … allege [that this policy was] blatantly racist, because crack tends to be used by blacks and powder cocaine by whites. If so, it is certainly peculiar that the Congressional Black Caucus backed the law, and that some of its members proposed even tougher penalties on crack. They knew that crack was much more common in black neighborhoods than in white ones, and that more blacks than whites were likely to be incarcerated as a result of the change. And in fact, that was precisely their reason for supporting the legislative change: a conviction that it might reduce the havoc on the streets where their constituents lived.” Of the twenty–one black members of Congress at this time, seventeen are listed here as co–sponsors of the bill: Charles Hayes, Alton R. Waldon, Jr., Parren Mitchell, Charles B. Rangel, Harold Ford, Sr., Julian C. Dixon, William H. Gray III, Mickey Leland, Mervyn M. Dymally, Major R. Owens, Edolphus Towns, Bill Clay, Cardiss Collins, Ronald Dellums, Louis Stokes, and Walter Fauntroy himself. Only Gus Savage, Alan Wheat, George W. Crockett, Jr., and John Conyers fail to make the list—whether by ‘Nay’ or abstention is unclear.

The years 1975–1986, as previously noted, saw an average of 1,941 murders in New York City per year. But by 1995 the number had returned close to the earlier rate—1,177—and it has continued falling since, with an average between 1995 and 2014 of just 634 murders per year. In fact, 2013 and 2014 each saw fewer than 328 murders.

 What happened?

Franklin Zimring, professor of law and chairman of the Criminal Justice Research Program at the University of California, Berkeley, discusses the rise in crime during the second half of the 1980s, and the drop in crime during 1990–2000, in The City That Became Safe: New York’s Lessons for Urban Crime and Its Control. Notably, Zimring is no “law–and–order” conservative—his 2003 book The Contradictions of American Capital Punishment notes that the death penalty has been most actively used by the same states in which the most lynchings historically occurred. The City That Became Safe argues against the infamous “broken windows” theory, advanced by conservative social scientist James Q. Wilson, that harsh treatment of low–level offenses was responsible for drops in crime across this period of time. He also critiques Wilson’s claim (on pp.83–87) that an increase in the youth population would lead to proportionate increases in crime.

He also argues (pp.90–99), correlating hospitalizations and deaths from overdose with changes in the known street price, that overall use of cocaine appears to have remained relatively constant [Update 4/20/2016: However, see this footnote] across the period of time in which New York City’s crime drop took place. Yet he notes (pp.91–92) that “The peak rates of drug–involved homicide occurred in 1987 and 1988”—the same years in which 70% of arrestees tested positive for cocaine—“and the drop in the volume of such killings is steady and steep from 1993 to 2005. … The volume of drug–involved homicides in 2005 is only 5% of the number in 1990.” Meanwhile, whereas 70% of arrestees in the late 1980s tested positive for cocaine, by 1991 (see table 2 on page 14) this number hit a low of 62%—and by 1998 it had fallen all the way to 47.1%. By 2012 (see figure 3.7 on page 45) it fell even further, to 25%.

What happened here? Why would drug use amongst arrestees fall if drug use as a whole remained constant? Zimring has an important answer: “If I’m a drug seller in a public drug market and you’re a drug seller in a public market, we’re both going to want to go to the corner where most of the customers are. But that means that we are going to have conflict about who gets the corner. And when you have conflict and you’re in the drug business, you’re generally armed and violence happens. … Policing … [helped drive] drug trade from public to private space. … [this] reduced the risk of conflict and violence associated with contests over drug turf. The preventive impact [of these policies] on lethal violence seems substantially greater than its impact on drug use. … [And] once the police had eliminated public drug markets in the late 1990s, the manpower devoted to a special narcotics unit [whose funding had increased by 137% between 1990 and 1999] dropped quite substantially [and yet the policies’ impacts on homicide rates remained].”

Critics of the drug war often imply that drug–related violence is a result of the criminalization of drugs creating black markets. The history of New York seems to suggest exactly the opposite: drugs created drug–related violence and turf wars; and the existence of these is exactly why black victims of drug–related violence agitated originally for increased penalties towards drugs.

Furthermore, this fact gives us one reason minorities may be legitimately arrested in disproportionate numbers for possession of drugs even if total rates of use are in fact constant: minorities are disproportionately involved in public drug trades, where violence and turf wars are more likely to occur. In decrying Wilson’s “broken windows” theory, Zimring emphasizes another important way that drug policy impacted crime: “Marijuana was not a priority of the New York City police, yet they had a huge number of public marijuana arrests. Why was that? That was because they were only arresting minority males who looked to them like robbers and burglars and they used as a pretext the less serious crime arrest to find out whether the particular person they were arresting had a warrant out for a felony and was a bad actor. … The good news is that drug violence went down tremendously. There are a couple of different ways in which the police department measures the number of killings associated with drug traffic in New York; both of those measures that they use are down more than 90 percent so that the streets themselves have been changed, people can walk there, and the number of dead bodies associated with illegal drug traffic has gone way, way down.”

Regarding marijuana arrests, Zimring notes that “While the gender distribution of marijuana users is close to 50–50, the gender distribution of arrests is 93% to 7%”—which parallels the disproportionately male gender distribution of crime. In other words, the gender distribution of marijuana arrests and the gender distribution of crime parallel each other in the same way that the racial distribution of marijuana arrests and the racial distribution of crime do—and no one ever assumes that “stop and frisk” policies are an expression of anti–male, or misandrist, gender bias. Zimring concludes: “This is only circumstantial evidence that the police are going after robbery risks, but it is conclusive evidence that they aren’t trying to go after marijuana as a threat to the quality of life.” Indeed, even critics of these policies acknowledge that “Marijuana stops are more prevalent in precincts where… ‘high–crime area’ justifications are more likely to be reported….” Critics may be right that marijuana stops are only a “pretext” for the real reasons for an arrest—but the real reasons may well be valid suspicion of crime, which correlates with race simply because racial groups do in fact commit different proportions of the total amount of crime—not bias on the basis of race alone, any more than the policy’s gender imbalance proves it is a pretext for targeting men because society despises masculinity. If drug arrests take place on valid grounds of suspicion of criminal behavior, then this may in fact be one valid reason for the racial percentages of drug arrests to exceed the racial percentages of drug use—even if the former is disproportionate to rates of personal use.

 _______ ~.::[༒]::.~ _______

That brings us to the question of the policies known as “stop and frisk.”

On Tumblr, the author of ‘Racism Still Exists’ gives us the usual story—black people are stopped and frisked disproportionate to their representation of the population, and that disparity is all it takes to reach a conclusion of racism: “Black people comprise 26% of the city, but they are 52% of those who are stopped.  On the other hand, White people are 47% of the population, but they are only 9% of those who are stopped.” What goes ignored in this comparison is, as usual, the actual murder and crime rate—quoting Heather MacDonald: “Blacks are 66 percent of all violent–crime suspects, according to the victims of and witnesses to those crimes. Blacks commit around 70 percent of all robberies and about 80 percent of all shootings in the city. Add Hispanic shooters, and you account for 98 percent of all shootings in the city. Whites, by contrast, were only 5 percent of all violent crime suspects in 2011. According to victim and witness reports, white suspects commit barely over 1 percent of all shootings and less than 5 percent of all robberies.” Thus—once again—the actually relevant comparisons suggest that it is whites who are “victimized” disproportionately to their actual representation of the crime rate. If we call a stopped person who is unlikely to actually be involved in a crime an “unjustified suspect,” then, relative to each group’s share of the crime rate, “stop and frisk” policies inconvenience proportionally more unjustified white suspects than unjustified black suspects.
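The base–rate point here can be checked with simple arithmetic. The sketch below is only an illustration, using the percentages quoted above (population share, stop share, and victim/witness–identified violent–crime–suspect share); the choice of benchmarks is the argument being made here, not any official methodology:

```python
# Illustrative base-rate check. All figures are the percentages quoted in
# the text above; the comparison asks: stopped at what multiple of each
# group's population share vs. its violent-crime-suspect share?

stop_share = {"black": 0.52, "white": 0.09}     # share of all stops
pop_share = {"black": 0.26, "white": 0.47}      # share of city population
suspect_share = {"black": 0.66, "white": 0.05}  # share of violent-crime suspects
                                                # (per victim/witness reports)

for group in ("black", "white"):
    vs_pop = stop_share[group] / pop_share[group]
    vs_crime = stop_share[group] / suspect_share[group]
    print(f"{group}: stopped at {vs_pop:.2f}x population share, "
          f"{vs_crime:.2f}x crime-suspect share")
```

Benchmarked against population share, blacks are stopped at 2.0× and whites at about 0.19×; benchmarked against the violent–crime–suspect share, the disparity reverses—blacks at roughly 0.79× and whites at 1.8×. Which benchmark one chooses is the entire disagreement.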

The author also tells us, citing this New York Times article, that “In Brownsville, residents stated that they were frequently stopped and or ticketed for entering their own or friends’ homes in public housing because they did not use a key—but that was because the front door lock was broken.” Of course, Brownsville’s demographic is 76.7% black and only 2.6% white. But the author fails to mention that Brownsville ranks 69th out of all 69 neighborhoods ranked in New York for murder. Kensington, a neighborhood in Buffalo, has the lowest murder rate and a similar racial demographic spread: 82.3% black and only 11.5% white. Yet just 2% of the population of Kensington is stopped and frisked, compared to 29.1% of the population of Brownsville. The chart on page four makes it clear that Brownsville has the highest, and Kensington the lowest, stop–and–frisk rates of all neighborhoods in New York. Once again, this corresponds exactly to the murder rate across New York, which is highest of all in Brownsville and lowest of all in Kensington. The rate does not increase in Kensington because it has a higher proportion of black residents—it falls, because Kensington has a lower rate of murder and crime. (Intriguing note: Jewish neighborhood watch groups called Shomrim are known to conduct patrols in Kensington, and coordinated a 5,000–person volunteer search for a missing boy in coordination with police in 2011—then sought and successfully found his killer.)

During the ’60s, ’70s, and ’80s, police apathy towards minority victims of minority crimes was the target of accusations of racism. Today, we are told that police enthusiasm, too, is unacceptably racist. But if “stop and frisk” policies work, then their primary beneficiaries are, in fact, minorities. Just as minorities commit a disproportionate amount of the United States’ crime, so too are they disproportionately the victims of it. Heather MacDonald writes: “Blacks, for example, constituted 78% of shooting suspects and 74% of all shooting victims in 2012, even though they are less than 23% of the city’s population. Whites, by contrast, committed just over 2% of shootings and were under 3% of shooting victims in 2012, though they are 35% of the populace. … Minorities make up nearly 80% of the drop in homicide victims since the early 1990s.”

Do “stop and frisk” policies actually work? In The City That Became Safe, Franklin Zimring notes that there were, at the time of his writing, no studies that adequately controlled for the other policies he documents which changed across the same period of time. Today, however, Colin Lubelczyk writes: “The only study that explicitly poses the question “Does stop and frisk stop crime?” was an unpublished paper by Robert Purtell and Dennis Smith that relied on monthly precinct level data from New York City from 1997 to 2006. After controlling for a large number of variables including the effects of hotspots policing, Purtell and Smith found that stop and frisk helped reduce robbery, burglary, murder, and grand larceny … While other researchers have looked into similar questions as the one posed by Purtell and Smith, they do not isolate stop and frisk as a variable and instead combine its effects with other police strategies like 1) firearm reduction, 2) hot spots policing, and 3) order maintenance policing.” As Heather MacDonald concludes, “To be sure, thousands of innocent New Yorkers have been questioned by the police. Even though such stops may have been justified given the information the officer had at the time, they’re still humiliating and infuriating experiences. But if the trade–off is an increased risk of getting stopped in a high-crime neighborhood versus an increased risk of getting shot there, most people would choose the former.”

 _______ ~.::[༒]::.~ _______

In his discussion of claims that mass incarceration is ‘the New Jim Crow’ in “Racial Critiques of Mass Incarceration: Beyond the New Jim Crow,” James Forman, Jr.—son of the civil rights leader James Forman, a leading member of the Student Nonviolent Coordinating Committee and later of the Black Panther Party, and maternal grandson of 1960s investigative journalist and active Communist Party member Jessica Mitford—writes, “One of Jim Crow’s defining features was that it treated similarly situated blacks and whites differently. … But violent crime is a different matter. While rates of drug offenses are roughly the same throughout the population, blacks are overrepresented among the population for violent offenses. For example, the African American arrest rate for murder is seven to eight times higher than the white arrest rate; the black arrest rate for robbery is ten times higher than the white arrest rate. Murder and robbery are the two offenses for which the arrest data are considered most reliable as an indicator of offending.” His purpose here isn’t to point a finger at drug arrests per se so much as to discuss the phenomenon of “mass incarceration” on its own terms, but he goes on to distinguish drug arrests from arrests for all other crimes: “Because the [Jim Crow] analogy leads proponents to search for disparities in the criminal justice system that resemble those of the Old Jim Crow, they confine their attention to cases where blacks are like whites in all relevant respects, yet are treated worse by law. Such a search usefully exposes the abuses associated with … the drug war,” although “it does not lead to a comprehensive understanding of mass incarceration.”

Yet we have seen several reasons to be skeptical of these claims. First, the percentage of minorities arrested for drug–related crime is not different from the percentage of minorities arrested for violent crimes in general—and as we saw in the last entry, “Are African–Americans Disproportionately Victimized by Police?”—and as Forman, Jr. agrees—we have overwhelming reason to believe that these arrest rates are in fact not the result of racism, but a simple, direct response to the percentage of crime African–Americans actually commit: victims report a higher percentage of African–American perpetrators than are arrested by police. It would be incredibly strange, then, if drug arrests were the sole category in which African–Americans are suddenly disproportionately arrested. If racism is the cause of this disproportionality, then why aren’t African–Americans arrested disproportionately to their actual crime rate for violent crimes? Why would this racism suddenly appear in the sole case of drug use, and vanish as soon as a black person burglarizes a home (as we saw in the last entry, “victims tell police that 45 percent of the perpetrators were black, but only 28 percent of the people arrested for that crime were black”)?

Furthermore, it was African–American victims themselves who historically led the most notable efforts to increase the attention law enforcement gave to the epidemic of drug–related crime, and whose advocacy underlay the most significant historical changes in public drug policy—and racism was then alleged on the grounds that law enforcement, and white society in general, didn’t give enough of a damn to do anything about it because they weren’t having to deal with the consequences.

Where have we gone wrong?

The problem is this: we’ve acquired our estimates of who uses what drugs how frequently by self–report.

Self–reports are notoriously inaccurate. People often have incredibly poor memories—oftentimes, they’re even dishonest. Frequently, they report what they want to believe instead of what actually happened. Reliance on inaccurate self–reports of dietary intake is, in fact, one reason that dietary recommendations are so often contradictory over time. 60% of people who call themselves “vegetarian” ate a hamburger within the last 24 hours. A 2015 paper in the International Journal of Obesity writes of the reliance on self–reports in obesity research that “[The data] are so poor as measures of actual [energy intake] and [physical activity] that they no longer have [any] justifiable place in scientific research.” In an announcement from 17 members of the American Society for Nutrition titled ‘Self-report–based estimates of energy intake offer an inadequate basis for scientific conclusions,’ the authors write that “the magnitude of the bias may even have increased in recent years”—“motivated,” as they put it, “by social desirability”—in other words, because people say what they want to believe. And people apparently want to believe that they’re eating less now even more than they did in the past. Other research finds that people with a “diagnosed medical condition” are much more likely to overreport their meat intake—a fact that may have caused meat intake to become more associated with illness in epidemiological studies than it really is by sheer statistical fluke. Women were even found to be more likely to under–report their meat and calorie intake than men (a fact which had been studied elsewhere).

We shouldn’t take self–reports about diet naively for granted. In fact, many argue that we should recognize that they’re damn well outright useless—and it is even well–documented here that demographic characteristics such as illness and gender influence an individual’s accuracy in reporting. Why should we take self–reports for granted in the case of drugs?

In fact, we have studies establishing that demographics correlate with accuracy in self–reported drug use as well.

A 2008 “Comparison between self-report and hair analysis of illicit drug use in a community sample of middle-aged men” determined that “Discrepancies between biological assays and self-report of illicit drug use could undermine epidemiological research findings. … Male participants followed since 1972 were interviewed about substance use, and hair samples were analyzed …. Self-report and hair testing generally met good, but not excellent, agreement. Apparent underreporting of recent cocaine use was associated with inpatient hospitalization for the participant’s most recent quit attempt, younger age, identifying as African–American or Other, and not having a diagnosis of antisocial personality disorder. … African–Americans in comparison to Caucasians who were urine positive were about 6 times less likely to report cocaine use when other factors are controlled for.” 

A 2005 study, “Race/Ethnicity Differences in the Validity of Self–Reported Drug Use: Results from a Household Survey,” found “evidence that compared with other groups, African Americans may provide less valid information on drug–use surveys. The findings suggest that African American respondents had significantly lower concordance rates. … Mediation was found in one model (cocaine) for one variable (SES), which may suggest some limited support for the cultural deficit model. Nevertheless, the finding that SES was not a consistent mediator of underreporting … [and] in general, none of the theories of mediation received strong support from this evaluation. Overall, the results replicate and extend a growing body of research suggesting that African Americans under–report substance use on surveys.” The 2005 study, in other words, found that even socio–economic status did little to diminish the impact of race in mis–reporting drug use on surveys. But as for the raw numbers, “without mediating effects entered, compared with African Americans, Hispanics have two and one–half times the odds of providing concordant responses … and Whites have over 25 times the odds of providing concordant responses.”

In fact, the findings of these studies weren’t even new. All the way back in 1992, a study found that “ … intravenous drug users who were black or whose primary drugs of choice were injected cocaine and crack were more likely than other groups to misrepresent their current drug use status.” In 1994, a study of “The validity of drug use reports from juvenile arrestees” found that “Race/ethnicity [was] the most important predictor of cocaine use disclosure among those testing positive for this drug.” Yet, “Comparing the validity of self-reported recent drug use between adult and juvenile arrestees” finds that “adult arrestees are even more inclined to underreport their recent use of illicit drugs [than youth].”

And even beyond variability in the accuracy of self–reported drug use, the Department of Justice notes an important fact about variability in the self–reports themselves—it turns out that (in 1995) even if black and white respondents did in fact admit to using drugs at equal rates, they weren’t admitting to using the same amount of drugs: “Among black drug users, 54% reported using drugs at least monthly and 32% reported using them weekly. Such frequent drug use was less common among white drug users. Among white users, 39% reported using drugs monthly and 20% reported using them weekly.” The pattern can still be found in the data from 2011 (chart taken from here), where the race ratio of admission of illicit drug use in lifetime begins at 17 whites for every 15 blacks—drops to 14.9 whites for every 15 blacks when considering admission of drug use in the past year—and finally inverts to 9 whites for every 11 blacks when admission of drug use in the past month is considered. The 2011 data doesn’t mention admission to drug use in the past week, where the ratio would likely invert even further. And these are the same surveys we have good reason to believe African–American respondents less frequently respond to accurately in the first place.

If self–reports are a terrible way to estimate actual rates of drug use, then, do we have anything better?

It turns out that we do. While also imperfect, one of the most reasonable methods that we do have of estimating the racial breakdown of drug use in the general population is by looking at data on the racial breakdown of admissions to hospitals—and subsequent medical reports in cases of death—for illicit drug use, which data is recorded each year by the Substance Abuse and Mental Health Services Administration (SAMHSA), a branch of the U.S. Department of Health and Human Services.

In 1994, reports found that amongst white patients admitted to emergency room visits in cases involving drug use, 14% mentioned the use of cocaine, while 8.4% mentioned the use of heroin, and 6.8% mentioned the use of marijuana. Amongst black patients, these numbers change to a whopping 54.5% for cocaine, 18.4% for heroin, and 10.7% for marijuana. Numbers for Hispanic patients are in between the white and black rates for cocaine (26.5%), more similar to black patients’ reported mention of heroin (18.7%), and more like white patients’ reported mention of marijuana (6.2%). Across the years of 1988–1994, an average of about 33,000 white emergency room visitors mentioned use of cocaine per year. Meanwhile, in the same years, an average of about 59,000 black emergency room visitors mentioned cocaine. Very crudely, if whites were about 75% of the population and blacks were about 12%, for a population of ten million this would mean that about 0.4% of the white population of 7.5 million and about 4.9% of the black population of 1.2 million were using cocaine—more than a tenfold difference.

For heroin? An average of 18,000 white visitors per year mentioned it, compared to 17,500 black visitors. Plugging our simplified numbers back in, this would mean about 0.24% of the white population and about 1.46% of the black population used heroin across this period of years—more than a sixfold difference. An average of 11,250 white visitors mentioned marijuana, compared to an average of 8,250 black visitors—0.15% of the white population versus 0.688% of the black population—a 4.5 times larger percentage of the black population. In 1995, medical examiners in cases of death reported cocaine (see table 42) in 32.8% of deceased whites, compared to 69.6% of deceased blacks. Cocaine was reported in deceased Hispanics by medical examiners in between the white and black rate, at 55%; but heroin was highest of all for deceased Hispanics, at 56% (compared to 44.3% for deceased whites and 43.8% of deceased blacks). Across the years of 1987–1995 (see table 43), the Arrestee Drug Abuse Monitoring Program found that an average of 33.3% of white arrestees tested positive in urinalysis for cocaine, whereas an average of 62.6% of black arrestees did.
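The crude per-population arithmetic above can be reproduced in a few lines. This is only a sketch under the text's own simplifying assumptions (a notional population of ten million, 75% white and 12% black), applied to the yearly average ER-mention figures quoted above:

```python
# Crude sketch of the per-population rates computed in the text, using its
# simplifying assumptions. Figures are illustrative, not exact estimates.
POPULATION = 10_000_000
white_pop = 0.75 * POPULATION   # 7.5 million
black_pop = 0.12 * POPULATION   # 1.2 million

# Average yearly ER visitors mentioning each drug, 1988-1994 (from the text)
mentions = {
    "cocaine":   {"white": 33_000, "black": 59_000},
    "heroin":    {"white": 18_000, "black": 17_500},
    "marijuana": {"white": 11_250, "black": 8_250},
}

for drug, counts in mentions.items():
    white_rate = counts["white"] / white_pop * 100
    black_rate = counts["black"] / black_pop * 100
    print(f"{drug}: white {white_rate:.2f}%, black {black_rate:.2f}%, "
          f"black/white ratio {black_rate / white_rate:.1f}x")
```

Running this reproduces the roughly tenfold, sixfold, and 4.5-fold per-capita gaps described in the paragraph above.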

In 2011, we can update these statistics to the following (with the caveat that in about 15% of emergency room visits, race is unknown): Of 505,224 ER visits for cocaine, 185,748 (36.7% of total) were white and 236,089 (46.7% of total) were black—27% more black visits than white. Altogether, of 1,252,500 ER visits for illicit drug use (including alcohol), 634,593 (50.7% of total) visitors were white, and 384,317 (30.7% of total) were black—a much larger black–white ratio than the 12% and 77% shares that African–Americans and Caucasians respectively compose of the general population. While it is difficult to nail an exact estimate of the racial breakdown of drug use throughout the general public with these numbers, they are most certainly better than rates of self–report, and they most definitely indicate that rates of drug use are in fact higher among African–Americans in general (and for cocaine in particular). Yet, even if African–Americans are simply more likely to use drugs in irresponsible ways resulting in medical problems or death, this too would suggest a very high probability that these individuals are likely using drugs in more generally dangerous—as well as publicly visible—ways that would either justify, or help explain, why a higher percentage end up arrested.
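As a quick sanity check, the 2011 shares can be recomputed directly from the raw visit counts quoted above; this is pure arithmetic on the figures given in the text, nothing more:

```python
# Recompute the 2011 ER-visit shares from the raw counts given in the text.
cocaine_total, cocaine_white, cocaine_black = 505_224, 185_748, 236_089
all_total, all_white, all_black = 1_252_500, 634_593, 384_317

print(f"cocaine visits:    white {cocaine_white / cocaine_total:.1%}, "
      f"black {cocaine_black / cocaine_total:.1%}")
print(f"all illicit drugs: white {all_white / all_total:.1%}, "
      f"black {all_black / all_total:.1%}")

# How many more black than white cocaine visits? (ratio of raw counts)
print(f"black/white cocaine visit ratio: {cocaine_black / cocaine_white:.2f}")
```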

Methamphetamine is one drug for which the vast majority of self–reported use, arrests, and hospitalizations are of white users—just 5.6% of black visitors who were admitted to emergency rooms for drug–related issues mentioned it in 1994, for example, compared to 70% of white visitors. In fact, this roughly corresponds to the rates of arrests for methamphetamine: “In 2006, the 5,391 sentenced federal meth defendants [nearly as many as the 5,619 crack defendants!] were 54% white, 39% Hispanic and 2% black.” Furthermore, “the federal methamphetamine–trafficking penalties … are identical to those for crack.  [Yet] no one calls the federal meth laws … anti–white.”

According to a 2011 report to Congress on the impact of mandatory minimum policies on federal sentencing, “Approximately two–thirds of the 23,964 drug offenders in fiscal year 2010 were convicted of an offense carrying a mandatory minimum penalty. More than one–quarter (28.1%, n=4,447) of drug offenses carrying a mandatory minimum penalty involved powder cocaine, followed by crack cocaine (24.7%, n=3,905), [and then] methamphetamine (21.9%, n=3,466)…. The application of mandatory minimum penalties varies greatly by the type of drug involved in the offense. For example, in fiscal year 2010, a mandatory minimum penalty applied in 83.1 percent (n=3,466) of drug cases involving methamphetamine.  Crack cocaine (82.2%) and methamphetamine cases (83.2%) had the highest rates of offenders convicted of an offense carrying a mandatory minimum penalty.” If racism were the cause for the crack–powder cocaine sentencing disparities—which we have already seen is historically misleading anyway—why would racist policymakers turn around and then institute equally severe penalties for a drug overwhelmingly used by whites? The report further notes that “The average sentence for methamphetamine offenders who remained subject to a mandatory minimum penalty at the time of sentencing … was 144 months, which is the highest average sentence for any drug type.” Even further, “Black methamphetamine offenders convicted of an offense carrying a mandatory minimum penalty and subject to the mandatory minimum at sentencing had the lowest sentences, on average, of any racial group (131 months)”—compared to 143 months for whites. Methamphetamine would seem to be an excellent case study for testing whether we treat drugs like cocaine more seriously as an excuse for jailing African–Americans, or simply because we treat hard drugs in general seriously—regardless of who uses them most.

_______ ~.::[༒]::.~ _______

[1] It should go without saying that the question I am interested in here is not whether the drug war is an effective policy in general—only whether its enforcement is, either in intent or in practice, “racist”. One could perfectly well accept the conclusion that the drug war is not “racist” and still believe that its repeal would be beneficial for black and white Americans alike—I take no position on this question here, except to note that the commonly cited “decriminalization” policies in Portugal treat drug use as a non–criminal, medical health issue, but did not decriminalize drug dealing (in drug policy debates, this is in fact the definition of the word “decriminalization;” but this technical distinction is typically lost on the general reading public, and frequently left unexplained in articles addressing the subject). Furthermore, correlation–causation questions about what impact Portuguese drug policy changes have actually had, per se, are also more complicated than often assumed.

[2] Smack: Heroin and the American City by Eric C. Schneider, p.130–131; p.133

[3] “Organized Crime and Illicit Traffic in Narcotics,” Hearings before the Permanent Subcommittee on Investigations of the Committee on Government Operations, United States Senate (Washington, DC: U.S. Government Printing Office, 1964), 760.

[4] “Mark T. Southall, Leader in Harlem,” New York Times, 30 Jun. 1976, 35; “Mark Southall Dead, Former Assemblyman,” New York Amsterdam News, 3 Jul. 1976, A1. Amongst his requests was “mandatory prison sentences for convicted dope pushers….” (“Southall Hits Drugs In Harlem,” New York Amsterdam News, 16 Dec. 1961, 13.)

[5] “Dempsey Gratified in His Anti-Dope Drive,” New York Amsterdam News, 1 Sept. 1962, 23.

[6] “Harshest in the Nation: The Rockefeller Drug Laws and the Widening Embrace of Punitive Politics,” by Jessica Neptune

[7] Sundiata Acoli’s August 15, 1983 testimony in United States v. Sekou Odinga et al., in Sundiata Acoli’s Brinks Trial Testimony, a pamphlet published by the Patterson (New Jersey) Black Anarchist Collective, p. 21.

[8] Kes Kesu Men Maa Hill, Notes of a Surviving Black Panther: A Call for Historical Clarity, Emphasis, and Application (New York: Pan-African Nationalist Press, 1992), p. 71; Dhoruba Bin Wahad, interviewed by Bill Weinberg, “Dhoruba Bin Wahad: Former Panther, Free at Last,” High Times 241 (September 1995), < HIGHTIMES.COM > Mag. html > [Accessed January 12, 1999]; Assata Shakur, op. cit., pp. 162-72; Clayborne Carson, In Struggle: SNCC and the Black Awakening of the 1960s (Cambridge: Harvard University Press, 1995), p. 298.

[9] Jack Newfield, “My Back Pages,” Village Voice, January 18, 1973

[10] Gilbert Osofsky, Harlem: The Making of a Ghetto: Negro New York, 1890–1930 (New York: Harper & Row, 1996), 115.

[11] Cary D. Wintz and Paul Finkelman, Encyclopedia of the Harlem Renaissance, vol 1. (New York: Routledge, 2004), 272.

[12] “Excerpts from Governor Rockefeller’s Message Delivered to the Opening Session of the Legislature” New York Times, 6 Jan. 1966, 16.

[13] Earl Caldwell, “Group in Harlem Ask More Police,” New York Times, 4 Dec. 1967, 1.

[14] Homer Bigart, “Middle-Class Leaders in Harlem Ask Crackdown on Crime,” New York Times, 24 Dec. 1968, 46.

[15] Richard Reeves, “Survey Confirms Politicians’ Views of Attitudes of Ethnic-Group Voters,” New York Times, 25 Oct. 1970, 1.

[16] Maurice Carroll, “After Crime, Big Issues Are Prices and Fares,” New York Times, 17 Jan. 1974, 36; David Burnham, “Most Call Crime City’s Worst Ill,” New York Times, 16 Jan. 1974, 113; Nathaniel Sheppard, “Racial Issues Split City Deeply,” New York Times, 20 Jan. 1974, 1.

[17] Leroy Clark, “What Does Civil Liberties Mean in the Drug Context?” Amsterdam News, January 13, 1973.

Are African–Americans Disproportionately Victimized by Police?

In general, the most obvious and stubbornly ignored problem with “anti–racist” sociological analysis is that the treatment of a given demographic is simplistically judged to be fair or unfair according to one shallow dimension: whether or not treatment of that demographic matches that demographic’s share of the general population. The basic problem with this sort of reasoning is obvious to anyone who looks at it for a second without blinders: imagine someone making the argument that there is rampant bigotry and discrimination against men in the United States justice system, and that they drew this conclusion solely because men represent 90% of the U.S. prison population despite being only 50% of the general population. No one would fail, for a second, to notice the crucial step missing from this argument: “Fine, but how much of the violent crime is that male 50% of the population committing? Who says it isn’t somewhere around 90%?” And no one would consider this question to constitute bigotry against men. As a man, no one will consider me “self–hating” for expressing the opinion that this would be a reasonable and justified question. No one will worry that talking about or acknowledging this statistic will perpetuate “stereotypes”, say, that men are as a rule “simply more violent” than women—in fact, no one will be even slightly opposed to considering the possibility that this could turn out to be empirically true. No one will take offense to it.

 _______ ~.::[༒]::.~ _______

One of the most common memes in popular consciousness regarding the relationship between crime, police and race is expressed by the phrase, “driving while black.” The phrase sardonically implies that black drivers are stopped so disproportionately to their numbers that this can only be because operating a vehicle while black is literally a crime.

Not everyone who uses the phrase is aware of its origins. Writing for NewsOne, Al Sharpton recalled: “In the 1990s … I was among those in the civil rights leadership that raised the country’s awareness on the outrageous policing practice of racial profiling, which systematically singled out minority drivers and disproportionately pulled them over on America’s roadways. … Through my organization … we were able to show that motorists of color were overwhelmingly harassed (…) [and w]hile pushing for reforms, we popularized the phrase … “driving while Black.””

The case Sharpton refers to here took place on the New Jersey Turnpike (see this article from 1993). As it is one of the few cases where we have a comprehensive and objective way to compare actual racial differences in behavior to disparate treatment by police, it serves as an excellent starting point for our discussion, and an excellent demonstration of how deeply unfounded accusations of racism can become implanted in popular awareness as unquestionable truth. In 1995, a New Jersey state judge threw out charges against fifteen black drivers who, in the judge’s estimation, had been pulled over without proper cause. During the ensuing trial it emerged that, on a 26–mile long stretch on the south end of the New Jersey Turnpike, minorities accounted for a full 46 percent of drivers stopped for speeding—and the case for lawsuits against the New Jersey police was bolstered.

Of course, the crude opening assumption at work in the trial was the same one held today in equivalent contexts by “anti–racists”: if black drivers are 13% of the population and 23% of those arrested for speeding, then this can only mean police are deliberately singling out black drivers because of their race. As the trial came to a close, New Jersey officials were ordered to collect more data on rates of speeding on the Turnpike—which the Justice Department clearly expected to bolster its charges of profiling. So the Public Service Research Institute used specially designed radar gun cameras to capture automated photographs of speeders. In the end, 38,747 photographs were captured, and of these only 26,334—those in which at least two out of three evaluators (who were not told which subjects had or had not been speeding) were able to agree on the race of the photographed individual—were used for the analysis. What did it find?

In the southern segment of the turnpike which had been the primary subject of the previous lawsuit and in which the bulk of the stops had occurred, where “speeding” was defined as driving at least 80mph in a 65mph zone, 2.7% of black drivers were found speeding compared to 1.4% of white drivers. In other words, black drivers, who were 16% of drivers on the turnpike, were also 25% of the speeders—and when subjects were restricted to a speeding rate of at least 25 miles above speed limit (that is, 90mph or more), the disparity was even greater.

The Justice Department’s pathetic response was to block the release of the study, making a handful of obviously desperate arguments against the validity of the findings: the main argument offered by Mark Posner, who asked the state attorney general’s office to withhold the study, was that results may have been skewed by the removal of photos affected by windshield glare from the analysis. Why would anyone expect windshield glare to affect races differently? And even if it did, why would anyone expect it to affect white drivers in particular several times more often than it affects blacks?! Such spurious objections were finally abandoned, and the study was at last allowed to be released—but not until 2005, three full years after it had been completed (read it here).

From the abstract: “Racial profiling is often measured by comparing the racial and ethnic distribution from police stop rates to race and ethnicity data derived from regional census counts. However, benchmarks may be more appropriate that are based on … the population of traffic violators. … The results revealed that the racial make–up of speeders differed from that of nonspeeding drivers and closely approximated the racial composition of police stops. Specifically, the proportion of speeding drivers who were identified as Black mirrored the proportion of Black drivers stopped by police.” That is what the most thorough analysis of the most objective possible data—recorded by automated machines, which can’t very plausibly be accused of racist bias; and classified so thoroughly that more than 12,000 out of 38,000 uncertain photographs were thrown out just to be safe—found. Black drivers were being stopped more frequently because they were speeding more frequently—period. Had the Public Service Research Institute not been ordered to collect this information after the fact, the claim that the New Jersey Police Department was practicing egregious racial profiling on the Turnpike would most likely have continued to stand without serious challenge.

 _______ ~.::[༒]::.~ _______

Similarly, in all the discussion of the disparate treatment of minorities by police, what almost every conversation systematically lacks is any contextual awareness of how much crime minorities are or aren’t responsible for in the first place. Now, as then, most analysis of the relationship between race and crime simply compares a group’s statistical outcomes to their representation of the population—which completely leaves aside the only relevant question: how does the group comprising that percentage of the population behave on average? Again, no one would ever fail for a moment to keep this question in mind if the topic were the crime or imprisonment rates for men in general.

And the truth is that African–Americans are (like men) responsible for an extremely disproportionate amount of crime in the United States in general. Contrary to a popular impression, we are not reliant solely on arrest rates themselves to determine the relative rates of criminal offending here—so police bias does not enter as a confounding factor into the calculation. A primary source of reliable data here is, in fact, reports from victims and witnesses themselves.  Using the National Archive of Criminal Justice Data to analyze data from the National Crime Victimization Survey (NCVS), this report[1] finds that: “For the most recent report, the government surveyed 149,040 people about crimes of which they had been victims during 2003. They described the crimes in detail, including the race of the perpetrator, and whether they reported the crimes to the police. The survey sample, which is massive by polling standards, was carefully chosen to be representative of the entire US population. By comparing information about races of perpetrators with racial percentages in arrest data from the Uniform Crime Reports (UCR) we can determine if the proportion of criminals the police arrest who are black is equivalent to the proportion of criminals the victims say were black.

UCR and NCVS reports for the years 2001 through 2003 offer the most recent data on crimes suffered by victims, and arrests for those crimes. Needless to say, many crimes are not reported to the police, and the number of arrests the police make is smaller still. An extrapolation from NCVS data gives a good approximation of the actual number of crimes committed in the United States every year. The NCVS tells us that between 2001 and 2003, there were an estimated 1.8 million robberies, for example, of which 1.1 million were reported to the police. The UCR tell us that in the same period police made 229,000 arrests for robbery. Police cannot make an arrest if no one tells them about a crime, so the best way to see if police are biased is to compare the share of offenders who are black in crimes reported to the police, and the share of those arrested who are black. Figure 1 compares offender information to arrest information for all the crimes included in the NCVS. For example, 55 percent of offenders in all robberies were black, 55.4 percent of robbers in robberies reported to police were black, and 54.1 percent of arrested robbers were black.”

What this implies for the justification for supposing police disproportionately target minorities for arrest is surprising: “For most crimes, police are arresting fewer blacks than would be expected from the percentage of criminals the victims tell us are black (rape/sexual assault is the only exception). In the most extreme case, burglary, victims tell police that 45 percent of the perpetrators were black, but only 28 percent of the people arrested for that crime were black. If all the NCVS crimes are taken together, blacks who committed crimes that were reported to the police were 26 percent less likely to be arrested than people of other races who committed the same crimes. These figures lend no support to the charge that police arrest innocent blacks, or at least pursue them with excessive zeal. In fact, they suggest the opposite: that police are more determined to arrest non–black rather than black criminals.”

While it may appear to be a glitch in this analysis that “more crime victims report crimes to police when the criminal is black than when he is of another race”, the reason for this appears to be that: “NCVS victims are more likely to call the police about more serious crimes within the same category—for example, if a robber had a gun or a knife. According to NCVS victims, blacks are nearly three times more likely than criminals of other races to use a gun and more than twice as likely to use a knife. Therefore, even within the same crime categories, blacks are committing more serious offenses—which makes it even more striking that police are less likely to arrest them than criminals who are not black.”

To be clear, these facts hold regardless of the reason why they are true—no implication that blacks are ‘more dangerous’ when all else is held equal is necessary in order for this point to stand on its own irrefutable merits. One partial explanation of higher crime rates in black communities, for example, is surely that most crimes are committed across the peak ages of 15–25—and the African–American population skews closer towards this younger demographic than others (as of 2011, more than 50% of the African–American population was under 18 years of age). Even still, if—as I could accept—there is no independent “race factor” in perpetration of crimes whatsoever, and the real explanation of “higher African–American criminality” is in some incidental factor like the younger age structure of the African–American population, then it would follow that the explanation for disproportionate African–American encounters with police, too, is in the younger age structure of the African–American population (or whatever other factor or combination of factors might be deemed most relevant)—and not in discrimination on the basis of race.

In closing, I note this study from 2013: No evidence of racial discrimination in criminal justice processing: Results from the National Longitudinal Study of Adolescent Health — “One of the most consistent findings … is that African American males are arrested, convicted, and incarcerated at rates that far exceed those of any other racial or ethnic group. This racial disparity is frequently interpreted as evidence that the criminal justice system is racist and biased against African American males. Much of the existing literature purportedly supporting this interpretation, however, fails to estimate properly specified statistical models that control for a range of individual-level factors…. This racial disparity … was completely accounted for after including covariates for self–reported lifetime violence and IQ.”

 _______ ~.::[༒]::.~ _______

Thus, a 2002 study in the American Journal of Public Health found that from 1979 to 1997, the death rate due to what the CDC calls “legal intervention” was roughly three times higher for blacks than for whites: “Of the 5486 total deaths due to legal intervention during the 19–year period 1979 to 1997, 5330 decedents (97%) were male. Whites accounted for 3447 deaths (63%), Blacks for 1885 deaths (34%), and “others” for 154 deaths (3%). … mortality rates for both White and Black males were highest in the 20–to 24–year-old age group … [which] roughly parallels the age distribution of death rates for homicides due to all causes, which peaks at 15 to 24 years….” Of course, once again, no one thinks to take a study like this as evidence for systematic bias against all men; and few would doubt—much less consider it sexist—that differences between male and female behavior during interactions with police are a major factor in this statistic.

However, once again, when we control for the actually relevant data—violent crimes committed—as a proxy measurement for interactions with police (which is, as we have seen, well empirically supported) we find the following:

2012 Violent Crime Rate (per 100,000):
Non–Black: 122.7
Black: 465.7

2012 Deaths By “Legal Intervention” (per 100,000):
Non–Black: 0.15
Black: 0.32

(Sources: 1, 2)

Placing the two figures together, we get the number of violent crimes committed per death by “legal intervention.”

Non–Black: (122.7/.15) =
1 non–black death by “legal intervention” per 818 violent crimes committed.

Black: (465.7/.32) =
1 black death by “legal intervention” per 1,455 violent crimes committed.

These two quotients answer the question, “how many violent crimes did it take to result in one death by ‘legal intervention’ for blacks and non–blacks?”—and dividing the second by the first tells us how much likelier a fatal encounter is per violent crime. Once again, once we control for the figure that is actually relevant—the number of violent crimes committed, which accounts for the number of encounters with police—we find that it is in fact non–blacks who are most likely to have a fatal encounter with police: 1.78 times more likely, in fact—almost twice as likely. In other words, any given black individual who commits a violent crime has a 0.0687% chance of dying in an encounter with police, while any given non–black individual who commits a violent crime has a 0.122% chance—a risk 0.0535 percentage points larger.

Again, the term “non–black” is used here because “white” and “Hispanic” appear to be lumped together in this data. But for the “white” risk of death by “legal intervention” to be lower than the black risk, the “Hispanic” risk would have to be far larger than both the white and black risks in order for the numbers to balance out. Given that almost all statistics conflate “whites” and “Hispanics,” this is an extremely difficult question to resolve, but there are a variety of reasons why it is extremely unlikely (one of these will be discussed below).
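The calculation above can be spelled out explicitly. A minimal sketch, using only the 2012 per-100,000 rates quoted earlier in this section:

```python
# Sketch of the "violent crimes per death by legal intervention" comparison,
# computed from the 2012 rates (both per 100,000 population) given above.
violent_crime_rate = {"non_black": 122.7, "black": 465.7}
death_rate = {"non_black": 0.15, "black": 0.32}

for group in violent_crime_rate:
    crimes_per_death = violent_crime_rate[group] / death_rate[group]
    risk_per_crime = death_rate[group] / violent_crime_rate[group]
    print(f"{group}: 1 death per {crimes_per_death:.0f} violent crimes "
          f"({risk_per_crime:.4%} risk per crime)")

# Relative likelihood: how much likelier is a fatal encounter per violent
# crime for non-blacks than for blacks?
ratio = (
    (violent_crime_rate["black"] / death_rate["black"])
    / (violent_crime_rate["non_black"] / death_rate["non_black"])
)
print(f"non-black fatality risk per violent crime is {ratio:.2f}x the black risk")
```

This reproduces the 818 and roughly 1,455 crimes-per-death figures and the 1.78x ratio cited in the text.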

Similarly, extrapolating from data peculiar to New York, Heather MacDonald writes that: “…blacks in New York are less likely than whites to be killed by the police when their higher rates of using mortal force against the police are taken into account. In 2011, for example, New York officers fired at 41 suspects and killed nine of them — an astonishingly low number in light of New York’s population and the size of its police force. … Blacks were 22% of those fatalities; whites were 44% of them. Yet blacks were 67% of all suspects who fired at the police; no white suspect fired at the police. … This pattern holds nationally. The black percentage of suspects killed by the police, historically around 29%, is lower than one would expect based on the best available data on those who represent a mortal threat to the police … In 2013, for example, blacks made up 42% of all cop killers whose race was known, even though they are only 13% of the nation’s population.” Once again, measuring the actually relevant data not only reveals no indication that black suspects are treated unfairly in response to their behavior, but that if anything, this response is less than actual black rates of violence would render statistically justifiable.

Two other conclusions from the aforementioned study, “Trends in Mortality Due to Legal Intervention in the United States,” bear noting: (1) “Legal intervention is an uncommon external cause of death, accounting for roughly 1% to 2% of all homicides.” And: (2) “Absolute numbers of yearly deaths due to legal intervention, as well as rates of death for all age– and race–specific categories examined, decreased significantly from 1979 to 1988 and did not display statistically significant trends thereafter. This decline roughly parallels a concurrent decline in the overall homicide rate during this period.” In other words, as crime goes down, so does the number of criminal suspects who die in encounters with police—this should be obvious, but it gives the lie to any impression that instances of “police brutality” are either on the rise (for legitimate causes or not), or unrelated to the actions of violent criminals themselves.

One hardly irrelevant contributing factor to this declining homicide rate is, in fact, the incarceration rate—see the 2007 study, From the Asylum to the Prison: Rethinking the Incarceration Revolution, by Bernard E. Harcourt, Professor of Law at the University of Chicago. Page three includes the following chart:

Harcourt states—“Th[is] relationship between aggregated institutionalization and homicide rates … is remarkable…. Later…, I test and quantify the relationship and find that, … holding constant the leading structural covariates of homicide (poverty, demographic change, and unemployment), the relationship is large, statistically significant, and robust.”

Notice: because most violent crime—especially murder—takes place within, rather than between, racial groups, when black criminals are stopped it is their mostly black victims who primarily gain (this point may be brought up again in a later discussion of New York’s stop and frisk policies). It is hard to see in principle why “racism” would explain police putting themselves in harm’s way to act for the benefit of violent black criminals’ primarily black victims. Wouldn’t the truly “racist” move be to move somewhere else and ‘avoid the ungrateful bastards’ entirely?

_______ ~.::[༒]::.~ _______

All of this data coincides, as well, with a relatively recent experimental study which tested, for the first time, how quickly officers would pull the trigger on suspects of various racial demographics ‘in the field.’ This study is notable and important for two reasons: first, the officers in this study were told that they were being tested for shooting errors (which meant either shooting unarmed suspects, or failing to shoot armed suspects) and speed—and were given no reason whatsoever to believe they were being tested for racial bias (and as the study authors further note, “there were no racially charged events or news stories in the area at the time.”)

Second, all previous findings have been based on what are known as Implicit–Association Tests. In short, these tests work by showing a screen with something like ‘White’ on the left–hand side and ‘Black’ on the right–hand side, then flashing positive or negative words in the middle and requiring participants to press either a left or right arrow as quickly as possible. The speed at which participants slide positive or negative words to the right or left is supposed to demonstrate the degree to which they subconsciously view whites or blacks in a positive or negative way.

A variety of conceptual and empirical problems plague the question of how well the results of IATs are actually supposed to apply to the real world: first, it is unclear whether IATs actually measure bias held personally by the participant himself—it could be just as likely that in split–second scenarios, forced to apply a word either to one category or the other, the participant is simply demonstrating knowledge of a cultural stereotype rather than any sort of ingrained personal conviction in its truth. I have no doubt, personally, that if I were given a screen with the word “Blonde” on one side and “Brunette” on the other and forced to slide “Dumb” to either the left or the right, I would select “Dumb Blonde” over “Dumb Brunette”—simply because I am aware of the former’s existence. I am extremely skeptical that this would suffice to show that I would unfairly judge the same woman exhibiting identical behavior as less intelligent if she were blonde rather than brunette. (Note: even though the stereotypical “dumb blonde” is attractive, and attractiveness is in fact statistically associated with higher IQ, there may nonetheless be an empirical basis behind the “dumb blonde” stereotype just as there is for the stereotype of the dumb athlete, even though athletic capability is statistically associated with higher IQ as well. These questions are more complex than either people who too readily dismiss, or too readily accept, common stereotypes usually realize.)

Next, what if these perceptions are actually justified? I can recognize—as I do—that it is rational to associate men more closely with violent crime than women because men commit a massive majority of violent crimes, even though a majority of men do not commit violent crimes and it therefore makes no sense to expect any given man to be more probably violent than not on the basis of this fact. In other words, I can recognize that it makes sense to associate men with more “violent” words without any implication following that if all else is held equal, I am going to be biased against a man exhibiting identical behavior to a woman in some given scenario—say, that I would expect a man knitting while wearing an egg–white face mask watching Days of Our Lives in a room full of cats to be intrinsically more prone to violence than a woman doing the same, or a female police officer with a mean, orcish face to be less violent than a similar–looking male cop. Likewise, it should be just as possible to associate criminality with African–Americans in exactly the same way without it following that I am biased to see any African–American exhibiting identical behavior in an identical scenario as more inherently prone to violent behavior than a Caucasian (say, expecting the economist Thomas Sowell to be more violent than Thomas Piketty). (See also the second paragraph of the first footnote, which starts: “Criminologists estimate that seventy percent of all crimes are committed by just seven percent of the offenders….”)

Third, even if implicit–attitude tests do measure personally held beliefs and not the mere awareness of stereotypes, and even if these beliefs are irrationally overextended by those who hold them to individuals in irrelevant situations rather than merely believed as true generalizations (which people fully well understand carry exceptions), it is still true that people can very easily overcome these split–second biases—even within the confines of the IAT itself.

Quoting from Alfred Mele and Joshua Shepherd in Situationism and Agency:

“Xiaoqing Hu and colleagues (2012) had participants take an IAT and then take it again. On the second trial, they separated participants into four groups. Group 1 simply repeated the IAT to test for the influence of task repetition. Group 2 repeated the incompatible response block of the IAT three times to test for the influence of practice. Group 3 was explicitly instructed to speed up their responses in incompatible response situations. Group 4 was told the same thing as group 3, and they were also given more time to practice; they repeated the incompatible response block three times, just like group 2. … If a conscious intention to speed up responses is to be effective, one would expect group 3 to respond faster than group 1 in the incompatible response conditions. One would also expect group 4 to respond faster than group 2 in the incompatible response conditions. This is what happened (Hu et al. 2012, p. 3, Table 1). Group 3 improved response time by 168 ms (from 902 ms to 734 ms), while group 1 improved response time only by 45 ms (from 950 ms to 905 ms). Compared with group 2, group 4 significantly improved response time as well. Practice certainly seemed to help: group 2 improved response time by 80 ms (from 922 ms to 842 ms). But group 4 improved response time by 215 ms (from 858 ms to 643 ms). … That both a conscious intention and training in speeding up responses had large effects on behavior constitutes important evidence in favor of our optimism. Participants were, in effect, asked to control the influence of implicit attitudes on behavior at a very rapid time scale—less than a second. Participants informed about the influence of implicit attitudes on behavior were able to successfully control the influence of these implicit attitudes. This directly counters the common assumption that implicit attitudes influence behavior in ways not susceptible to conscious control. 
Knowledge about effects on agents that normally fly under the radar of agents’ consciousness can give people the power to weaken those effects. The fact that relevant knowledge can do this at such rapid time scales is striking, and it speaks against a pessimistic perspective on agential control.”

Thus, the findings of this study from 2012 put to the test, for the first time, the premise that previous tests of the racial biases of police officers merely held as an unquestioned assumption: how does any of this apply in practice once we actually make our way out into the field? And the results: “In all three experiments using a more externally valid research method than previous studies, we found that participants took longer to shoot Black suspects than White or Hispanic suspects. In addition, where errors were made, participants across experiments were more likely to shoot unarmed White suspects than unarmed Black or Hispanic suspects, and were more likely to fail to shoot armed Black suspects than armed White or Hispanic suspects. In sum, this research found that participants displayed significant bias favoring Black suspects in their decisions to shoot.” The paper also references a 2007 study by Joshua Correll, ‘The influence of stereotypes on decisions to shoot,’ of which it says: “ … unlike civilian participants, the sample of police officers showed no significant racial bias in their errors (they did not mistakenly shoot unarmed Black suspects or fail to shoot armed White suspects disproportionately). Correll and his colleagues suggested that: “by virtue of their training or expertise, officers may exert control over their behavior, possibly overriding the influence of racial stereotypes….””

Where the 2014 study improved over the 2007 study was that the former, for the first time, ran this experiment with an actual weapon, rather than asking participants to press a button that simply says “Shoot” or “Don’t Shoot” in a video game—as the authors write in explanation, “Firing a handgun is a complicated endeavor; at minimum, it involves un–holstering, bringing the weapon to a ready position, aligning sights with the target, and ultimately pulling the trigger. Pushing a button is a simple reflex, dramatically different to the complex process involved in shooting a firearm. Furthermore, there is no active difference between pressing a “shoot” and a “don’t shoot” button. The same action is required for a decision to shoot and a decision not to shoot, whereas in field encounters a decision not to shoot is marked by inaction.” This should very obviously recall the points just made regarding the speed at which participants in IAT experiments are found capable of adjusting and controlling their “biased” responses: the very existence of a handgun, in the real world, immediately weakens any implications that are supposed to extend from these findings by inherently increasing the time for a decision. Merely adding the complexity of firing a handgun rather than pressing a button was enough to change the findings of the experiment even further. How much more would it impact these dynamics to factor in extended interaction with the actual behavior of a real, live suspect?

In any case, the study found that “[active duty police] participants [in experiment 3] took significantly (1.34 s) longer to shoot Black suspects than White suspects … … we calculated that [active duty police] participants were 25 times less likely to shoot unarmed Black suspects than they were to shoot unarmed White suspects …. There was no significant difference between the likelihood of shooting unarmed Hispanic suspects and unarmed White suspects. … [active duty police] participants were equally likely to fail to shoot armed White, Black, and Hispanic suspects at each level of difficulty.” Critically, the hesitation towards Black suspects became greater—not lower—as uncertainty rose in the more difficult experiments: “There was … a significant interaction between suspect race/ethnicity and scenario difficulty; participants were most likely to shoot unarmed White suspects in journeyman [the highest difficulty–level] scenarios.” Not only does this, the best experimental data so far available, converge with the analysis of the actual empirical data presented above; it also adds strong support to the conclusion that the “Hispanic” death–per–crime rate is not illegitimately inflating the high “white” death–per–crime rate presented there—and implies that the explanation for this disparity in the data is that police simply do hesitate more to fire at armed black suspects in particular.

In conclusion: the best empirical evidence and the best experimental data agree. 

_______ ~.::[༒]::.~ _______

If police hesitate more to fire at armed black suspects, why might that be? The authors of the 2014 study quote a 1977 study (which already then found that police shootings of black suspects were proportionate to black suspects’ disproportionate shooting of police officers) giving the obvious answer: “ … police behave more cautiously with Blacks because of departmental policy or public sentiment concerning treatment of Blacks….”

In other words, when a white suspect is shot, the police department doesn’t face the prospect of accusations of racism—so there is quite simply less to worry about. And cases which have been widely covered in the media over the past handful of years demonstrate clearly just how powerful these accusations can be, even when the evidence for “racism” hangs on demonstrably slender threads. In more than one incident, the entire life of an individual who merely defended himself against assault was changed in such a way as to render any return to a normal life impossible, due to death threats (and worse), because the suspect who assaulted him was black, and accusations of racism therefore entered the picture and permanently clouded all future evaluation of the actual evidence or facts of the case.

These instances show very clearly that the facts bear very little weight whatsoever once accusations of racism are made. Meanwhile, a variety of cases of at least seeming acts of police brutality against white victims went almost wholly ignored, except in small pockets of conservative media where they were used instrumentally as counter–points to the prevailing racial narrative—and even here, they weren’t being investigated out of any genuinely intrinsic interest.

I’ll restrict myself for now for the sake of brevity to a brief discussion of the facts in the case involving the eighteen–year–old Michael Brown and Officer Darren Wilson which sparked the 2014 riots in Ferguson, Missouri and elsewhere. The original narrative would have it that events took place in a way that resembled something like this: Wilson was a police officer going about an ordinary day just like any other when he was suddenly so incensed to see a well–educated, upstanding black man walking the street that he dove out of his vehicle in rage—at which point Michael Brown dropped to his knees and held his hands in the air in surrender—at which point Wilson, unmoved by the display, proceeded to shoot Brown in the back until he collapsed, and then continued firing rounds into the lifeless corpse solely to vent his unrelenting and baseless hatred of upstanding, college–bound African–Americans.

Reality couldn’t have been more different.

The incident started when Brown stole from a local convenience shop—cameras caught him strong–arming the clerk:

Next, Darren Wilson encountered Michael Brown jaywalking in the middle of a street, and stopped to ask him to move to the sidewalk. Forensic evidence confirmed that Brown’s response to this was to attack Wilson by reaching through the car window and trying to grab Wilson’s firearm: a shot was fired inside the vehicle, confirming Wilson’s account of a struggle; dust on Brown’s right hand confirmed that his hand had been within very close range of the shot (which could not have taken place later); Brown’s DNA was found inside the vehicle. Once this attempt was unsuccessful, Brown fled and Wilson pursued (as this was now, after all, a case of assault, if not attempted murder). Finally,

When Wilson arrived within range of Brown and ordered him to freeze, by Wilson’s account as well as that of the most credible witnesses, Brown paused, made some sort of gesture, and then charged in something like a football tackle pose back towards Wilson’s direction. The U.S. Department of Justice’s report notes that “Brown’s blood in the roadway [as well as the pattern of shell casings] demonstrates that Brown came forward at least 21.6 feet from the time he turned around toward Wilson”—once again, the forensic evidence conclusively supported Wilson’s account. Wilson wasn’t pursuing Brown when the fatal rounds were shot—he was strafing away in defense.

The witnesses who had claimed anything otherwise were all resoundingly discredited. Every witness whose statements were compatible with the irrefutable forensic evidence corroborated Wilson’s account of events. As the DOJ report concluded, “While credible witnesses gave varying accounts of exactly what Brown was doing with his hands as he moved toward Wilson … they all establish that Brown was moving toward Wilson when Wilson shot him. Although some witnesses state that Brown held his hands up at shoulder level with his palms facing outward for a brief moment, these same witnesses describe Brown then dropping his hands and “charging” at Wilson.”

_______ ~.::[༒]::.~ _______

Meanwhile, at least (for sake of brevity) one significant case in which a white suspect was killed under questionable circumstances—by a black cop, no less!—received effectively no attention whatsoever: Just two days after the shooting of Michael Brown, a black police officer was cleared of wrongdoing after shooting Dillon Taylor within mere seconds of a rushed encounter, when the officer suspected Taylor was reaching for a weapon as he moved his hands toward his waistband, most likely for the simple purpose of pulling his pants up. Taylor later turned out to be unarmed, and a body camera captured the entire event on film. No national outrage followed. No riots took place. No buildings were burned. No black citizens were randomly attacked in retaliation; no black protesters sympathetic to the white victim were lured into ambushes by groups of whites, or smashed with hammers while wearing ‘Stop Killing White People’ t–shirts, for suggesting that unrelated businesses not be destroyed.

Unlike the case of Michael Brown, Dillon Taylor hadn’t robbed any stores or strong–armed any clerks, and he neither charged in the officer’s direction nor attempted to take his weapon away from him. In the very similar case of Tamir Rice, officers were called to the scene in response to reports that a child was walking around carrying and pointing what looked to all outside appearances to be a real weapon—and the officer who arrived at the scene fired hastily when Rice very clearly reached for that perfectly visible and obvious object. Unlike in the case of Tamir Rice, Dillon Taylor was not carrying even a replica of a weapon—and unlike the case of Tamir Rice, there isn’t even a Wikipedia entry I can link to here for further details behind the case of Dillon Taylor. The “Justice for Dillon Taylor” Facebook page has around 5,000 followers. The “Justice for Tamir Rice” page has around 8,000. The “Justice for Michael Brown” page has almost 30,000. Only one member of this group was proven by overwhelming forensic evidence to have attacked his killer first—possibly in an attempt at murder—without provocation after committing an aggressive crime.

Yet, when the naive version of the story of the Michael Brown shooting was finally refuted once and for all, the attitude of many protesters was represented by this statement to one reporter: “Even if you don’t find that it’s true, it’s a valid rallying cry … It’s just a metaphor.” Such is the nature of recent national incidents in which race was claimed to play a major role: it simply doesn’t matter, individually, whether or not any particular claim is actually true.

But the truth is that there is more, and not less, outrage when a black suspect is victimized—not only when comparing situations which are similar (Tamir Rice vs. Dillon Taylor), but even when comparing cases where the black suspect is in fact a violent aggressor and the white suspect is not (Michael Brown vs. Dillon Taylor). In turn, this outrage leads to increased public awareness when a black suspect is victimized, and relative public ignorance when a white suspect is the victim. Disproportionate outrage towards the victimization of black suspects is fueled by the perception that black suspects are disproportionately victimized by police. The perception that black suspects are disproportionately victimized is created through nothing other than the very existence of that same disproportionate outrage.

The perception of racism in policing is like an ouroboros,
fueling its disproportionate outrage by consuming
the tail of its own disproportionate outrage.

_______ ~.::[༒]::.~ _______

[1] To be clear, the organization behind this report is American Renaissance. I rely on it here solely because it is one of the few sources of discussion of the racial breakdown in victim reports I was able to find, and to reject this piece of data because of its source would be to throw the baby out with the bathwater. Notably, in his rebuttal to it, even Tim Wise says nothing about the report’s discussion of victim reports—he only attacks further extrapolations from the data derived from them, and I require none of these other points for my much more limited purposes here.

While Wise’s response has further problems of its own and I fully endorse neither the original report nor Tim Wise’s critique of it, Wise does rightly note that: “Criminologists estimate that seventy percent of all crimes are committed by just seven percent of the offenders: a small bunch of repeat offenders who commit the vast majority of crimes. Since blacks committed roughly 1.2 million violent crimes in 2002, if seventy percent of these were committed by seven percent of the black offenders, this would mean that at most there were perhaps 390,000 individual black offenders that year. In a population of 29.3 million over the age of twelve, this would represent no more than 1.3 percent of the black population that committed a violent crime in 2002. [If blacks committed 1.2 million violent crimes in 2002, and 70 percent of these were committed by 7 percent of the offenders, then 30 percent were committed by the remaining 93 percent of offenders. 30 percent of 1.2 million offenses is 360,000 offenses. 360,000 represents 93 percent of 387,000. If the remaining 70 percent of offenses (840,000) were committed by 7 percent of the population, this means that these crimes were committed by 27,000 hardcore offenders (7 percent of 387,000)].” This point is entirely valid—and also irrelevant to anything I have argued or need for the purposes of my argument here.  As I wrote previously, “I can recognize—as I do—that it is rational to associate men more closely with violent crime than women because men commit a massive majority of violent crimes, even though a majority of men do not commit violent crimes and it therefore makes no sense to expect any given man to be more probably violent than not on the basis of this fact.”
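Wise’s bracketed arithmetic can be reproduced step by step; a sketch, under his implicit assumption that each of the non–“hardcore” 93% of offenders commits exactly one offense per year:

```python
# Reproducing Wise's bracketed arithmetic from the quote above, under his
# implicit assumption that each non-"hardcore" offender commits one offense.
violent_crimes = 1_200_000        # violent crimes attributed to blacks, 2002 (as quoted)
population_over_12 = 29_300_000   # black population over age twelve (as quoted)

ordinary_offenses = 0.30 * violent_crimes    # 30% of offenses, by the other 93% of offenders
total_offenders = ordinary_offenses / 0.93   # 360,000 is 93% of the offender pool
hardcore_offenders = 0.07 * total_offenders  # the 7% committing 70% of offenses
population_share = total_offenders / population_over_12

print(round(total_offenders))     # ≈ 387,097 (Wise's "387,000")
print(round(hardcore_offenders))  # ≈ 27,097 (Wise's "27,000 hardcore offenders")
print(f"{population_share:.1%}")  # ≈ 1.3% of the population
```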

Violence Against Women, And Violence Against Truth

Recently, I ended up in a discussion with a professor at the University of Oregon who shared this article from the feminist outlet Ms. Magazine titled, “Why Don’t We Talk About the Gender Safety Gap in the U.S.?”

The article quotes a recent U.N. report “documenting “alarmingly high rates” of gender-based violence against girls and women,” and it concludes with the following statements: “A formal commitment to gender equality in the law has yet to mean that men and women benefit equally from societal improvements like lower crime rates … or enjoy anywhere near parity rights to physical freedom and security. … women’s absorption of the gap’s costs continues to be largely taken for granted.”

There are two contexts in which this article quotes any actual figures pertaining to rates of violent crime. The first, and most emphasized context, is in discussion of who reports feeling safest. ““Most Americans,” researchers [behind a Gallup poll conducted in December of 2014] concluded, “continue to feel safe in their immediate communities, with 63 percent saying they would not be afraid to walk alone there at night.” However, the 63 percent number masks a difference: almost half of women, 45 percent, report not feeling safe, whereas 73 percent of men reported that they do [feel safe]. … [These] results lined up closely with European country results of that survey, which … found that 75 percent of men in Europe said they felt safe, compared to 55 percent of women. … these measures undoubtedly reflect different standards of safety, as well as commitments to women’s safety.”

Of course, feeling safe is no absolute measurement of actually being safe, and the article itself does at least half–heartedly take a moment to notice this: “Men … may have an exaggerated sense of confidence about their safety and control…,” though it barely stops to consider the ramifications of this fact for the use of how safe men and women ‘feel’ as a measure of actual safety. In the one and only statement in the whole article to actually address the relative gender rates of victimization in violent crime, the article says this: “Historically, men were more likely to be the victims of violent crimes, however, according to the Department of Justice’s most recent crime report, between 2004 and 2013 rates of violent crimes against men and women reached equal levels of prevalence.”

What you’d never get a full impression of from reading this sentence is that, from 1980 to 2008, men were in fact a whopping 77% of all victims of violent crime. Historically, and up until very recently, men have been by far the vast majority of those victimized by violent crime. Now, you might even get the impression that the article is telling you that “rates … reached equal levels of prevalence” because rates of violence against women have been increasing to match rates of violence against men. Certainly the article does nothing to clarify that this is not the case, and the alarmist tone (“deep structural gender inequities … continue to … perpetuate unconscionably high levels of socially tolerated gender-based violence”) most certainly lends to that impression, even if it isn’t said directly.

However, if this were the impression you gathered, you would be wrong.

First of all, rates of violent crime have in fact been decreasing all across the board.

And therein lies the clue to our “violent rates of crime … reach[ing] equal levels of prevalence.”

Let’s take a look at the actual data.

Violent Crime (per 1,000) –
2004: 25.5 (Female) / 30.2 (Male)
2013: 22.7 (Female) / 23.7 (Male)

Serious Violent Crime (per 1,000) –
2004: 8.4 (Female) / 10.6 (Male)
2013: 7.0 (Female) / 7.7 (Male)

Rates of crime are in fact dropping for both men and women across 2004–2013. They’re dropping relatively more for men than for women, but this hardly fits the narrative that “deep structural gender inequities … continue to marginalize women”—men are relatively more victimized by violent crime when it occurs in the first place (again: over recent decades, more than 3 out of 4 victims of violent crime have, in fact, been men). And as of 2013, those numbers still aren’t quite “equal.” Combining the two figures, we’ve gone from 6.9 more men per 1,000 victimized than women to 1.7 more men per 1,000 victimized than women. That’s still more men than women victimized. 
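Combining the two tables above, the male–female gap per 1,000 can be computed directly; a quick check of the 6.9 and 1.7 figures:

```python
# Male-female victimization gap per 1,000, combining the "violent" and
# "serious violent" rates from the tables above. Tuples are (male, female).
rates = {
    2004: {"violent": (30.2, 25.5), "serious": (10.6, 8.4)},
    2013: {"violent": (23.7, 22.7), "serious": (7.7, 7.0)},
}

for year, cats in rates.items():
    gap = sum(male - female for male, female in cats.values())
    print(year, round(gap, 1))  # 2004 → 6.9, 2013 → 1.7 more men per 1,000
```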

_______ ~.::[༒]::.~ _______

On the basis of all this, I predicted we would see that whether crime rates rise or fall, it is men who are most affected by the change. I settled on 1985–1990 and 1999–2001 as case examples of times when rates of violent crime rose.

The results of that search?

Across 1985–1990, crime rose from about 20,000 to about 25,000 total victims.

In 1985, there were 14,738 male victims and 4,707 female victims of violent crime.
In 1990, there were 19,128 male victims and 5,174 female victims of violent crime.

The number of male victims rose by 4,390. The number of female victims rose by 467.

Across 1985–1990, 90% of the increase in victims of crime were male.

From 1999 to 2001, crime rose from about 17,000 to about 21,000 total victims.

In 1999, there were 12,376 male victims of violent crime and 3,900 female victims of violent crime.
In 2001, there were 15,034 male victims of violent crime and 4,520 female victims of violent crime.

The number of male victims rose by 2,658. The number of female victims rose by 620.

Across 1999–2001, 81% of the increase in victims of crime were male.
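The male share of each increase follows directly from the raw victim counts quoted above; a quick check:

```python
# Share of the victim-count increase that was male, for the two crime-rise
# windows above. Tuples are (start-year count, end-year count).
windows = {
    "1985-1990": {"male": (14_738, 19_128), "female": (4_707, 5_174)},
    "1999-2001": {"male": (12_376, 15_034), "female": (3_900, 4_520)},
}

for label, w in windows.items():
    d_male = w["male"][1] - w["male"][0]
    d_female = w["female"][1] - w["female"][0]
    male_share = d_male / (d_male + d_female)
    print(label, d_male, d_female, f"{male_share:.0%}")  # 90% and 81% male
```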

_______ ~.::[༒]::.~ _______

Her response? “When you are comparing the % increase in violent crime victims you have to compare the increase in male victims vs. the increase in female victims. So if we take your example, victims in 1999 vs 2001, the male victims increased by 21% and the female victims increased by 16%. … we need to look at the % increase relative to the gendered total because you are claiming that if there were an increase in crime, men would be the preponderant victims. For the ratio to go back to the pre-2008 disparity, violent crimes against men would have to increase by 58% and violent crimes against women would have to increase by 0%. No statistic I’ve seen would predict that.”

I answered: “It’s exactly the kind of derived statistic we would get if, say, crime rose by 5000 victims and around 5000 of them were men. That’s not that different from crime rising by 5000 victims and 4400 of them being men, which is exactly what happened in 1985–1990.” To that, she said: “You’re applying the numbers from your 1999–2001 homicide rate increase to the numbers from the 2013 total violent crime stats (over 3 million). If male crime victims increased by 4,400 in 2014 from their 2013 number of 1,567,070, that would be a 0.3% increase in male victims.”

Sounds implausible, right? She’s making it sound like my position is committing me to expect that there’s going to be a 58% increase in male victims—from 1,567,070 to 2,480,000; an increase of 909,000 new male victims—as soon as crime goes back up. But as far as I can tell, it’s an abuse of statistical reasoning. Why? Because it’s just the wrong statistic to look at, plain and simple. There’s absolutely no reason to look at it in the first place, unless you’re scrambling for any way of looking at the data that doesn’t seem to lead to the conclusion I’m presenting.
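For what it’s worth, her headline numbers are at least internally consistent arithmetic, applied to the 2013 male count she cites; the dispute is over whether the percentage-increase comparison is the right statistic at all:

```python
# Internal-arithmetic check of the 58% projection quoted in her reply.
male_2013 = 1_567_070
projected = male_2013 * 1.58  # a 58% increase in male victims

print(round(projected))               # ≈ 2,475,971 (her "2,480,000," rounded)
print(round(projected - male_2013))  # ≈ 908,901 (her "909,000 new male victims")
```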

So I responded to that by drawing an illustration that should make that a little clearer:

“Start with 10 female and 1,000 male victims in a hypothetical year.

Suppose that I’m the “murders per year” monster, and I kill 200 women and 100 men every year.

That’s a trend, and I think we can all agree that that much is obvious. (If we don’t, I quit.)

Alright, so in the first year that’s a 2000% increase in victimized women and a 10% increase in victimized men. Next year, it’s a 95% increase in women and a 9% increase in men. Jump forward three hundred years, and next year we get a 0.33% increase in women and a 0.32% increase in men. Does the change from 2000% and 10% to 95% and 9% to 0.33% and 0.32% look like a trend? No. But we just started out acknowledging that a trend is there: I’m the “murders per year” monster, and I kill 200 women and 100 men every single year. The statistics you get when you compare percentage increases of men and women won’t show you that trend, but the way you measure what I do is by looking at what I do. When I get angry, I kill more men than women. When I calm down, the gender ratio drops back down, because I’m no longer killing more men than women. That is what the data shows.”
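The arithmetic of the illustration is easy to check directly. Here is a minimal sketch; the starting counts and yearly kill figures are simply the hypothetical numbers from the “murders per year monster” illustration above, nothing more:

```python
# Hypothetical figures from the "murders per year monster" illustration:
# start with 10 female and 1,000 male victims, then add 200 women and
# 100 men every single year.
def victims(year):
    """Cumulative (female, male) victims after `year` years."""
    return 10 + 200 * year, 1000 + 100 * year

def pct_increase(year):
    """Percentage increase in each group's cumulative total during `year`."""
    f_prev, m_prev = victims(year - 1)
    f_now, m_now = victims(year)
    return 100 * (f_now - f_prev) / f_prev, 100 * (m_now - m_prev) / m_prev

for y in (1, 2, 301):
    f_pct, m_pct = pct_increase(y)
    print(f"year {y}: women +{f_pct:.2f}%, men +{m_pct:.2f}%")
```

The yearly percentage increases converge toward each other (roughly 2000% and 10% in year one, 95% and 9% in year two, 0.33% and 0.32% by year 301) even though the underlying trend, 200 women and 100 men killed per year, never changes at all. Which is the point: the derived percentages wash the trend out of view.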

In the end, she just kept contorting herself to hold on to those secondary numbers as a way to avoid my conclusion, with only the faintest thread of actual reasoning remaining behind her rejection of it: “You can’t compare apples to oranges. Even in looking at your stats, there is a consistency in numbers which invalidates your argument. I’m sorry you can’t see that.”—and at this, she simply stopped responding to me.

This wasn’t, in general, an unintelligent woman I was talking to.

It takes an ideology to make someone contort themselves this much to avoid something this straightforward.

_______ ~.::[༒]::.~ _______

Stepping back again, we can look at the actual historical trends in what actually happened at previous times when crime fell. We looked, already, at what happened when homicide rose between 1985–1990 and between 1999–2001. So what happened prior to those increases in crime? What happened during the times when crime fell?

In the five years between 1980–1985, crime fell from about 24,300 to about 19,900 total victims.

Thus, in 1980, there were 18,766 total male victims and 5075 total female victims.
And in 1985, there were 14,738 total male victims and 4707 total female victims.

Total male victims fell by 4,028. Total female victims fell by 368.

91% of the decrease in victims of crime between 1980–1985 were male. 

In the ten years between 1990–2000, crime fell from about 24,900 to about 16,800 total victims.

Thus, in 1990, there were 19,128 total male victims and 5169 total female victims.
And in 2000, there were 12,407 total male victims and 3799 total female victims.

Total male victims fell by 6,721. Total female victims fell by 1,370.

83% of the decrease in victims of crime between 1990–2000 were male.

By her reasoning, the male rate fell by 21.5% in 1980–1985 and the female rate dropped by 7.25%. This should have made it unlikely, somehow, for the male rate to have increased by 29.8% and the female rate to have increased by only 9.9% in 1985–1990. Similarly, the male rate fell by 35% in 1990–2000 and the female rate dropped by 26.5%. And this should have made it unlikely, somehow, for the male rate to have increased by 21.5% and the female rate to have increased by only 15.9% in 1999–2001. But that’s exactly what happened, and it’s exactly how the statistic that men are a full 77% of all victims of violent crime remained true across this entire span of time (1980–2008).
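For anyone who wants to verify the derived percentages, they follow mechanically from the raw victim counts. A quick sketch; the counts used here are just the 1980/1985/1990/2000 figures quoted in this section:

```python
# (male, female) victim counts quoted above for each endpoint year.
counts = {1980: (18766, 5075), 1985: (14738, 4707),
          1990: (19128, 5169), 2000: (12407, 3799)}

def changes(y0, y1):
    """Each group's percentage change, plus the male share of the total change."""
    m0, f0 = counts[y0]
    m1, f1 = counts[y1]
    dm, df = m1 - m0, f1 - f0
    return 100 * dm / m0, 100 * df / f0, 100 * dm / (dm + df)

for y0, y1 in ((1980, 1985), (1990, 2000)):
    m_pct, f_pct, share = changes(y0, y1)
    print(f"{y0}-{y1}: men {m_pct:+.1f}%, women {f_pct:+.1f}%, "
          f"male share of the change {share:.1f}%")
```

This reproduces the figures used above: roughly −21.5% and −7.25% for 1980–1985 (with men about 91–92% of the total decrease), and roughly −35% and −26.5% for 1990–2000 (with men about 83% of it).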

That most of the decrease in violent crime impacted men in 1980–1985 (91%) did not stop most of the increase across 1985–1990 from primarily impacting men (90%). Similarly, that most of the decrease in violent crime impacted men in 1990–2000 (83%) did not stop most of the increase across 1999–2001 from primarily impacting men (81%) once again. Either way, Ms. Magazine’s spin on the gender of crime remains hypocritical. The article complains that “formal commitment to gender equality in the law has yet to mean that men and women benefit equally from societal improvements like lower crime rates … or enjoy anywhere near parity rights to physical freedom and security. … women’s absorption of the gap’s costs continues to be largely taken for granted.”

We could just as easily state on the basis of this data that “a formal commitment to gender equality in the law has yet to mean that men and women are protected equally from societal disruption when crime rates rise … men’s absorption of the costs when violence grows continues to be taken for granted”. But feminists wouldn’t dare—the emphasis must be on women–as–the–unequivocal–victims and men–as–the–unequivocally–privileged–elite, no matter what violence must be done to the facts to keep them held within the frame of that narrative.

_______ ~.::[༒]::.~ _______

That’s case study #1. I’ll be quoting a wide variety of further sources in the future to continue supporting the point: feminists don’t care half as much about gender disparities unless they can be spun as harming women most—even if the fact behind the spin is that violence is simply dropping more for men, who have historically been by far the most victimized by violent crime in the first place (and women are still benefiting from a decline). This selective focus, combined with wild exaggerations of those disparities where they are found, has left us in relative ignorance about a variety of disparities that do in fact impact men most, and left us with a wildly distorted and unrealistic impression of what the overall balance of gender disparities in the U.S. really looks like. The truth, in my view, lies somewhere in between the extremes of both “feminist” and “men’s rights activist” narratives.

Feminists often attack anyone who chooses not to define themselves as “a feminist” by claiming that (Jezebel): “to identify as a feminist is simply to acknowledge that women are people, and, as such, women deserve the same social, economic, and political rights and opportunities as other styles of people (i.e., men-people). … If you are not a feminist …, then you are a bad person. Those are the only options. You either believe that women are people, or you don’t.” In practice, however, “feminism” is very clearly widely and largely defined by the belief that women unequivocally get the short end of the social stick. No feminist anywhere is going around defining Men’s Rights Activism (which I might have more to say about in the future) as “the belief that men are people,” saying “If you are not a Men’s Rights Activist, then you are a bad person. … You either believe that men are people, or you don’t.” It shouldn’t strike anyone as controversial to say that this is because Men’s Rights Activism is seen as unnecessary—because of the implied belief that men are already unequivocally receiving all possible benefits of sitting comfortably on top of the social totem pole.

If we accept this as the real definition–in–practice of feminism, however, then the empirical accuracy of its actual premises can be called into doubt—and this article, once again, serves as just one in what will become a far–reaching series of case studies supporting the point that they should. We hear nothing in this article about the fact that men are historically more than three out of four victims of violent crime. We hear no explanation that rates are becoming more similar simply because crime is falling right now in general—and falling for women, too. The male disparity in crime victimization simply isn’t worth discussing, for the vast majority of feminists, until it becomes (almost) equal through falling faster for men (who were a far greater majority of victims to begin with) than for women—at which point we hear that “A formal commitment to gender equality in the law has yet to mean that … women … enjoy anywhere near parity rights to physical freedom and security.” Again, I want to emphasize just how absurd this statement actually is: men have historically been by far the vast majority of victims of violent crime. Now that crime is dropping, the fact that it is dropping faster for men than it is for women—causing the rates to become almost, and yet still not quite, similar—at least for now—is taken to support the conclusion that women have “nowhere near equal rights to physical security.” The situation is, quite literally, and in plain point of fact, the reverse: men are still not quite equal to the low rate of victimization in violent crime that women experienced, as of 2013—and we have every reason to expect that if violent crime rises, that ratio will change to the disadvantage of men once again. Men are almost experiencing the low violent crime rates that women are right now, thanks to a current overall drop in violent crime.
The fact that men are almost being murdered as infrequently as women are right now—for the first time in recent United States history—is taken by these feminists to support the claim that women are “nowhere near equal.” Men for once being almost equal to women in enjoying a low violent crime rate is used to say that women are “nowhere near” equal to men. This reasoning couldn’t be any more ass–backwards.

If I may have the same right to define my own labels in my own terms as feminists claim for themselves, then I’d like to define my choice not to identify as a feminist as a choice to try to return some balance to the conversation.

Violence against women is terrible. Violence against men is terrible too. Why not just stop there?


This is what a feminist looks like. 

Consciousness (IX) — Obey Libet? No Way, José.

(Note: I still consider this entry to be in rough draft form. I haven’t exactly been in the clearest state of mind. Though the rest of the essay gets better, I think, rereading the first paragraph made that especially clear to me. I don’t post these on Patreon until they’re in a refined and finished form. However, so long as I’m capable of writing but not capable of editing for clarity, I’d rather write and have sub–par work already done and ready to be refined at some point later than sit and wait until I can pull off essays that feel to me like perfect tens.)

One of the best cases for showing just how deeply bad philosophy can corrupt perfectly reasonable scientific experimentation is the “science ‘on’ free will.” The process strains data through philosophical lenses of interpretation to create something that is no longer raw data, but philosophy—generated only partially in reaction to data, and partially through conceptual filters which are themselves justified not by data, but by rationalistic considerations about what constraints a satisfactory account of the phenomena in question would need to conform to in principle—and then pretends not to have done so: it pretends that the end result of this process is just plain science, refuses to defend the philosophical premises involved on the philosophical terms they require, and implies that these concepts therefore carry the full weight of authority of a finding of Science per se. I write the phrase in scare–quotes because this data, in and of itself, isn’t “on” free will at all—to interpret it as bearing relevance to the question of free will is a philosophical claim about the science, and not some simple and straightforward description of what the data of that scientific investigation itself is plainly doing. I think this will become clear as I proceed through the analysis and spell out the exact details of what I mean.

There is a great deal of science that is claimed to have relevance for the question of free will, from the “readiness potentials” found in experiments from neuroscience to the “situationist effects” found in experiments from social psychology. As I spell out the way these have been argued to undermine belief in the possibility of free will, it will become clear that the crucial steps of interpretation involve philosophical assumptions—and that with different philosophical assumptions we could very well interpret that very same data to significantly different results. The crucial questions, therefore, hinge on which set of philosophical assumptions can be best justified—and our understanding of the data will follow from these, rather than the reverse: these philosophical assumptions are not proven or disproven by the data[1], but stand independent from it and come first; these assumptions determine how we will read the data. It only ever appears to be otherwise—it only ever appears that empirical data conclusively proves answers to the philosophical questions that determine how we interpret that data—because people interpret the empirical data through philosophical assumptions which they do not explicitly identify and recognize and then—voilà; quelle surprise!—something comes out of the other end of the process that conforms with those very assumptions. The greatest value and import of philosophy is precisely to draw these implicit assumptions out into our explicit awareness so that we can acknowledge them as the assumptions that they are—recognize that they are not, in fact, the only option—and then evaluate them in actually appropriate terms against the relevant, real alternatives.

 _______ ~.::[༒]::.~ _______

Notice: When I talk about “free will” in this post, I am talking about the robust sort of free will that entails that right up until the moment of my choice, nothing in the previous physical states of the Universe determines what my choice is going to be; and at the moment I make my choice, I alone determine what that decision will be. The idea that this is the sort of “free will” that most of us feel that we have is an empirical question, and it is separate from the question of whether we really have it. The technical term for the range of views which say that this is the kind of “free will” that we feel ourselves to have (or want, or should want) is “incompatibilism.” “Compatibilists,” by contrast, argue that the only sense of the term “free will” that we do want, or should want, or perhaps that even means anything at all is the sense in which I make the decision because I want to, and not because someone else has a gun to my head—even if both my decision and my desire were absolutely set in metaphysical stone from the moment of the Big Bang. While compatibilism is a more or less uniform position, incompatibilists are divided between those who believe that we do have the metaphysical kind of free will they believe we intuitively feel ourselves to have (these are called “libertarians”), and those who believe that we do not (these are called “hard determinists”).

I adopt the incompatibilist definition for two reasons: first, because given that it is the most “metaphysical” sense of the concept of “free will,” using it will even more forcefully demonstrate my point that metaphysical assumptions influence how we interpret the empirical findings of science far more than scientific findings determine which metaphysical convictions we come to hold—both in practice (because in practice, we read scientific findings through metaphysical assumptions even when we do not realize or acknowledge this at all) and in principle (because it is impossible in principle not to read scientific findings through metaphysical assumptions, and likewise impossible in principle to arrive at metaphysical convictions through scientific data alone without turning that data into actual concepts by filtering it through philosophical assumptions). This point will stand even if you think the idea that this kind of free will could in fact be possible is nonsense.

Second, I adopt the incompatibilist definition because I think it is empirically correct. This 2010 study by Sarkissian et al., Is Belief in Free Will a Cultural Universal?, “extends previous research by presenting a cross–cultural study examining intuitions about free will and moral responsibility in subjects from the United States, Hong Kong, India and Colombia. The results revealed a striking degree of cross–cultural convergence. In all four cultural groups, the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism”—these findings, Sarkissian et al. argue, imply “fundamental truth[s] about the way people think about human freedom.”

In the book Free Will and Consciousness: A Determinist Account of the Illusion of Free Will, “hard determinist” Gregg D. Caruso writes: “ … I maintain that our phenomenology strongly supports an incompatibilist, libertarian, essentially agent–causal conception of free will. … compatibilists cannot simply neglect or dismiss the nature of agentive experience. … our phenomenology is rather definitive. From a first–person point of view, we feel as though we are self–determining agents who are capable of acting counter–causally. … we all experience, as Galen Strawson puts it, a sense of “radical, absolute, buckstopping up–to–me–ness in choice and actions” (2004, 380). …  In addition to experiencing a robust sense of self, we also perceive ourselves to be uncaused causes. When I perform a voluntary act, like reaching out to pick up my coffee mug, I feel as though it is I, myself, that causes the motion. We feel as though we are self–moving beings that are causally undetermined by antecedent events.”

So why does Caruso turn against these libertarian–incompatibilist intuitions? To quote from Jonathan M. S. Pearce’s review, “Caruso characterizes agent–causalism as a theory committed to a dualist picture of the self. And it is this alleged feature of agent–causal strategies that is his main target; he argues that agent–causation involves a violation of physical causal closure (pp. 29-42).” Unsurprisingly, neither Pearce nor Caruso seriously considers the possibility that something like the dualist picture of the self could in fact be true—but we have already seen that this, itself, is a metaphysical question which depends on the truth or falsity of philosophical conceptual claims about the relationship between objectively measurable physical states and subjective states of conscious experience—claims that cannot be established by “data” as such, which by definition deals with only one half of this equation. Again summarizing the quotation from William James in the last entry, “In strict science, we can only write down the bare fact of concomitance [between conscious experiences and physical brain states]; and all talk about either production or transmission, as the mode of taking place, is pure superadded hypothesis, and metaphysical hypothesis at that, for we can frame no more notion of the details on the one alternative than on the other. Ask for any indication of the exact process either of transmission or of production, and Science confesses her imagination to be bankrupt.”

But notice that Caruso sees no need to defend the conception he holds of the nature of causal closure, which blithely rules out the possibility of conscious causal efficacy by definition—that this premise is both necessarily true, and necessarily true in a way that rules out the possibility of the dualist picture of the self he claims agent–causal theories depend on, is simply taken for granted. And Pearce doesn’t question it either—he simply wonders if Caruso isn’t “sparring with a straw opponent,” because the notion that anyone could actually think with any degree of justification whatsoever that a dualist picture of the self just might be accurate is too absurd to even consider—so Pearce can only wonder if Caruso isn’t making the job too easy by imagining that that’s what his opponent thinks. (I’ll be exploring these arguments from causal closure in more detail later on. Suffice it to say for now that I don’t think it’s anywhere near that easy. I stop just short of saying that the principle of causal closure, at least as ordinarily understood, can be downright refuted deductively.)

In any case, the point stands that these are philosophical considerations, and not anything proven by any kind of direct experiment—and notice that Caruso begins addressing them on page 15, well before the first mention of the usual introduction to scientific studies on the topic—the experiments conducted by Benjamin Libet in the 1980s—begins to appear somewhere past page 100. The account is one which the advocate of agent causation cannot put forward “wholly without embarrassment” because it would “require giving up … one of the core principles of atomistic physicalism”: namely, that all the properties of any given thing are nothing more than the sum of its parts—and the properties which atomistic physicalism supposes the basic “parts” of reality to be composed of are all presumed by definition to be utterly mindless: lacking in intentionality, lacking experientiality, acting blindly as a passive result of inert ‘causes’ rather than ever for positively adopted ‘reasons,’ and so forth. That questioning this premise is deemed to be “embarrassing” for the advocate of agent causation, while no “embarrassment” is supposed to come for the physicalist from the complete and absolute absence of any working account of how anyone might even begin to try to derive those properties from ingredients supposed to be wholly lacking in them (see essays I–V in this series), is less a compelling argument than it is a striking illustration of intellectual fashion’s double standards. The non–physicalist is supposed to feel like a white daughter of the 1950s confessing to her parents that she thinks she wants to date a black man: “Why, you just ought to feel ashamed of yourself!”—embarrassed for even having had the thought.

A final note: It is not my purpose here to positively “prove” that metaphysical free will does, in fact, exist—that would be too tall an order for an essay even several times this length; not least because even if consciousness is an irreducible phenomenon in its own right, as I argue, this still leaves open the hypothetical possibility that consciousness could be “determined” according to its own unique kinds of rules. The question I’m concerned with here is more simply: if you do think your internal experience presents itself as possessing metaphysically free capacities for self–determining choice (and even if you don’t, that simply doesn’t change the fact that a large percentage of people very obviously do), does “science” give you overriding reasons to conclude that this sensation is nothing more than an illusion? If you really don’t share that sensation, or find any interest in the idea that the “scientific” facts may in fact still leave abundant room for the empirical possibility of it, then you can still derive value from this essay by reading them in light of the following question: “If we really should throw out the ordinary concept of free will, should that be for philosophical and conceptual reasons, or because of specific facts which rule out that possibility that have been demonstrated empirically true by ‘science’ without any involvement from or filtering through philosophical interpretations at all?” Either way, my argument is that—contrary to claims popular in some corners—“science” actually demonstrates strikingly little of direct relevance to these questions, either by “empirically” proving the inefficacy or epiphenomenal nature of conscious intention specifically or (as has been argued across the several entries preceding this one) by “empirically” proving the truth of the claim that subjective conscious intentionality and experience is either an epiphenomenon of or just “identical to” the brain qua physical object in general: as William James wrote, this is as much a metaphysical hypothesis superimposed upon the mere empirical finding of “concomitance” as any other.

 _______ ~.::[༒]::.~ _______

If asked to describe our own internal experiences of our own “free will,” I think most of us would agree that our phenomenal experiences themselves indicate a difference between on the one hand our choices, and on the other hand our urges. Even those of us who feel as if we truly do have the power to ‘really’ choose between alternative decisions at the very moment of choice very clearly do not feel that we choose the moment when an urge submits itself to our conscious awareness and forces us to make a decision about it. I may believe that I have the power—at any given moment—regardless of the preceding physical states of myself and the world around me—to choose whether to continue writing this post, or to stop and take a break—without believing in any way that I control the moment at which the urge to get up and do something else will become present within my experience. If exploring the phenomenology of free will (i.e., how it ‘appears’ from the ‘inside’) is our goal, then clarifying the distinction between decisions and urges is one of the most rudimentary opening clarifications we ought to make right at the very beginning. And if you want to say that the way people experience the sense of possessing free will is inaccurate, then you have to get what it is that they feel they experience right first.

But this is a distinction that can only be made “from the inside.” Scientific investigation can identify physiological correlates with sensations of urge or the feeling of making a decision, but it can’t have any idea at all what sensation it is identifying physiological correlates with, unless it relies on trying to correlate that physiology with a subject’s reports about their own internal subjective states. Without the intrinsic involvement of a subject doing their best to report what their own conscious experiences ‘feel like,’ scientific investigation of brain physiology can’t know what part of an individual’s concept of consciousness it is identifying aspects of. It can identify, for example, that activating a certain part of the visual cortex leads in turn to the activation of the amygdala and then leads to sweating, release of adrenaline, and so forth—but unless it relies on the subjective reports of an individual describing their own internal states of consciousness, it can’t know that “when you turned that thing on, it made me start flashing back to childhood traumas, and visualizing that made me start to panic.”

Without continually referring back to the subjective reports by subjects of their own internal states of consciousness, it would have been a mystery why activating that same location in a second subject instead led to a release of endorphins—until they said, “when you turned that thing on, it made me start flashing back to how wonderful my childhood was, and visualizing that made me feel comforted”—and from here, we could infer that the region of the brain we were activating was probably linked to childhood experience. Now you may be thinking we could have found that the region of neurons in question which were being activated in the experiment are in some way connected to neurons that grow in either a healthy or deformed way during childhood, without relying on any subjective reports—but we just as easily could go on to find that in an equal number of subjects with childhood trauma, stimulating that part of the brain doesn’t lead to activation of the amygdala at all. And once again, we simply would have no way to form a clue about what was happening unless the subject said, “My childhood was so traumatic that I learned how to just deaden myself and stop feeling emotions at all.”

Again, you might propose that we could have identified historical differences between those subjects with traumatic childhoods in which the amygdala was activated and those with traumatic childhoods in which it was not which correlated with their “learning to stop feeling emotions,” but for any such account you might give, there is always some report that could be given that would potentially undermine that whole account (we’ve just given two examples), and you would simply have no reason to think that this had been the right place to look if you had not had the subjective report itself. Why wouldn’t you have hypothesized that the subjects with traumatic childhoods whose amygdalas did not activate were simply born with less interconnected amygdalas—a physical fact that would have nothing to do with their “response” to the trauma they went through? If you assume that the proper, correct answer would have to be in terms of the purely physical properties of the brain and that the “response” could not be a directly conscious choice of action on the part of the subject, but could only be their way of describing how what their brain, qua physical process, was doing ‘felt like’ after the fact, then you are making the philosophical assumption that epiphenomenalism is true. But there are overwhelmingly good philosophical reasons to think that epiphenomenalism is false—if it were true, then we wouldn’t even be capable of making ‘reports’ about what our conscious states of experience ‘felt like,’ because the brains that produce both our thoughts and our capacity for verbal speech would have none of the capacity for causal contact with those epiphenomenal experiences that would be required in principle for them to “know” anything whatsoever about them at all.
Likewise, if your answer is that it is merely an epistemological limitation that we have to rely on the reports of subjects because we can’t possibly have all the relevant physical facts, then like it or not, this is an inadvertent confession that the idea that the physical facts (construed, again, in purely mechanistic terms) would be sufficient to explain the phenomena in question is not something you “empirically” know.

Perhaps some day we will gain the ability to reconstruct what someone is visualizing simply by taking a measurement of their brain state and mapping that activity out to convert it directly into an image—but if we ever reach that day, it will only be because we spent a long stretch of time learning which brain states correlate with which aspects of subjectively experienced qualitative imagery by relying on subjects’ subjective reports in order to establish the brute facts about these correlations. This returns us, of course, to the general points established in the core philosophical part of this series in essays (IV) and (V) about qualitative experience and intentionality: physical states, qua physical states, simply are never “about” anything at all. And knowledge of physical causation cannot, in principle, give us direct knowledge about the qualitative state of subjective experience. Simply looking at the brain as a physical object doesn’t give us any reason to think any sort of stream of experience is taking place “inside of” it at all, and if we did not have the example of our own first–hand case to inform us that there appears to be some sort of correlation between subjective states of experience and physical states of brains, we would have no reason to infer that any experiences were even happening to start with.

Not only can purely physical information not tell us what someone is experiencing, unless we already have a brute set of correlations between physical states and subjective experiences to go on which was necessarily established either by (1) first–hand knowledge of our own consciousness or (2) second–hand reports from someone else about their states of consciousness to begin with (the “knowledge argument” about “qualia”), but purely physical information cannot tell us what someone is thinking “about,” either (the “knowledge argument” applied to intentionality—see: What is it Like to be Human (Instead of a Bat)? by Laurence BonJour, or Bill Vallicella’s summaries of it here in Intentionality Not a ‘Hard Problem’ for Physicalists? and here in BonJour on Intentionality and Materialism). Consciousness is fundamentally and thoroughly composed of both phenomenal experience ‘on the edges’ and intentionalistic thought (which is also phenomenal) ‘at its core’—but physicalism cannot account for so much as the existence of either; and knowledge about the physical details of physical states cannot give us knowledge about the details of either[2], either—not unless we first rely on a subject’s verbal reports about their own internal state, and then simply accept whatever correlations we might happen to find between physical states and second–hand reports of intentional and experiential states as brute facts. But it simply is not clear what it is that finding these correlations establishes—not until we start the incredibly complicated intellectual work of trying very carefully to interpret them—which is fundamentally a philosophical project.

 _______ ~.::[༒]::.~ _______

 The story of the “scientific” refutation of the possibility of free will begins in the 1980s, with the studies conducted by Benjamin Libet. Though now more than three decades old, these experiments still carry the bulk of the weight in “scientific” analyses of the implausibility of free will. In his 2012 book Free Will, Sam Harris (New Atheist and practicing neuroscientist) writes: “The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move. Another lab extended this work using functional magnetic resonance imaging (fMRI): Subjects were asked to press one of two buttons while watching a “clock” composed of a random sequence of letters appearing on the screen. They reported which letter was visible at the moment they decided to press one button or the other. . . . One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You then become conscious of this “decision” and believe that you are in the process of making it.”

Daniel Wegner is among the most prominent of the social psychologists who have continued experiments aiming to prove this general sort of idea. In his discussion of Libet’s experiments in his 2002 book The Illusion of Conscious Will, he explains the picture of the mind that he believes the Libet experiments prove: “Does the compass steer the ship? … [not] in any physical sense. The needle is just gliding around in the compass housing, doing no actual steering at all. It is thus tempting to relegate the little magnetic pointer to the class of epiphenomena — things that don’t really matter in determining where the ship will go. Conscious will is the mind’s compass.” In other words, preceding unconscious brain events are the cause both of our future behaviors and of our later, illusory experience of making the “choice” of those behaviors over others—it isn’t just that our experiences of choice are determined; they’re also completely superfluous to the chain of events that leads to the very action we associate with the experience of choice.

How well–justified is this claim? A variety of criticisms could be, and have been, leveled at these experiments, but I want to focus on just one of them—the one I consider most obviously decisive and fatal. I argued earlier that one of the most rudimentary distinctions we should make if we want to think seriously about the way ‘free will’ seems is between urges and decisions. The set–up of the Libet experiments loses this distinction completely.

To repeat the explanation in my words, the Libet–type experiments first have a subject sit down in front of a clock, while hooked up to an EEG (or fMRI). Then, they explicitly instruct that subject to perform some simple motor activity at random—absolutely nothing is at stake in the decision; there is no goal to achieve, there are no values or variables to weigh or choose between, and no number of button presses or wrist–flicks is too high or too low. There is no way to “win,” there is no way to “fail,” and there are no alternative outcomes in the experiment for the subject to pick between. With absolutely no goals or constraints, subjects in these experiments are told to sit back and perform a perfectly purposeless motion at random for which they have absolutely no reason in principle to choose one moment over another.

Stop right there.

What do you expect might happen if you were to do that? Imagine being told to spend several minutes flipping your hand back and forth from palm–up to palm–down at random. What do you think that might feel like? For just a moment, allow yourself to pause and imagine it—or even perform the experiment (presumably minus electrodes)—before continuing.

The shocking, startling discovery these experiments report is that when test subjects told the experimenters what the clock read when they “decided” to move, a type of brain activity the researchers designated the “readiness potential” was already detectably building in the milliseconds leading up to the moment they became aware of having made the “decision” to move.

Quite simply, without even touching any of the many other angles of critique that these arguments face: Why should anyone interpret this as the subconscious generation of the decision itself to begin with? The answer is plain and straightforward: they shouldn’t. What does it feel like when you set the general intention to perform a purposeless movement at random? It feels like setting the intention to sit back and allow the urge to move to randomly appear, and then waiting for it—and then moving when it occurs. Does that not in fact feel exactly like waiting for a physical “urge” to appear before acting on it? So why should the fact that a kind of brain activity precedes the decision in this very peculiar kind of case lead us to even suspect, on this basis, that decisions in general are determined by preceding unconscious brain activity?

If anything, even for those of us who think that the nature of first–hand experience offers prima facie justification for the belief that we might have the metaphysical kind of free will, sheer introspection alone should have led us to expect something like this: when we ride a bike, or type a sentence on a keyboard, or learn to play the guitar, it feels one way to initially learn how to perform the action, and it feels another way to perform once having learned. When I am learning to play the guitar, it feels as though I have to consciously deliberate each distinct individual action of placing my middle finger on the third fret of the bottom E string, my index finger on the second fret of the A string, and my ring finger on the third fret of the top E string; and then to move my middle finger to the second fret of the G string, my index finger to the second fret of the top E string, and my ring finger to the third fret of the B string; and then to move my middle finger to the second fret of the A string and my ring finger to the second fret of the D string. (Tedious!) With a little more practice, this begins to feel like “setting the intention to strum a G chord,” and then “setting the intention to strum a D chord,” and then “setting the intention to strum an Em chord” and allowing my hands to automatically fill in the rest—and with a little more practice, it simply feels like “setting the intention to play Freebird.” The more, in other words, that I consciously practice these motions, the less it feels as though I need to consciously deliberate each individual step—the more the execution of the action becomes “automatic.” If neuroscience were to find that conscious deliberation plays no role in the motion from G chord to D chord to Em when Gary Rossington plays the chords to Freebird, would that undercut anyone’s ordinary concept of free will in any way at all? I think not.
And we shouldn’t look at cases where people are explicitly asked to sit passively and respond to random urges any differently—setting the general intention “to play Freebird” in advance and then allowing the details to physically carry through is no different in essence from setting the general intention “to flip my arm over at random, whenever I feel the urge” and then allowing those details to physically carry through. I’ve already done the important intention–setting at the point at which I decided either to begin playing Freebird or to walk into Libet’s lab and passively follow his instructions. Anything that follows is quite simply categorically different—even at the most basic, subjective phenomenal level—from the kind of decision I made, at the beginning of either of these processes, to begin them.

There are, again, several other lines of critique that could be taken against drawing determinist implications from the Libet experiments—not least among them that Libet himself went on to argue that we do in fact have the capacity either to allow the readiness potential to go through or to “veto” it, even within the context of his own peculiar kinds of experiments (which in and of itself sounds like confirmation that the “readiness potential” does not measure the decision itself but only a physical urge which is later decided upon). But I neither want nor need to explore that issue in the detail required here: anything further than this is simply an attempt to starve a dead horse to death. No matter what might hold true about the details of Libet–type experiments, there is quite simply no reason or justification for generalizing from what happens when people are explicitly told to sit back and allow themselves to passively act on urges at random to what we should expect to hold true in any other sort of conscious state where decision plays a more active role—even if the situation is as dire, or more so, in those conditions than Libet thought. The fact that a point this basic was so deeply missed by a study still hailed today as one of the most powerful pieces of “evidence” for the “scientific” impossibility of free will should give you a solid impression of the quality of reasoning we’re dealing with when “science” is claimed to prove answers to philosophical questions about the nature of mind.

In conjunction with his claim about Benjamin Libet, you may have noticed that Sam Harris immediately followed up with a statement about “another lab [that] extended this work using functional magnetic resonance imaging (fMRI)….” The lab he refers to is Chun Siong Soon’s[3], and the summary of the 2008 study published in Nature Neuroscience can be seen here. While the activity measured in this study was still, as before, purposeless, with no goals or constraints, it did change one substantial thing. According to the way Soon (et al.) summarized their own research—in a summary paper titled “Unconscious Determinants of Free Decisions in the Brain”—“There has been a long controversy as to whether subjectively ‘free’ decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness.” The point this new study was supposed to add to the already–existing debate was that it established the capacity of these scientific measurements to predict not just the general timing of a single choice, but now in fact which of two—count them, two—equally meaningless options the subject would choose. And the conclusions we are supposed to draw from this are, again, wide–reaching—returning to the summary from Harris: “One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You (only) then become conscious of this “decision” and believe (falsely) that you (“you”) are in the process of making it.”

What do the particular new findings of this study add to the picture? There is one thing that neither Harris’ reference to the study, nor Soon (et al.)’s own summary of it in Nature Neuroscience, will clearly tell you—quoting Alfred Mele: “ … the predictions are accurate only 60 percent of the time. Using a coin, I can predict with 50–percent accuracy which button a participant will press next. And if the person agrees not to press a button for a minute (or an hour), I can make my predictions a minute (or an hour) in advance. I come out 10 points worse in accuracy, but I win big in terms of time. So what is indicated by the neural activity that Soon and colleagues measured? My money is on a slight unconscious bias toward a particular button—a bias that may give the participant about a 60–percent chance of pressing that button next.”
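Mele’s point here is purely statistical, and a toy simulation makes it concrete. (Everything in this sketch is an illustrative assumption, not data from the Soon study: the button names, the 60–percent bias, and the trial count are all hypothetical.) A predictor that merely exploits a slight unconscious bias toward one button scores about 60 percent, while a fair coin scores about 50:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulate(trials, bias):
    """Simulate a participant who presses the left button ('L') with
    probability `bias`, and compare two predictors: one that always
    guesses the biased button, and one that flips a fair coin."""
    hits_bias = hits_coin = 0
    for _ in range(trials):
        press = "L" if random.random() < bias else "R"
        if press == "L":                  # bias-exploiting guess: always "L"
            hits_bias += 1
        if random.choice("LR") == press:  # fair-coin guess
            hits_coin += 1
    return hits_bias / trials, hits_coin / trials

bias_acc, coin_acc = simulate(100_000, 0.60)
print(f"bias-exploiting predictor: {bias_acc:.1%}")  # roughly 60%
print(f"coin-flip predictor:       {coin_acc:.1%}")  # roughly 50%
```

Nothing in this simulation involves the participant’s decision being fixed in advance; the 60–percent hit rate falls out of the bias alone, which is exactly Mele’s suggestion about what the measured neural activity may amount to.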

Notably, this 60–percent figure is a drop from a predictive accuracy of 80–90% in studies where what is being predicted is the moment chosen to commit a single predefined action—such as Libet’s wrist–rotating. Even with the increased understanding of neurophysiology developed over the past few decades, and even with refined neuroimaging techniques, the predictive power of the “readiness potential” in this study drops by some 20 to 30 percentage points—down to barely above chance—with even a slight shift in the design of the experiment toward something that comes even marginally closer to resembling the kinds of decisions in which we actually deliberate—and feel as if we deliberate freely—over a choice.

But yet again, even if the predictive value of the “readiness potential” in these expanded cases were 100%, why should even that concern me? When I go into Soon’s laboratory, I walk in deliberately setting the conscious intention, in advance, to sit back and think about nothing other than letting myself push one or the other button at random—and absolutely nothing weighs on the decision; I am by definition putting myself in the peculiar conscious state of waiting to act on a random urge. Even with this meaningless “choice” between two absolutely meaningless options added to the scenario, it doesn’t even feel like the kind of deliberation that subjectively presents itself as containing the power to do otherwise. I am setting the conscious intention to sit back and act randomly on one or the other urge—which, no less, it even feels as though I am willing the generation of in the first place. In other words, the act of choice that actually seems to present itself as possessing the metaphysical kind of freedom is my decision to activate a program that says to my body something like: “you, allow meaningless urges to generate at random,” and to my brain something like: “and you, be prepared to act on them after they appear.”

The skeptic might scoff at this description of how I think things ‘seem’ ‘from the inside,’ and ridicule my ‘assumption’ that how things ‘feel’ could possibly be any indication of how reality actually is. But if so, this would exactly illustrate the central fallacy I’ve accused him of: the use of experiments like Libet’s to argue that the phenomenology of decision–making has been “scientifically” proven illusory simply does not treat that phenomenology seriously in the first place—and given how poorly, unfairly, and inaccurately it does so, we are in turn left with no reason to take seriously the claim that it proves that phenomenology delusional.

 _______ ~.::[༒]::.~ _______

A future post will address the conceptual issues involved in the coherency of the basic notion of metaphysical free will, as well as the question of what the practical implications of reaching either conclusion might be, and will add some more details on the “science” of free will beyond Libet by discussing Daniel Wegner’s The Illusion of Conscious Will. But the general point involved in both the conceptual and the “scientific” rejection of free will is the same: its plausibility stands or falls with the fall or stand of the premise of “atomistic physicalism.” I have argued throughout this series not only that we have far less epistemic justification for believing “atomistic physicalism” to be true than is often implied—“science” most certainly has not empirically proven it; what is actually doing all the work is a set of philosophical assumptions through which certain findings of science are interpreted, or through which it is assumed that if a principle performs a useful methodological role in scientific investigation, then it must represent a universal metaphysical principle holding inviolably in all places and times—but also that there are systematic reasons for thinking any such account must fail in principle, in just the same way that attempts to draw three–dimensional figures on flat two–dimensional canvases must fail, no matter how creative the “empirical” attempts to accomplish them might conceivably become.

It is not so much that neuroscientific experiments like these convince people, on the sheer basis of the data they’ve collected alone, that metaphysical free will is an empirical impossibility. It is rather that they hold the philosophical view from the outset that it can’t be possible, because the mind just can’t be like that—and with this premise held in place, it is simply a matter of filling in the details about how choice is determined. But this underlying view is in turn far from proven by “science” either—recall the words of William James: “If we are talking of science positively understood, function can mean nothing more than bare concomitant variation. When the brain–activities change in one way, consciousness changes in another; when the currents pour through the occipital lobes, consciousness sees things; when through the lower frontal region, consciousness says things to itself; when they stop, she goes to sleep, etc. In strict science, we can only write down the bare fact of concomitance; and all talk about either production or transmission, as the mode of taking place, is pure superadded hypothesis, and metaphysical hypothesis at that, for we can frame no more notion of the details on the one alternative than on the other.”

The idea that the possibility of metaphysical free will is ruled out by “science” ultimately rests on an interpretation of the scientific data drawn from the assumption, held from the outset, that the conception of the nature of consciousness required to ground free will is ruled out by “science”—but this, too, quite simply rests on extrapolations from what “science,” properly understood, actually informs us is so, extrapolations performed with the crucial aid of philosophical assumptions which we have every right to subject to philosophical criticism. Given that it cannot actually be demonstrated—the only kind of evidence that does or could exist for it directly is in principle subjective, and a skeptical hypothesis can always be constructed on which, for example, the actions of consciousness are simply determined by a unique and inscrutable set of non–physical laws—metaphysical free will is not the place I would choose to stake the debate against physicalism.

However, in a cumulative case, this series has argued that physicalism cannot account for any aspect of what we actually are, because it cannot, in principle, account for any aspect of the conscious experiences that we both exist within and infer the very existence of a physical universe through. Consciousness is essentially composed of qualitative experiences through–and–through, with intentionalistic states of conceptual thought “about” that world of experiences at its center—and an extremely important fact revealed about these states by the direct data immediately available within them is that they come in diachronically unified, fluid, and unbroken streams. Our experiences, quite simply, flow, and our experiences themselves reveal this to us directly as an immediate piece of data about them—and it is in this self–evident experiential unity across time that our personal identities are found.

Together, these three aspects comprise everything that we are: temporally unified streams of subjective, qualitative experience engaging in representational, conceptual thought “about” the qualitative world we feel, taste, and qualitatively experience around us. And if physicalism supposes by assumption that reality is built of nothing but atomistic forces which have no properties other than those responsible for predisposing them toward various inert patterns of blind motion through space, then it defines the world in a way that renders it incapable in principle of accounting for the qualitative nature of these experiences, the intentionalistic nature of these thoughts, or the temporal unity of the fluid stream composed of both.

Yet the image which physicalism presents to us of the ultimate nature of the world—the properties which physicalism attributes to the physical world, and restricts the physical world to possessing—is in fact an idea which we formulated as an intentionalistic, representational concept purely to explain certain aspects of the nature of these very qualitative experiences. Physicalism, in postulating that blind and inert physical processes are the sole bedrock ingredient making up reality, cuts itself off in principle from any capacity whatsoever to explain the existence of the very phenomena which ever caused any of us to have the intentionalistic thought to posit the conceptual idea that blind and inert physical processes lying somewhere inscrutably behind our qualitative experiences even exist in the first place. If our subjective, first–hand conscious experiences also indicate that consciousness is capable of making active choices between truly metaphysically open and real alternatives, and the notion that the world is built out of nothing other than blind and inert physical processes invalidates this possibility, then this is simply additional circumstantial evidence that physicalism ultimately eliminates everything that makes us what we are and grounds what most of us care about. In any case, the crux will follow from where you come down on these preceding philosophical questions; and once you have those settled, “science” actually adds surprisingly little—in fact, practically nothing—to the picture, in striking contrast to the extravagance of many popular claims.

 _______ ~.::[༒]::.~ _______

[1] Alfred Mele thinks that a particular, very strong kind of study—the likes of which has not in fact been conducted yet—could potentially rule out the possibility of metaphysical free will, by proving that decisions are in fact 100% predictable well in advance of the individual’s conscious awareness of his decision, and he explains in comments that this is why he asked that his publisher change the title of his 2014 book Free from “Why Science Can’t Disprove Free Will” to “Why Science Hasn’t Disproven Free Will.” There is a certain kind of truth to this, and I think Mele chose the better rhetorical route as well. But notice: even if actions are 100% predictable from preceding causes, this still does not rule out the possibility that it is freely activated conscious intentions which in the end finalize the decision to act—for it could still simply be that people’s free decisions are easy to guess when you understand the matrix of conditions within which they are choosing between the options facing them. Put it this way: if we can predict that 100% of people who are starving and given the option to eat or starve to death are going to choose the former, or that 100% of those given a choice between filet mignon and rotting roadkill possum will choose the former, our ability to predict their decision still does not prove that the decision they made was not made with a metaphysical capacity to have chosen otherwise. If it turns out that we can measure the urges forming in a person’s brain before these percolate up into conscious awareness, and from these predict which urges people are going to act on, this may be different in degree but not in kind from situations like those just mentioned.
Impressions to the contrary result from the fact that determinism entails that events should be predictable in principle: the fall of a row of dominos is determined, and therefore it is possible in principle for me to predict in advance exactly when and how the seventh domino will fall after I push the first one with a given velocity and angle of physical force. But this does not mean that determinism and predictability in principle [PIP] are equivalent. PIP is true if determinism is true, but the converse does not hold—in other words, even though PIP would follow from determinism’s being true, the truth of PIP would not entail that determinism is true, because determinism is not the only way to get predictability.

This is analogous to saying: if (claim A) someone broke into my home and stole my keys last night, then (claim B) this morning my keys would not be where I thought I left them on the table yesterday; my keys are not where I thought I left them on the table yesterday (claim B); therefore (claim A) someone broke into my home and stole my keys last night. The step from claim B to claim A is fallacious, even though the step from claim A to claim B is valid, because claim A is not the only way that claim B could be true. Claim B could also be true if, for example, I did not in fact put my keys where I thought I put them on the table yesterday, but instead put them somewhere else and have misremembered.
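The logical form of both the domino inference and the keys inference can be written out schematically (the letters are my own shorthand, not the author’s: D for “determinism is true,” P for “behavior is predictable in principle”):

```latex
\begin{align*}
&\textbf{Valid:}   && D \rightarrow P,\quad D \;\therefore\; P
  && \text{(determinism entails predictability in principle)}\\
&\textbf{Invalid:} && D \rightarrow P,\quad P \;\therefore\; D
  && \text{(affirming the consequent)}
\end{align*}
```

The second pattern is the fallacy of affirming the consequent: establishing P empirically, to whatever degree of confidence, leaves open every other hypothesis that also entails P.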

To return to the case: it could simply be that people so rarely choose to act contrary to their impulses that prediction of behavior is almost always possible by measuring the impulses forming subconsciously anyway. This might make the possibility of the existence of metaphysical free will ‘trivial’ for practical intents and purposes, yes—but it still would not settle the metaphysical question itself, any more than an empirical limitation on our ability to predict future behavior (say, because there are certain levels of activity in the brain which we just can’t accurately measure) would settle the metaphysical question in favor of the existence of free will (although it might similarly make the possibility of the truth of determinism ‘trivial’ for practical intents and purposes). Empirical data can only settle the question of how predictable behavior is—but whichever way that question is answered, the answer simply doesn’t settle the question of why behavior is or isn’t predictable, or whether any part of the true, full explanation of that answer does (or could) involve metaphysical free will.

I think Mele chose the right rhetorical path, because saying that science “can’t” disprove free will might sound to an uninitiated audience like a retreat—a reaction to science seeming to have disproven free will, insofar as empirical evidence could possibly make the potential existence of free will look implausible—whereas Mele’s choice of title shifts emphasis onto the fact that the scientific evidence hasn’t even come close to properly doing this yet. But even in the most severe cases, the metaphysical question of free will still stands logically independent of anything empirical evidence is capable of revealing about the matter one way or another—even where certain empirical answers could make certain metaphysical answers either “look implausible” or just seem irrelevant for the particular practical intents and purposes we happen to be most concerned about. The question may get less and less interesting, as some of the reasons for considering it interesting would progressively vanish the closer empirical evidence came to confirming these kinds of claims; but even if we go all the way to a hypothetical extreme which evidence has come nowhere close to confirming yet, and the question becomes the least interesting it can get, this still doesn’t alter the fundamental logical independence of philosophical interpretive lenses from the empirical data they filter. The philosophical interpretive lenses still determine how that empirical data is filtered, and still cannot be strictly determined by the empirical data itself.

[2] I don’t intend to endorse knowledge arguments as stand–alone arguments here. I explained in a recent entry why I strategically avoided framing my arguments against the physical reducibility of qualitative subjective consciousness in terms of the “zombie argument.” I think knowledge arguments face similar strategic issues—so even though I think they go through, they do so, for the most part, because I accept these other supporting arguments. In other words, I think the points argued by these arguments are sound—it’s just that, dialectically, they’re not “where it’s at.” I add this footnote because I don’t want anyone to get the impression that they are, in and of themselves, my reason for thinking these points hold; rather, because I endorse all the arguments I have made up to here, I think the truth of these arguments follows.

[3] In the Harris excerpt I read, a mention of the Soon studies followed the break after this paragraph. He may have been referring to the studies of Haggard and Eimer in this part which preceded the break, but in any case, Soon’s is one of the most recent modern “replications” of this kind of finding.

_______ ~.::[༒]::.~ _______

Further Reading:

Free will debates: Simple experiments are not so simple

A (Philosophical) Zombie Survival Guide

(Note: this entry is still in rough draft form.)

In my own essay IV, where I argued my position on why subjective experience can’t be given any physicalist analysis in principle, I avoided any mention of “zombie arguments” by name completely. There are enough confusions about what the purposes of zombie arguments are, and how they are supposed to achieve them, that I thought I could make my own points—which are in the end the same points that the “zombie arguments” ultimately get at—more efficiently without invoking them and then setting myself the task of cleaning out all of their baggage. As a supplementary note for any interested readers, however, I’d like to do so in this separate entry. In part, this will be a clarification of a few things that “zombie arguments” are and are not supposed to do; in part, it will be a clarification of where exactly I see them fitting into my own argument.

To begin, I want to offer a quick overview of the major misconceptions. For now, you’ll just have to take my word for it that they are, in fact, misconceived—the reasons why will become clearer and clearer as my explanation proceeds into what the arguments actually try to do; what I think they can do much more effectively if we make certain small tweaks to the way the argument is presented, tweaks that can get us around some massively irritating, excessively abstract technical quagmires the argument often ends up in; and which arguments I think get down to the root of what the “zombie arguments” actually want to say without those problems. In short, I think that the zombie arguments are sound—but I think they are a terrible strategic choice for making the conclusion they aim to prove “make sense” to the person whose position they reject. The argument has become hopelessly abstract, muddled by sub–arguments about things like the relationship between logical and metaphysical “possibility”—things I’m willing to bet that no one on Earth actually wants to talk about, and things I’d wager no one—probably not even a philosopher—can actually reach any clearly understood conclusions about by thinking through. But I think the argument doesn’t need to be this abstract. With only a few minor tweaks and clarifications, the issue can very easily become a lot more straightforward—and easier to understand—while still doing everything that the standard zombie argument ordinarily tries to achieve.

If conceptual clarity—turning “knowledge” into “understanding,” in P. M. S. Hacker’s words—is the goal of philosophy, then philosophers have been failing their job atrociously. Philosophy does, of course, require the invention of new and sometimes complex terminology in order to clarify the language we use to talk about concepts; ordinary language is often muddled and imprecise, and it is sometimes the needed task of philosophy to recalibrate it with clearer distinctions than everyday language ordinarily has—but it should aim for as much simplicity and clarity as possible except where abstract terminology is actually needed to make a point. If philosophy can’t learn to communicate clearly with non–philosophers, then the fault is not all on non–philosophical disciplines or the laity if the latter end up ignoring philosophy’s insights. And in ordinary formulations of the zombie argument, technical complications—much like the walking dead themselves—well outlive their actual usefulness.

_______ ~.::[༒]::.~ _______

⌦ One of the most common misconceptions is that these arguments somehow either imply, try to prove, or rely on the assumption that zombies should actually be possible—taking the word “possible” to mean “actually possible in our particular world, under the peculiar laws of our particular world, whatever they are.” Not only is this misconception widely prevalent in countless “amateur” discussions I’ve seen firsthand (someone recently told me: “You also take as granted that zombies could exist, and behave exactly as if they were sentient, without being sentient. We have no experimental data to back this. …  I don’t think that zombies can exist. But … that is an experimental issue.”), it also exists in the literature.

⌦ The most significant complication that results from the way the argument is formulated is that, at least as Chalmers presents it, it requires justifying a premise which Chalmers states as follows: “If it is conceivable that there be zombies, it is metaphysically possible that there be zombies.” This, in turn, invites a complicated debate over the relationship between prima facie conceivability, logical conceivability, logical possibility, metaphysical possibility, and any number of related concepts that may or may not collapse into one another—a debate that muddles the entire issue in hopelessly arcane abstraction. I doubt that anyone—even a philosopher—can reach clear conclusions in their mind by reasoning through concepts like these, and I’m equally confident that no one actually holds any intrinsic interest in that sort of conversation in the first place—but I think the argument can be reformulated, with only slight changes, in a very simple way that entirely avoids any need for any premise of this sort.
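For reference, the standard formulation that generates this quagmire runs roughly as follows—this is a schematic rendering of Chalmers’ argument, with P standing for the conjunction of all physical truths about the world and Q for some phenomenal truth (say, that someone is conscious); the symbols are introduced here purely for convenience:

```latex
\begin{enumerate}
  \item It is conceivable that $P \wedge \neg Q$. \quad (zombies are conceivable)
  \item If it is conceivable that $P \wedge \neg Q$, then it is
        \emph{metaphysically possible} that $P \wedge \neg Q$.
  \item If it is metaphysically possible that $P \wedge \neg Q$, then
        materialism is false.
  \item Therefore, materialism is false.
\end{enumerate}
```

It is the second premise—the bridge from conceivability to metaphysical possibility—that invites all of the arcane debate just described, and it is exactly the premise my reformulation will aim to do without.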

⌦ One unnecessary complication that follows from the way the argument is formulated is that it leads to the assumption that we have to imagine the possibility of zombies that behave exactly like the human beings that exist in our world. A corollary of this confusion is the deeper confusion that the zombie argument could only succeed at establishing its point if epiphenomenalism were true—that is, the critic reasons that we could coherently imagine “zombies” who behave exactly as we do while lacking conscious experiences or intentions, in the way required for the argument to go through, only if conscious intention and experience played no role in our behavior.

⌦ Finally, the most basic objection is that the argument merely begs the question, because if physicalism is true, then zombies aren’t ‘metaphysically possible’ after all. I want to set the tone for that discussion with a quote from Dmitry Sepety: “Typically, begging the question is described as an “informal fallacy where the conclusion that one is attempting to prove is included in the initial premise of an argument, often in an indirect way that conceals this fact”. However, such a definition creates the problem of interpretation: how are we to understand the statement that in an argument, the conclusion is included in the initial premise in an indirect way that conceals this fact? The problem arises because in a sense, any valid argument (an argument where the conclusion logically follows from the premises) contains the conclusion in the conjunction of its premises; otherwise, it would be impossible to (validly) draw the conclusion. Logic is not a hocus-pocus: you cannot draw a “rabbit” from a “logical box”, if it is not there. That is why some logicians say that all valid arguments beg the question. However, if this is the case, then, obviously, begging the question cannot be a fallacy. On the other hand, if begging the question is a fallacy, then for an argument to beg the question, it is not sufficient that its conclusion is contained in the conjunction of its premises—some further conditions are needed. What are these conditions?”

_______ ~.::[༒]::.~ _______

To start with, arguments from the conceivability of zombies have nothing to do with any notion that zombies should actually be possible in our actual world, given whatever particular laws or kinds of entities it actually has. This particular kind of “possibility” is what Chalmers refers to as either “natural possibility” or “nomological possibility” (the term “nomological” refers to the laws of nature). In Consciousness and Its Place in Nature, Chalmers plainly states: “Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature.” It couldn’t be any clearer than that.

“But the argument holds that zombies could have existed”—and pay very close attention to the phrase he follows this up with: “perhaps in a very different sort of universe.” The argument does not make any assumptions either about what kind of universe we are actually in, or about what kind of properties the stuff we think of as “physical” could perhaps surprisingly turn out to possess. It just says we can imagine worlds with physical properties like ours without being thereby logically compelled to assume that the entities that exist in those worlds have conscious experiences. And we most certainly can do this—even if it should turn out that all we’re really doing when we do this is abstracting away, from the material entities our world actually has, whatever properties it is in virtue of which they actually do end up producing consciousness.

In contrast, consider the relationship between the macro–behavior of water and the micro–behavior of molecules of H2O: you can’t conceive of a world that has H2O that behaves like the H2O in our world without thereby imagining a world that has water that behaves like the water in our world. If you imagine a world that has H2O that behaves exactly like the H2O in our world, you are logically compelled to imagine that the water in that world behaves exactly like the water in our world—because the fact that molecules of H2O form loose bonds (for example) just is demonstrably identical to the fact that solid objects ‘sink’ when placed in water—by slipping through the gaps in those molecular bonds. To spell this out in the most simplified possible terms, I can bring you with me to a giant chalkboard and draw a giant close–up view of the molecular interactions between molecules of H2O and molecules of some other substance slipping through the gaps between them. Then, I can bring you some twenty yards or so back away from the chalkboard, and you will see that what I have just drawn is literally “something sinking in water”—and you will see that there can be no other way.

On the other hand, I can draw all the neuronal interactions defined in external, objective physical terms I like, and no matter how detailed that drawing gets, there is no distance I can stand from that drawing at which I will suddenly see a subjective, privately experienced qualitative representation of something other than those interactions themselves “inside of” those interactions. No one has even the faintest hint of a clue how to so much as even begin hypothesizing about how such a thing might be the result of inert causal interactions of any kind whatsoever—not the slightest pitiful speck of progress has been made on that question since Leibniz posed it in 1714, when he wrote: “ … perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception.”

As I wrote in my essay (IV) — The Case of the Lunatic Fish, “We aren’t merely failing to see how an explanation from tools like these could be possible; we can positively see that an explanation of a phenomenon like this with tools like these cannot be possible—in just the same way that we can see that a two–dimensional canvas is not capable in principle of allowing us to draw a three–dimensional object on its flat surface. Picture all the blind physical entities you like moving in any inert causal pattern you wish—at no point are you just going to be literally looking at a subjective private conscious experience. You don’t have to sit and contemplate the entire near–infinite combination of ways to picture blind physical forces moving through space in order to see why” — any more than you need to spend centuries drawing lines every which conceivable way on the flat canvas to justify the determined conclusion that getting any three–dimensional figure onto its surface is going to be, in principle, utterly impossible — “It’s right there contained within the very concepts themselves.”

_______ ~.::[༒]::.~ _______

Notice, again, that Chalmers is not presupposing that the world is necessarily purely mechanistic in this way. What the argument aims to establish is simply that if it were purely mechanistic, then consciousness as we know it could not have appeared. Perhaps you might think this is true, but trivial—because no real materialist actually thinks the world is just made up of blind forces exerting causal influence over each other and nothing else.

Great! Chalmers doesn’t have to disagree, and the logic of the argument doesn’t imply that he should. The argument is not meant to settle the entire investigation; it merely aims to establish a sensible starting point: “If the world were like that, then consciousness as we know it could not exist. Alright, so what does the world have to be like, then? What—upon adding it in—would alter that picture in such a way as to render it intelligible that consciousness does in fact appear?”

For his part, Chalmers goes on to consider panpsychism—in other words, the possibility that the entire physical world might be full of experience all the way down to its most fundamental core—as a possible solution; and his ultimate answer is to suggest that we must simply posit a brute set of “psycho–physical” laws determining what experiences are had in our world alongside the ordinary “physical” laws determining what causal events take place. Both of these proposals work exactly by trying to find some way to suppose that the actual natural world we live in is not as the zombie argument itself has us imagine it might conceivably have been!

I happen to think that both of these solutions also fail in principle (panpsychism for reasons I explained in entry (VII), and Chalmers’ psychophysical law posit for reasons I haven’t discussed yet) and that the only plausible answer turns out to be that the thing we need to add to the picture is just simply consciousness itself—consciousness conceived as a basic phenomenon in the same sense in which, say, the electromagnetic force is considered to be “basic” (see entry (III)), with a unique, basic defining nature of its own (represented in such concepts as subjectivity; intentionality—see entry (V); primitive identity—see entry (VI); etc.). But either way, the point here is that as soon as Chalmers, I, or anyone else turns to consideration of these possible solutions or any others, we are moving beyond the zombie argument itself.

So the idea that zombie arguments beg the question against materialism—in the sense that they make assumptions about the internal composition of the material entities in our world, and therefore rule out the possibility of the “emergence” of consciousness from “matter” out of hand because of this assumption—rests on an absolutely fundamental misconception about the nature of the argument. This is simply not what the argument is actually even trying to do.

The point is not that we can conceive of worlds with material entities which are necessarily in all possible respects exactly identical to ours and which nevertheless lack conscious entities, and that this is supposed to prove that nothing about the composition of our own material world as it is could potentially explain consciousness (not, at least, so far as the zombie argument itself goes); Chalmers is always careful to clarify that his only premise is that zombies are conceivable “perhaps in a very different sort of universe”—so it is simply false to say that the argument begs the question because materialism implies that the material entities in our world produce consciousness as a logical consequence of their intrinsic composition. The point is simply that we can conceive of worlds that have only the causal mechanistic properties that the material entities in our world have—leaving entirely aside whether or not these causal mechanistic properties are the only kinds of properties that the material entities in our world actually have—without it following as any logically entailed consequence that subjective conscious experiences therefore exist. Even if the material entities in our world do have properties other than the raw causal mechanistic properties of physics, we can conceive of a world where they have only these properties.

So the conclusion properly drawn from this is that it cannot be in virtue of those causal mechanistic properties that subjective conscious experience exists. Perhaps, so far as the zombie argument itself is concerned, there are some entirely different sorts of properties which the material entities in our world possess in virtue of which the appearance of subjective conscious experience out of such ingredients can be rendered intelligible. If so, of course, we should be able to specify what they are, or might be—and the debate can move on from there. But the actual premise here does not beg the question against the materialist’s claim that consciousness appears by logical necessity in light of some facts about the material entities in our world. The materialist himself, of course, does not think that causal mechanical properties logically necessitate the appearance of subjective conscious experiences either—and this is given by the fact that he himself thinks that atoms (or whatever micro–physical entities you may wish to substitute) are capable of possessing causal mechanical properties without having subjective conscious experiences.
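The reformulated inference just sketched can be compressed into three steps—writing M for the claim that a world instantiates all of the causal–mechanistic properties our material entities have, and only those, and C for the claim that subjective conscious experience exists in that world; again, the notation is introduced here purely for convenience:

```latex
\begin{enumerate}
  \item We can coherently conceive of $M \wedge \neg C$: a world with only
        those causal--mechanistic properties, and no experience.
  \item If experience existed \emph{in virtue of} those properties alone,
        $M$ would logically entail $C$, and $M \wedge \neg C$ would not be
        coherently conceivable.
  \item Therefore, it is not in virtue of those causal--mechanistic
        properties alone that subjective conscious experience exists.
\end{enumerate}
```

Notice that nothing here requires any premise about “metaphysical possibility,” and nothing rules out the material entities of our world having further, non–mechanistic properties.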

_______ ~.::[༒]::.~ _______

What would “moving forward from here” have to entail? First, I want to emphasize yet again that the moment I begin to discuss this question, I am moving on from the zombie argument itself. Still, I want to give the outline of my own answer.

To begin with, I think we have to establish the following premise: “ … the plainest thing in the world to see is that the question of whether something is an experience or not is absolutely binary: the answer is either “yes” or “no,” and there are absolutely no steps in–between the two. The question of when a pile of sand goes from being a “heap” of sand to becoming a “mountain,” for example, is one that has rough edges: at exactly which point in the process of removing singular grains of sand from a “mountain” has it devolved into a “heap?” At exactly which point in the process of adding singular grains of sand to a “heap” does it become a “mountain?” Reasonable people could disagree, and there is no objective way to determine the answer. Some questions are like this: the question of when a new “species” has evolved has rough edges, and evolution can address the transition from one species to another through the small, gradual steps that are involved without needing to bridge any fundamental gap of absolute difference between an original “species” and a second. But the question of conscious experience is not like this—the difference between something being a subjective experience and something not being a subjective experience is as absolute as absolute can get. There may be various degrees of complexity or sensitivity or detail between experiences, but either something is an experience or it isn’t. There is no middle ground between the two—but this also means there is no ground that can be covered in any gradual steps as a means of bridging the gaps between the two. And there is, therefore, no way to proceed gradually in steps from non–experience to experience.”

Therefore, the only kind of property we can even propose as a candidate for attributing to microphysical entities—in virtue of which subjective conscious experiences of the kind which compose our existence could coherently be supposed to “emerge”—just turns out to be subjective conscious experience itself, as Thomas Nagel argued in his 1979 article, “Panpsychism.” (Thomas Nagel is far from the only figure to argue for panpsychism, but he was one of the first in modern times to propose this particular type of panpsychism motivated by these specific kinds of reasons.) I think panpsychism is a respectable attempt to answer these problems—it just turns out to fail for other systematic reasons of its own: namely, by either entailing epiphenomenalism (which can be refuted for its own separate reasons) or else implying its own equally insoluble and incoherent version of the ordinary Hard Problem, depending on the details of how it is formulated. I conclude in that entry, Panpsychism: Panacea, or Flash in the Pan?, that “ … panpsychism doesn’t foot the bill. In fact, all it does is create the illusion of doing so by turning the tab upside down so that we might not so easily recognize the numbers that are now upside–down and on top of the tab instead of in ordinary, face–up recognizable form down at the bottom where we expect to see them. …

… We might say that the deep, fundamental conceptual gaps between “physical properties” as we have defined them (“mathematically describable geometric structures and mathematical–functionally describable tendencies towards patterns of spatiotemporal motion”) and the subjective, qualitative, phenomenal, intentionalistic (etc.) aspects of experiential consciousness are rather like the Grand Canyon. If the conceptual gaps are the Grand Canyon, then the intractable problems that appear on the ordinary materialist views which say that everything that makes up the human mind is at root ultimately ‘physical’ are the “jumping across the Grand Canyon from East to West” problem.

If panpsychism appears to actually solve any part of the problems of consciousness at all, it merely does so by leaving the Grand Canyon entirely and then returning to the plains to the West. The “jumping across the Grand Canyon from East to West” problem might have been solved by this act of relocation, sure—but now we just have the “jumping across the Grand Canyon from West to East” problem—and it turns out that that is just exactly the same problem. The relocation doesn’t actually even begin to make bridging between the two a whit more plausible or coherent at all—you just have to look East instead of West now in order to see it again…. For my part, I’m going to defend the position that we’re simply dealing with two different kinds of territories.”

_______ ~.::[༒]::.~ _______

So really, the only thing the zombie argument is supposed to prove straightforwardly is that any simplistic “identity theory” between physical and mental properties is false—from there, we’re free to try to find any alternative solution to it that we like. Does the premise that zombies are conceivable “beg the question” against the literal identity theory? 

I’m going to be lazy and just quote further from Dmitry Sepety’s excellent article, The Zombie Argument against Materialism Without the Conceivability–to–Possibility Inference again: “The zombie argument can start directly with the contention that phenomenal zombies are logically possible: there is no a priori contradiction in the idea of a zombie. Let us designate this contention as the Zombie Possibility Thesis.

Why the thesis is charged with begging the question against materialism (the identity theory)? There seems to be no other reason except that if the identity theory is true (that is, if mental states are identical with some brain states), then (it would follow that) the phenomenal zombies are logically impossible. However, it is arguable that this objection puts things on their heads.

Of course, it is really the case that if the identity theory is true, then the Zombie Possibility Thesis should be false―just like if the Zombie Possibility Thesis is true, then the identity theory should be false. The Zombie Possibility Thesis contradicts the identity theory; thus, they cannot both be true, and thus at least one of them is false.

We are to notice that this situation is not specific for the relationship between the Zombie Possibility Thesis and the identity theory―it is the situation that is common for all arguments. How otherwise can you argue against any theory? Any such argument is a contention (which may be a conjunction of simpler contentions) that we think to be true and that contradicts the theory at issue, from which we conclude that the theory is false. If the Zombie Possibility Thesis begs the question against the identity theory, then any argument against any theory begs the question, and if begging the question is a fallacy (as textbooks of logics usually tell us), then no nonfallacious argument against any theory is possible. If so, nonfallacious arguments for a theory are impossible too: any argument for a theory is an argument against all alternative theories and, therefore, begs the question against them, and, therefore, is fallacious. Thus, no nonfallacious argument is possible at all! On the other hand, if there are nonfallacious (not question-begging) arguments at all, then the fact that the Zombie Possibility Thesis contradicts the identity theory does not mean that the Zombie Possibility Thesis begs the question against the identity theory and that, therefore, an argument based on the Zombie Possibility Thesis is fallacious.

[Note: the part of Sepety’s argument which I fully concur with without my own qualification stops here.
I want to continue quoting him anyway. Just note that our positions don’t completely align past here.]

Now, let us consider things as they really stand on their “logical feet.” To begin with, there are several alternative theories about the mind–body relationship: several varieties of materialism (including the identity theory), dualism, idealism, and panpsychism. We look for arguments for and against these theories. Any such argument has premises. For an argument to be convincing, its premises should be plausible. Of course, their plausibility should be evaluated independently of the theories at issue. It would be really fallacious begging the question if we judge the plausibility of the premises of the proposed arguments by their consistency with our pet theory. On the contrary, we should (try as hard as we can to) begin with a neutral (with respect to the theories at issue) standpoint. To begin with, we do not know whether the identity theory, or interactionist dualism, or epiphenomenalism is true. Now, without assuming the truth of any of these theories, let us consider the question: are phenomenal zombies logically possible?

It seems that they are: imagine an exact atom-to-atom (or quark-to-quark, if you like) copy of your body, so that each atom of your zombie-twin is located and moves relative to its body exactly as the corresponding atom of your body relative to yours. There are all just the same physical interactions between atoms, and all just the same physical fields. Because neither atoms (quarks) nor physical fields experience anything, it is logically possible for all those processes to occur without any experiences.

If someone thinks otherwise, it is incumbent upon him/her to explain how those movements and interactions of atoms and physical fields can logically necessitate subjective experience. (A mere postulation of the identity of mental states with certain brain states does not count as an explanation.) If no such explanation is available, we should admit that there is no such logical necessity; that is, phenomenal zombies are logically possible. From this, we should proceed to what logically follows as to the truth/falsity of the theories at issue.

Unlike the Zombie Possibility Thesis, the identity postulate is not prima facie plausible at all. On the one hand, there are physical structures and processes―microparticles with certain spatial locations relative to one another that move (change their spatial locations) in certain ways and influence one another’s movements (interact according to the laws of physics) and physical fields (which are spatially distributed and changing with time in a law-abiding manner dispositions of influencing the movements of physical bodies―in particular, of microparticles). In all this, nothing implies (logically necessitates) subjective experiences. On the other hand, there are subjective experiences―what it is like, how it feels for a person to have a certain experience. The postulate that subjective experiences are identical with some physical structures and processes, as a mere postulate, without a substantial explanation of how subjective experiences can be identical with some physical structures and processes, is not merely prima facie implausible, but downright unintelligible.”

I found this article after writing my own entry (IV) — The Case of the Lunatic Fish. But in that entry, I expressed the point with which Sepety concurs in my own way: “ … circularity and ‘begging the question’ are not fallacies of thought, but fallacies of argument. An argument is circular and will ‘beg the question’ if it contains premises which will be seen with equal skepticism by someone who is skeptical of the conclusion of that argument for the same reasons they are skeptical of the conclusion. This is classified as a fallacy because the goal of an argument is to prove that skepticism wrong to the satisfaction of the skeptic—so an argument that begs the question fails at this task because it merely repeats implicitly, as one of its assumptions, the conclusion the skeptic doesn’t want to accept. The fact that the question–begging argument fails to objectively disprove the skeptic doesn’t mean, however, that any train of thought that is circular is either false or irrational for an individual to accept. The real question worth asking is: “Is this circle making contact with reality?”

And there very well may be true statements which we absolutely cannot, in principle, support in any way without at some point “begging the question.” To return to a previous example, … solipsism …. … You absolutely know without a shred of doubt that he is absolutely wrong—and yet, you just as absolutely have no conceivable way of “proving” it to him with any sound, non–circular argument. Appropriately, the example of solipsism deals (in different ways) with the same subject matter addressed in philosophy of mind: private subjective experience. The solipsist denies its existence anywhere but in the one case he experiences immediately and directly—his own. For this, the solipsist is universally considered absurd. Yet the eliminative materialist goes on to not only do that, but to deny it in even the one case he actually experiences indisputably, immediately and directly for himself—and for this he’s respected enough to publish in prestigious philosophy journals.”

In any case, I do think we can go one extremely important step further from here—I wanted to let Sepety speak first, because simply dismissing the argument as “question begging” is problematic in its own right and in a way well worth addressing on its own terms; and because even if it were question–begging, that actually still wouldn’t be a reason to cast it aside as trivial or uninteresting. If all that we actually had here in the end was something like: [(intuition A) ⇆ (valid logic) ⇆ (intuition B)], then this can still be a valuable way to set the tone for a discussion over whether (intuition A) or (intuition B) is more plausible—for asking: which do we have more reason to accept, given that accepting it would mean rejecting the other? In the worst case scenario, if we couldn’t strictly prove either premise true, we would still have established that these are the two premises one has to choose between—and even if we couldn’t find any objective grounds by which to establish the truth of one or the other, then we would still have established that it comes down to a matter of intuition. The materialist wouldn’t be able to say against the zombie arguer that the zombie arguer is wrong unless he could prove his own choice of intuition true—he could only say “we begin from different starting points, because my gut instincts lead me to favor intuition B over intuition A, and yours lead you to start from intuition A. But who knows which of us is right? Is there any way we can find some neutral territory on which to settle the question, or is it really just so up in the air?” If the intuition that zombies are logically conceivable is, by default, to be considered untrue until proven true, then the same standards would have to apply likewise to the so–called “identity theory.” Otherwise, we’re just rigging the courts by treating one suspect innocent until proven guilty and the other guilty until proven innocent. And that wouldn’t be any fair trial.

But Sepety doesn’t make the point nearly strongly enough: the “identity theorist” materialist does not actually think that causal mechanical properties and instances of subjective conscious experience really are just literally identical. Not even by his own lights! Consider: the materialist himself thinks that atoms (or substitute whatever microphysical entities you like) do—or at least could, which is the only premise we actually need in order to insist on the logical conceivability of their conceptual separability—possess causal mechanical properties without having subjective conscious experiences. So this premise really doesn’t even beg the question in the weak sense against the identity theory, because the “identity theorist” himself is in fact caught in an internal contradiction, whether he recognizes it or not. The only option that the materialist actually has here is to say that it is some other property of the material entities in our world—besides their purely physical geometric structural properties and blind physical dispositions towards various inert patterns of motion through physical space—in virtue of which our subjective conscious experiences can be intelligibly supposed to appear.

First, this—again—simply is not contrary to what the zombie argument as such actually aims to establish. If the materialist has acknowledged the need to provide an account of this sort at all, the zombie argument has already succeeded at its quite modest aims and we’ve already moved beyond that argument itself. But second, my evaluation of this attempt to solve the question is as follows: because the question of whether something is or is not a subjective conscious experience (distinct from the question of how complex or robust or detailed a given experience is) is an absolute binary, there is no way in principle to move in “steps” from something that is not an experience into something that is. So, the only possible candidate for what this “other property” could even conceivably be is in fact conscious experience itself—therefore, unless the “identity theorist” wants to take the alternative route of eliminating consciousness as we know it and convert his statement into something like “the brain (conceived of as a composition of entities with purely physical properties, as defined above) is identical to the brain” while leaving the mind out of it entirely, his only alternative option is to try to adopt panpsychism (which may or may not run into equally decisive problems of its own). Otherwise, the “identity theorist” himself is caught in an absolute self–contradiction, given that he himself fully (even if only implicitly) acknowledges that “physical properties” as defined here can in fact exist without subjective conscious experiences existing with them every time he supposes that microphysical entities can possess these kinds of physical properties without therefore having subjective conscious experiences by logical necessity.
(Note, however, that part of my own argument against panpsychism proceeds from the realization that the physical properties and the properties of subjective experience and intentionality still wouldn’t be “identical” even on panpsychism—as given by the fact that the panpsychist can still logically conceive of “zombie atoms!” Even universal concomitance wouldn’t constitute an “identity” claim. In a world with different chemical properties, it could have been the case that every atom of (other–world) oxygen would universally come bound to two atoms of (other–world) hydrogen and never chemically decompose, and it could be naturally impossible for anyone to break them down—perhaps because there was nothing in this world other than (other–world) H2O; we could, perhaps, even imagine a world containing nothing besides one giant ocean—but this still wouldn’t make (other–world) hydrogen and (other–world) oxygen identical.)

Our ordinary concept of consciousness—derived from our immediate, first–hand acquaintance with it “from within” our experience as conscious centers of experience—simply contains ingredients which our concept of physical causation manifestly does not. An “identity theory” that simply declares the two “identical” is just literally incoherent—it literally does not even rise to the status of a position on philosophy of mind—it is “not even wrong.” (See the following section for further elaboration.) The only way to even try to begin to formulate it into a position at all is either by adding something fundamental to our ordinary concept of the “physical,” or by taking something fundamental away from our ordinary concept of “consciousness.” Simply declaring the two “identical” by fiat doesn’t even begin to attempt anything like this.

_______ ~.::[༒]::.~ _______

Of course, in my own writing up to here, I have made an explicit argument against the so–called “identity theory”—and the prevalence of well–ingrained misconceptions about the zombie argument is exactly why I made the strategic choice to simply go straight to those supporting arguments themselves, instead of presenting them explicitly in light of the zombie argument in particular: “ … the conceptual ingredients involved in efficient physical causation and the conceptual ingredients involved in subjective, qualitative, phenomenal, intentionalistic thought and experience simply are not identical. And providing an account which “identifies” them would require a conceptual unification of a sort that takes some third kind of phenomena and explains in those terms exactly how the concepts of subjective experience and physical causation are unified through it. To reiterate the analogy once again: to claim that the man who delivers my mail in the morning is identical to the man who drinks at the bar on Friday night is to take two spatiotemporally conceived events and then provide spatiotemporal terms that perform the actual substantive work required to actually link them in space and time—namely, it requires a story like this: “when the man who delivers my mail on mornings goes home, he changes clothes and heads out to the bar—and that is how the man who delivers my mail turns out to be the man who drinks at the bar on Friday night—discovering this additional fact is how we know it turns out to be the same man at all.” If I don’t have an account like this, then I am simply not justified in declaring that the two are the “same man.” And if I can actually see the two standing side by side at the same moment (as I can for my physical brain and my subjective stream of conscious experiences) and see that they very well don’t even look alike, no less, then the statement is just literally incoherent unless and until it gets a whole lot of justifying explanation.

In a proper account where two things that weren’t obviously identical at first later empirically turn out to be, a bridging spatiotemporal event links two other spatiotemporal events together in space and time; two events composed of the same basic category of ingredients are linked by an account which actually bridges them in the clearly explicable terms of that same exact ingredient. But without an actual bridge to actually connect these two things in common terms, calling them “identical” would simply be incoherent. I can potentially provide an account which “identifies” the man who delivers my mail in the morning with the man who drinks at the bar on Friday night, but I cannot even potentially provide an account which “identifies” the man who delivers my mail in the morning with the year 1977—the very terms involved in the two different concepts are simply different. And the notion of “identifying” subjective first–person qualitative experience with physical structure and causal process is a conceptual confusion more on par with the latter example than with the former, not merely because the two concepts are not prima facie the same, but because they are composed of such different basic conceptual ingredients that there are simply no common terms that could possibly perform the actual substantive function of actually bridging them. And it is clear on looking at them that no supporter of any so–called “identity theory” has ever actually attempted to pull off the required task. “Identity theories” do not in practice amount to surprising discoveries overturning ordinary intuition, but rather to basic conceptual confusions that come nowhere close to actually doing what they claim to do.”

Again, I think we can see that subjective conscious experience cannot be produced by any combination of blind inert causal interactions, in and of themselves, without something extra (whether that’s some extra mysterious properties possessed by the objects involved in these blind, inert causal interactions themselves, à la panpsychism; an additional set of types of laws, à la Chalmers’ proposal; or, as I argue, quite simply the fundamental phenomenon of consciousness itself—a phenomenon which does not “have” the properties of subjectivity, intentionality, etc., but rather quite simply “is” the un–quantifiable, qualitative phenomenon of subjectivity and intentionality (etc.) extended across time—“I” am this temporally extended stream of subjective conscious experiences and intentionalistic thoughts, and this extended stream of experiences and thoughts is not “identical to” or “reducible to” anything other than itself. That, at least, is my position, and what you’re signing up to watch me defend in rebellion against the current zeitgeist if you should choose to follow me).

_______ ~.::[༒]::.~ _______

One partial problem with the way the argument is formulated is that it leads some readers to think that zombies might only be “possible” in the relevant sense if epiphenomenalism is true and the consciousness we experience plays no causal role in our behavior. Otherwise, so the reasoning goes, if we took consciousness out of the picture, then we wouldn’t have a behavioral duplicate of the human beings in our world—therefore, zombie duplicates of the people in our world aren’t really conceivable, and the argument from the conceivability of zombies fails.

I’ve argued that epiphenomenalism is not just implausible in the way that many think, but as decisively and conclusively refutable as anything could possibly be (see my essay (IV)). It is also quite obvious that I think the zombie argument is demonstrating something. What is going on here? Chalmers’ own most direct response to this peculiar point doesn’t help us much either: “…the possibility of zombies does not obviously entail epiphenomenalism. To see this, note that an interactionist dualist can accept the possibility of zombies, by accepting the possibility of physically identical worlds in which physical causal gaps (those filled in the actual world by mental processes) go unfilled, or are filled by something other than mental processes. The first possibility would have many unexplained physical events, but there is nothing metaphysically impossible about unexplained physical events.”

We … could insist on that premise, sure. But at least intuitively, it very obviously weakens the strength of the conceptual clarification we’re trying to make. And the key point is that I think we can get around it entirely, with a very simple tweak—and it turns out to be the same tweak that gets us around the other major set of unnecessarily technical, complicated objections that try to defeat some step of inference from the “conceivability” to the “metaphysical possibility” of zombies.

Such arguments may, for example, take some condition which we seem to be able to conceive—and then point out that, despite first appearances, it turns out that things we (supposedly) thought we could conceive of being true actually are not logically possible after all. Is 7,741 a prime number? What about 7,742? We don’t know, therefore—the argument goes—we can conceive of either being prime or not prime. But it turns out that 7,741 is prime, while 7,742 is the product of 79 x 98—therefore it is logically impossible for 7,742 to be prime, and logically impossible for 7,741 not to be. And therefore, conceivability is a useless guide to logical possibility.
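(For what it’s worth, the arithmetic behind the example is mechanically checkable—a minimal trial–division sketch in Python, purely for illustration of the numbers involved:

```python
def is_prime(n: int) -> bool:
    """Trial division: test candidate divisors up to the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(7741))  # True: 7,741 has no divisor other than 1 and itself
print(is_prime(7742))  # False: 7,742 = 2 x 7 x 7 x 79 = 79 x 98
```

No epistemology hangs on the computation itself, of course—the philosophical point concerns what it meant to “conceive” of either answer before running it.)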

Now, I could follow along with this argument and counter by denying that I actually can “conceive of” 7,742 being a prime number or of 7,741 not being a prime number in any meaningful sense at all. I could make still yet some other fresh new irritating distinction between “epistemic conceivability,” defined as the mere “ability to imagine that it might turn out to be the case that…,” and “robust conceivability” defined as “the capacity to actually hold two concepts clearly in mind and actually imagine them all the way through with or without each other”—and then say that I can robustly conceive of the notion of a philosophical zombie, whereas it is only “epistemically conceivable” for me before discovering the answer that 7,742 might be prime or that 7,741 might be composite—and then I can deny that what goes for epistemic conceivability in cases like these goes for robust conceivability as it applies in the zombie case.

But again: why even go there? Once again, I think we can get around all of these kinds of objections and wipe a great deal of technical obfuscation out of the argument in one fell swoop. Instead of arguing that “zombie worlds are conceivable” and then trying to justify a modal premise to allow us to go from “conceivability” to “metaphysical possibility” (or whatever), I can simply say the following: “If the premises of materialism were true (that the world is, at root, a blind process of inert forces—whatever the details of their structural composition or how they might causally operate—evolving through sheer causal mechanism; in other words, if whatever the “bedrock” ingredients of reality are, they intrinsically lack subjective experience or intentionality), it would follow that consciousness of the sort that we experience immediately could not have appeared out of such ingredients. In other words, the premises of materialism predict the non–existence of the sort of consciousness we experience. But since we do have the sort of consciousness we know first–hand that we experience, then, the premises of materialism are falsified by its existence.” 

Obviously, these premises would need to be justified. I think they can be—in exactly the ways I have summarized above, and elaborated on in extensive detail across this series. But the places where this argument would need support are just exactly the same places that the ordinary formulation already does; and notice that, for example, there is no basis anywhere in this presentation of the argument for anyone to even try to charge that it could only be sound if epiphenomenalism were true. On this formulation, if it follows that taking consciousness away would change our behavior, all this would mean is that materialism would be falsified both by the existence of the kind of consciousness we experience, and from the behaviors which we are capable of only because of its existence. This puts the dialectic back where it actually belongs (where it is, as I have explained, materialism which struggles to avoid either eliminating the mental or rendering it epiphenomenal as such)—and simply has no need to invoke any abstract, unintuitive modal premises whatsoever. This formulation obviously isn’t without its own need for complicated defense, but I think that defense can be provided—and as I see it, it takes away nothing that Chalmers’ formulation of the argument accomplishes, but it does save us a whole hell of a lot of wasted time by efficiently avoiding more than one redundant and exasperating detour that the argument as it is most commonly formulated so frequently ends up tied up in. We could address arguments like the epiphenomenalist critique and critiques of the step from “conceivability” to “possibility” on their own terms, and perhaps they do fail either way—but if we don’t need to, why even muddle the debate by adding them in? 
This reformulation focuses more clearly on the premises which actually form the crux of the debate: whether we can get subjective conscious experience and intentionality out of inert causal mechanism; and if not, what exactly we would need to add to the picture in order to be able to get it.

Bonus footnote: Philip Goff, Why Physicalists Have More to Fear from Ghosts than Zombies, which argues that Descartes’ original arguments for the conceivability of disembodied consciousness (the supposedly emergent macro–phenomena without its supposedly micro–reductive base) actually do have certain virtues in some instances, against a certain kind of physicalist position, over zombie arguments for the conceivability of unconscious bodies (the supposedly micro–reductive base without the supposedly emergent macro–phenomena).

Consciousness (VIII) — Breaking (Down) Bad (Philosophy of Science)

(Note: this entry is still in rough draft form.)

It’s incredibly important that we try to stay clear on just what exactly the relationship is between philosophy and science if we want to properly understand either of them. A rudimentary misunderstanding of the relationship is frequently demonstrated when a critic of a philosophical proposal asks whether it makes any testable, falsifiable predictions—as if under the impression that any philosophical idea is nonsense unless it does what scientific theories do and makes concrete, empirical predictions about the future that could be tested and proven wrong. This not only fails to understand how philosophy and science stand in relationship to each other as will be discussed momentarily, it fails to understand the very philosophy of falsificationism—in a way that its founder, Karl Popper himself, objected to repeatedly within his own lifetime.

What Karl Popper actually did was propose falsification as a way to answer the question, “How do we draw the line between the particular domain of inquiry which is called ‘science’ and others?” What Karl Popper did not do was propose falsification as a way to answer the question, “How do we draw the line between statements that mean something and statements that don’t?” In Chapter 1, Section 6, and footnote 3 of The Logic of Scientific Discovery, Popper writes: “Note that I suggest falsifiability as a criterion of demarcation, but not of meaning. Note, moreover, that I have already (section 4) sharply criticized the use of the idea of meaning as a criterion of demarcation, and that I attack the dogma of meaning again, even more sharply, in section 9. It is therefore a sheer myth (though any number of refutations of my theory have been based upon this myth) that I ever proposed falsifiability as a criterion of meaning. Falsifiability separates two kinds of perfectly meaningful statements: the falsifiable and the non-falsifiable. It draws a line inside meaningful language, not around it.”

In a sense, “philosophy” is the term we use for the analysis of claims that attempt to “predict” why things are as they are right now, where some of the most fundamental disagreements are over what it is that these claims do, in fact, “predict.” My rejection of physicalism throughout this series, for example, rests on my reaching the conclusion through conceptual analysis that the premises of physicalism “predict” that it should be impossible, in principle, for us to have the conscious awareness and intentionality that we know we have right here and now, and that physicalism is thus ‘falsified’ by their existence—and were I to debate this with a defender of physicalism, that debate would largely center (1) on whether consciousness possesses the kinds of traits I describe it as possessing; or, since most physicalists want to avoid eliminativism, which even they acknowledge to be self-defeating and absurd, (2) on whether or not the premises of materialism do in fact entail the “prediction” that the consciousness we experience should be incapable of possessing the particular aspects and dimensions we happen to know from the inside that it does.

Philosophy tries to account ‘backwards’ for why what we see happening now is happening (and what must be true in order for it to happen); science tries to project ‘forwards’ into what will happen later. While some suggest a picture on which philosophy is increasingly rendered irrelevant by, as it concedes ground to, an inevitably advancing science answering what we previously were resigned to think were just “armchair” considerations, in a sense the truth is just exactly the opposite: every time a scientific advancement projects forwards and increases our ability to predict what will happen, the “what will happen” just gets included into our “what we see happening now,” and it is left to philosophical consideration to form any interpretation at all of why what we see happening is happening—what would need to be true about the ultimate and underlying nature of reality in order for it to be possible, to begin with, that what happens can happen.

One of the places this is currently most obvious and easiest to see is in discussion of how to interpret quantum physics: Do we follow Von Neumann and conclude that the acts of observation from consciousness itself are what causes the “wave function” to collapse into a single determinate observation? Or do we follow Hugh Everett and conclude that the “wave function” never truly collapses at all; and that, rather, every possibility included within it represents a variety of universes all branching off simultaneously into an expansive multiverse from the original point of “collapse”? Or do we follow Niels Bohr and Werner Heisenberg and say that the “wave function” is just a theoretical construct that doesn’t signify anything other than our own epistemic ignorance?

“Most thinkers of any degree of sobriety allow, that an hypothesis…is not to be received as probably true because it accounts for all the known phenomena, since this is a condition sometimes fulfilled tolerably well by two conflicting hypotheses…while there are probably a thousand more which are equally possible, but which, for want of anything analogous in our experience, our minds are unfitted to conceive.” ~ John Stuart Mill, A System of Logic ([1867] 1900, 328)

“Science” isn’t going to answer those questions for us. In as much as “science” refers to “the practice of following the scientific method,” it means testing and refining hypotheses that entail empirical consequences in order to more accurately predict future empirical observations. The problem is that, by definition, all of the above interpretations of quantum physics entail the same empirical consequences. They all account for the same data—in different ways. Further discoveries in physics may end up answering that question, but if they do, it will only be by changing the details and handing over a new set of facts that it will be just as much left up to us, yet again, to interpret and hang together into a cohesive picture. (Perhaps an argument could be made that it won’t be, and that a completed physical account will necessarily only have one possible interpretation, but unless and until that actually happens, that argument too will necessarily also be a philosophical one.)

_______ ~.::[༒]::.~ _______

Similarly, if we have a theoretical model of how the world works that allows us to have success in making further predictions, that no more proves that the theoretical entities suggested by that model are real despite being beyond the direct reach of our senses than the success of mathematics in allowing us to build things and predict their behavior proves that mathematical entities, too, exist in some literal realm beyond the direct reach of our senses (a position some do in fact defend: mathematical Platonism). And this is just the most basic of conundrums raised by philosophy of science—the most obvious of implicit contradictions resting in our beliefs before we’ve analyzed them philosophically and made a conscious effort to reshape them into something consistently coherent. Most of us will naturally want to accept that whatever entities are postulated by our best physical theories really exist, whether we are capable of directly observing them or not. Yet most of us—particularly the physicalists—won’t want to accept that mathematical entities exist in some literal fashion just because mathematics, which can be applied to such incredibly useful purposes, refers to them. There isn’t any immediately obvious answer here: if the entities posited by our most useful theories should be assumed to be real, then why shouldn’t mathematical entities rise with physical entities?

On the other hand, if we reject that underlying premise, then why shouldn’t really–existing physical entities fall along with mathematical entities? “Science” as the discipline of honing and refining predictions about future observation is simply not going to answer that question. This is a philosophical premise underlying our practice of science one way or another; held by us, and not the discipline of science itself. Philosophy deeply underlies even the most ordinary assumption that the scientific postulate of atomic forces as an explanation for empirical observations truly implies that the atom is even real. I repeat: no one has ever directly observed the existence of an atom. The idea is a hypothesis reached by “inference” to account for things like, for example, certain properties of the periodic table.

But is the fact that scientific theories are so effective at getting us places obvious proof that this could only be because the entities they describe are real? Absolutely not—even without adding philosophical analysis into the mix, it is scientifically well confirmed today that Newtonian mechanics most adamantly does not describe the world ‘as it is’—its conjectures about the underlying nature of how the world most essentially ‘is’ and works have been fundamentally superseded by the advances of quantum mechanics (“As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect”). And yet, we can still use principles derived from its assumptions with all kinds of success in fields like engineering, celestial mechanics, and so forth. Just as there are mathematical Platonists who reason that the success of mathematics entails the real existence of mathematical entities in realms we can’t observe, so there are “scientific anti–realists” who reason that the success of physical science simply does not entail that we have any real descriptions of any really existing physical micro–entities just because our theories are useful. The dilemma can go either way.

Perhaps scientific anti–realism is false. I’m perfectly well content to accept that it is—if convinced by the right kind of argument. My actual point is far more basic than even that; my point is this: if anti–realism is false, proving it false is going to require philosophical defense and explanation. It is not and will never in principle be solved by an “experiment” that makes a falsifiable prediction in a laboratory. If scientific realism is true, scientific realism itself is not a fact proven by scientific data. It is an answer to a question about how we should interpret that very data, when both realism and anti–realism about scientific theory are each making the attempt to philosophically defend the claim that they more adequately and naturally predict whatever data we see in front of us than the other. In an important sense, again, it is a question of reasoning “backwards” to ask what premise most adequately predicts and accounts for what we know is in front of us right now, rather than making more predictions about what we will see in the future (which, if confirmed, will just be added to a new collection of “what we know is in front of us right now”). Both types of questions are relevant and important. And they require categorically different kinds of methods to address, because they are categorically different types of questions.

_______ ~.::[༒]::.~ _______

P. M. S. Hacker, who along with the distinguished neuroscientist Max R. Bennett co–authored a volume titled Philosophical Foundations of Neuroscience, writes that “Philosophy … is neither an empirical science nor an a priori one, since it is no science. … It is a quest for understanding, not for knowledge. … Philosophical questions cannot be circumscribed by their form. Nor can they be circumscribed by their content, since they can, in principle, be concerned with any subject matter at all – any subject matter that gives rise to conceptual confusions and unclarities. These questions cannot be resolved by the empirical sciences, since they are not empirical questions. They are all questions that are, directly or indirectly, solved, resolved or dissolved by conceptual investigation. One might therefore say, as above, that, in one sense, philosophy has no subject matter; but one might also say that, in another sense, philosophy has everything as its subject matter.… Philosophy is conceptual investigation.” I highly advise anyone reading this to take a break and read Hacker’s paper as well. He continues: “This assertion can easily be misunderstood. Does it mean that philosophy has a subject matter after all – namely concepts? That would be misleading. Being a conceptual investigation does not mean being solely about concepts. … questions of whether machines can think or whether the brain can think are philosophical. Neither can be answered by experimental science. To deny that they are about machines, brains, and what it is to think would be misleading. But to suggest that they are not, in a very distinctive sense, about the concept of thinking and its intelligible applicability or inapplicability to machines and brains would be to grossly misrepresent the investigation.”

One expression of the attitude of scientism is that scientists’ forays into philosophy can come to be treated with an absolutely undeserved degree of deference when we fail to recognize that the statements being made are even philosophical—and thus something the scientist qua scientist simply has no automatic special authority over—rather than scientific to begin with. From the implied (and utterly fallacious) assumption that the raw data of scientific investigation comes pre–packaged, so to speak, with its own conceptual categorization and interpretation, we might naively assume that anyone who is an expert on investigation of the empirical aspect of some topic is therefore automatically an expert on understanding anything there could possibly be to understand about any aspect of the subject in question. If we hold this assumption, we are wrong.

In discussing the origins of the Universe in The Grand Design, Stephen Hawking and Leonard Mlodinow write: “Because there is a law like gravity, the universe can and will create itself from nothing.” Could Hacker’s definition of philosophy as “a quest for understanding” applicable to “any subject matter that gives rise to conceptual confusions and unclarities” be any more relevant? Could any ‘empirical’ knowledge which either Hawking or Mlodinow might have about the science of cosmology or the workings of the law of gravity make the statement that “the universe can and will create itself” (which the skeptical reader might be inclined to think it could not do unless it already existed)—“from nothing,” no less, because “there is a law like gravity” (which the astute may have noticed is not “nothing”) any more respectable or any less conceptually confused and in need of philosophical clarification? Could there be any clearer demonstration that ‘knowing’ the “facts” simply does not suffice to prove that you understand what they mean? Even the most accomplished scientists on Earth can fail utterly at the most rudimentary level of philosophical—that is, conceptual—comprehension if they aren’t being careful.

Consider an artificial intelligence researcher who adopts the line endorsed by Ray Kurzweil in the fifth entry in this series that thinking ‘just is’ the execution of syntactical procedures (so that the man in the Chinese Room would “understand” Chinese in the only sense worth talking about—and so, by definition, would any machine the moment it became capable of ‘appearing’ to understand Chinese through a programmed ability to manipulate Chinese symbols). Suppose he goes on to apply the Turing Test to a variety of robots to see which “understand” according to the philosophical definition of “understanding” he has accepted, on which the only thing “understanding” entails is the ability to execute the appropriate functions anyway. So far, so good: our researcher has defined his philosophical premises, and he has begun empirical investigations in light of them which he plans to read through the lens of them. I would argue (as I do in extensive detail in that entry) that this philosophical premise is horrendously confused and wrong, but he would—at least—be keeping his philosophical premises and empirical findings in proper relation to each other.

Where our researcher would begin specifically committing the epistemological fallacy of scientism, however, is when he begins telling anyone who doubted that the machines he was testing “truly understood” Chinese just because they were passing the test that they were wasting his time because, if they think so, they “clearly don’t understand science and need to understand the Turing Test better.” The researcher’s fallacy would be to assume that the plain data of this investigation contains within itself the philosophical premise that all that it means to ‘think’ is just to possess the ability to manipulate symbols—but this premise is completely external to his empirical investigations and requires an entirely different type of defense. What our researcher would be missing is that someone could perfectly well understand exactly what the Turing Test involves and exactly what our researcher’s data was revealing, and still disagree with that philosophical premise. And no amount of “data” drawn from experiment could settle the truth or falsity of it one way or another.

Our researcher would, of course, immediately recognize this fallacy were he to see it committed by someone who does not share his particular premises and assumptions. Imagine someone who adopts the philosophical position that everything is conscious in some degree (panpsychism) conjoined with the position that the “mind dust” of tiny particles becomes unified into the organized phenomenal consciousness of a singular mind whenever these particles become arranged to perform a function together. Now, suppose this panpsychist researcher went around testing plants, thermometers, etc. for their ability to perform unified functions, concluding on the basis of these tests that each entity either does, or does not, possess an organized singular “mind.” Again, so far so good: our researcher would be staking his philosophical premises out (however wrong we might think they are), and then conducting his empirical investigations, and so far apparently keeping them distinct and in proper relation to each other.

But if our panpsychist researcher were simply dismissing the skepticism of the artificial intelligence researcher because he doesn’t understand ‘science’, this would again be the same exact fallacy—and our artificial intelligence researcher would recognize it and realize immediately that his dispute with the panpsychist would not be over empirical results, but over philosophical premises. The question of whether a given physical entity is organized in such a way as to be capable of performing an organized function is one that can be answered empirically, sure; the question of whether capacity to perform a function is what it takes to have a singular organized conscious “mind” is not, and no result of the former experiment, in and of itself, would prove or even ‘support’ it—it requires a fundamentally different sort of analysis and defense altogether.

This example also goes to show that the materialist is not the only one capable of committing the epistemological fallacy of scientism; anyone who fails to grasp the actual relationship between philosophical premises and empirical findings and pretends that the latter contain and prove the former—no matter what the details might be—is committing it. The fallacy is to smuggle a peculiar conceptual interpretation of some given bit of data into the data itself, pretend that the data simply comes pre–packaged with that interpretation without any additional work, and thereby shirk the obligation to defend the interpretation in the relevant philosophical terms—dismissing anyone who questions it as “not understanding the scientific data,” an ad hominem stand–in for answering anyone who questions the offender’s philosophical premises. The fallacy, in other words, is to engage in philosophy and then pretend not to have done so in order to shield one’s philosophical premises from attack, and from any need for defense, on the turf to which these premises actually do in fact properly belong—smuggling them in to escape these obligations illegitimately under the false guise of “science” when they are not, properly speaking, “scientific” assumptions at all.

_______ ~.::[༒]::.~ _______

A similar fallacy is committed whenever anyone assumes that neuroscience, as such, just straightforwardly proves that consciousness ‘is’ the brain. In his 1994 book The Astonishing Hypothesis, for example, the neuroscientist Francis Crick writes: “‘You’—your joys and your sorrows, your memories and your ambitions, your sense of identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: ‘You’re nothing but a pack of neurons.’”

Incredibly enough, this very modern fallacy has already been addressed in detail by philosophers for decades, if not centuries—and the empirical findings of modern neuroscience, when we look at them a little more closely and in greater detail, actually add surprisingly little to what has already been said on the subject by very intelligent people for a very long time. As so often happens, philosophy—even ancient philosophy—not only remains relevant; it clearly dismantles fallacies that pervade modern assumptions.

We have, of course, addressed the conceptual issues involved in this claim already: the conceptual ingredients involved in efficient physical causation and the conceptual ingredients involved in subjective, qualitative, phenomenal, intentional experience simply are not identical. And providing an account which “identifies” them would require a conceptual unification of a sort that takes some third kind of phenomena and explains in those terms exactly how the concepts of subjective experience and physical causation are unified through it. To reiterate the analogy once again: to claim that the man who delivers my mail in the morning is identical to the man who drinks at the bar on Friday night is to take two spatiotemporally conceived events and then provide spatiotemporal terms that perform the actual substantive work required to actually link them in space and time—namely, it requires a story like this: “when the man who delivers my mail on mornings goes home, he changes clothes and heads out to the bar—and that is how the man who delivers my mail turns out to be the man who drinks at the bar on Friday night—discovering this additional fact is how we know it turns out to be the same man at all.”

A bridging spatiotemporal event links two other spatiotemporal events together in space and time; two events composed of the same basic category of ingredients are linked by an account which bridges them in the clearly explicable terms of that same exact ingredient. But without an actual bridge to actually connect these two things in common terms, calling them “identical” would simply be incoherent. I can potentially provide an account which “identifies” the man who delivers my mail in the morning with the man who drinks at the bar on Friday night, but I cannot even potentially provide an account which “identifies” the man who delivers my mail in the morning with the year 1977—the very terms involved in the two different concepts are simply different. And the notion of “identifying” subjective first–person qualitative experience with physical structure and causal process is a conceptual confusion more on par with the latter example than with the former, not merely because the two concepts are not prima facie the same, but because they are composed of such different basic conceptual ingredients that there are simply no common terms that could possibly perform the actual substantive function of actually bridging them. And it is clear on looking at them that no supporter of any so–called “identity theory” has ever actually attempted to pull off the required task. “Identity theories” therefore do not amount to surprising discoveries overturning ordinary intuition, but rather to basic conceptual confusions that come nowhere close to actually doing what they claim to do.

So in practice, an “identity theory” would therefore either have to entail eliminativism towards the aspects of consciousness we know directly and immediately from first–hand experience—the same first–hand experience through which we know everything else, and the only thing through which we know anything else at all (thus “identifying” the brain’s physical processes with something other than subjective, qualitative, intentionalistic consciousness, and thus in reality just denying the latter’s actual existence outright); or else it would have to ‘build’ subjective experience and intentionality out of nonintentional and nonexperiential ingredients (but this approach necessarily fails in principle too, as explained in my essays (IV) and (V)); or else it could try to redefine the physical to say that it intrinsically contains these very ingredients within itself, already live and present, at the deepest levels of reality, as in panpsychism, and “identify” mind with a redefined “brain” that way (but as we saw in (VII), this merely runs into one of the same exact fallacies already plaguing the other accounts—rejection of which was the only reason we ever even considered panpsychism as a potential solution to begin with).

_______ ~.::[༒]::.~ _______

But I don’t just want to address the claim that mind–brain “identity theory” is true, here—I want to address the particular epistemological fallacy involved specifically in the claim that neuroscience proves it. This fallacy was, in fact, addressed all the way back in 1898 by the pre–eminent American philosopher, psychologist, and physician William James: “When the physiologist … pronounces the phrase, ‘Thought is a function of the brain,’ he thinks of the matter just as he thinks when he says, ‘Steam is a function of the tea–kettle,’ ‘Light is a function of the electric circuit,’ ‘Power is a function of the moving waterfall.’ In these latter cases the several material objects have the function of inwardly creating or engendering their effects, and their function must be called productive function. Just so, he thinks, it must be with the brain. Engendering consciousness in its interior, much as it engenders cholesterin and creatin and carbonic acid, its relation to our soul’s life must also be called productive function. …

… But in the world of physical nature productive function of this sort is not the only kind of function with which we are familiar. We have also releasing or permissive function; and we have transmissive function. The trigger of a crossbow has a releasing function: it removes the obstacle that holds the string, and lets the bow fly back to its natural shape. So when the hammer falls upon a detonating compound. By knocking out the inner molecular obstructions, it lets the constituent gases resume their normal bulk, and so permits the explosion to take place. In the case of a colored glass, a prism, or a refracting lens, we have transmissive function. The energy of light, no matter how produced, is by the glass sifted and limited in color, and by the lens or prism determined to a certain path and shape. Similarly, the keys of an organ have only a transmissive function. They open successively the various pipes and let the wind in the air–chest escape in various ways. The voices of the various pipes are constituted by the columns of air trembling as they emerge. But the air is not engendered in the organ. The organ proper, as distinguished from its air–chest, is only an apparatus for letting portions of it loose upon the world in these peculiarly limited shapes.

My thesis now is this: that, when we think of the law that thought is a function of the brain, we are not required to think of productive function only; we are entitled also to consider permissive or transmissive function. And this the ordinary psycho–physiologist leaves out of his account. …

… Isn’t the common materialistic notion vastly simpler? Is not consciousness really more comparable to a sort of steam, or perfume, or electricity, or nerve–glow, generated on the spot in its own peculiar vessel? Is it not more rigorously scientific to treat the brain’s function as function of production? … The immediate reply is, that, if we are talking of science positively understood, function can mean nothing more than bare concomitant variation. When the brain–activities change in one way, consciousness changes in another; when the currents pour through the occipital lobes, consciousness sees things; when through the lower frontal region, consciousness says things to itself; when they stop, she goes to sleep, etc. In strict science, we can only write down the bare fact of concomitance; and all talk about either production or transmission, as the mode of taking place, is pure superadded hypothesis, and metaphysical hypothesis at that, for we can frame no more notion of the details on the one alternative than on the other. Ask for any indication of the exact process either of transmission or of production, and Science confesses her imagination to be bankrupt. She has, so far, not the least glimmer of a conjecture or suggestion—not even a bad verbal metaphor or pun to offer. Ignoramus, ignorabimus, is what most physiologists, in the words of one of their number, will say here.

… Into the mode of production of steam in a tea–kettle we have conjectural insight, for the terms that change are physically homogeneous one with another, and we can easily imagine the case to consist of nothing but alterations of molecular motion. But in the production of consciousness by the brain, the terms are heterogeneous natures altogether; and as far as our understanding goes, it is as great a miracle as if we said, Thought is ‘spontaneously generated,’ or ‘created out of nothing.’ … The theory of production is therefore not a jot more simple or credible in itself than any other conceivable theory. It is only a little more popular. All that one need do, therefore, if the ordinary materialist should challenge one to explain how the brain can be an organ for limiting and determining to a certain form a consciousness elsewhere produced, is to retort with a tu quoque, asking him in turn to explain how it can be an organ for producing consciousness out of whole cloth.”

Writing well over a century ago, James expresses more conceptual clarity and insight about what findings of mind–brain correlation (“concomitance”) would or would not actually prove than many have since who know far more about the details of what those “concomitances” are than James ever possibly could have. The point couldn’t be more elementary—indeed, it may at first seem anticlimactic that it is as simple as it is. Yet one of the most basic rules of, say, population studies in nutritional science is that correlation does not equal causation. If we want to read causation out of a nutritional population study, we have to acknowledge that we are interpreting that data and be highly careful about our assumptions and the reasoning we follow them through with. But the data itself just does not plainly ‘give us’ causation—we have to interpret it and make further inferences to try to get at causation, and this is additional work that can’t be achieved simply by acquiring more empirical data on the same correlation we’re trying to interpret. And what goes for properly understanding what (if anything) nutritional science might have to say about how we should eat if we want to be healthy goes every bit as much for properly understanding what (if anything) neuroscience might have to say about the true nature of “consciousness” or of the “self.”

A critic might ask where the “evidence” for such a hypothesis is—and if so, he once again utterly misses the point: either we should say that there is none, but realize that by the same token there would be no “evidence” for the “productive hypothesis” either—or else we should say that the “evidence” for it found in correlations between states of the brain and states of subjective experience is just exactly the same data claimed as “evidence” for the “productive hypothesis.” The point is that data of this sort is open to interpretation. And it takes philosophical, conceptual analysis—“concerned” in Hacker’s terms “with what does or does not make sense”—to decide how to interpret it. More of the same data we’re asking how to interpret in the first place can’t settle the question any more than collecting ever increasing amounts of data on the correlation between ice cream consumption and the murder rate can settle the question of how it makes the most sense to assume the two are causally related (if at all). The required sort of conceptual analysis is exactly what I have been aiming to offer throughout this series.

_______ ~.::[༒]::.~ _______

The common conviction is of course that neuroscience empirically validates the claim that the mind “just is” the brain. Yet, if any investigation seems to potentially support or require some non–materialistic interpretation of the relationship between the mind and the brain (say, apparent experiences of seeing one’s body from outside during Near Death Experiences), those empirical findings are most often rejected immediately out of hand purely because of the a priori assumption that consciousness (or the brain) just can’t work that way. But wait—on exactly what basis was that claim supposed to have been justified in the first place? The empirical findings of neuroscience?

The circularity in this way of reasoning runs deep; and a similar dynamic is noted by David Chalmers in regard to interpretations of quantum physics and the question of where those interpretations leave the mind, when he writes: “It is interesting that philosophers reject interactionist dualism because they think it is incompatible with physics, whereas [quantum] physicists reject the relevant interpretations of quantum mechanics because they are dualistic!” Which is it, then, that actually comes first? And where should we actually start? I might also quote the philosopher Laurence BonJour here when he says that: “One of the oddest things about discussions of materialism is the way in which the conviction that some materialist view must be correct seems to float free of the defense of any particular materialist view. It is very easy to find people who seem to be saying that while there are admittedly serious problems with all of the specific materialist views, it is still reasonable to presume that some materialist view must be correct, even if we don’t know which one.”

What is left of the substance of the claim of materialism?

One might have thought that, given the intensity with which the belief is often held, there was at least some strongly compelling argument someone had come up with by now—even if only as an after–the–fact rationalization—for either the conclusion that some form of materialism must be true, or that dualism must be false. There turns out to be far less than the tenacity of popular conviction in the belief might have led us to expect—and even the materialists can frequently be found, in various forms, admitting it. As we previously saw, Daniel Dennett admits in Consciousness Explained that he holds the “apparently dogmatic” rule that dualism is to be avoided “at all costs” even though he does not think that he “can give any knock-down proof that dualism […] is false or incoherent.” And he continues holding to this (only “apparently” dogmatic) rule even as it pushes him to dismiss the very existence of experience and intentionality altogether as mere fictions. So, as we have seen, does Alex Rosenberg in The Atheist’s Guide to Reality.

Consider John Searle’s puzzlement over the obvious absurdity of so many of the popular views in philosophy of mind: “No one would think of saying, for example, “Having a hand [should] just [be defined as] being disposed to certain sorts of behavior such as grasping” (manual behaviorism), or “Hands can be defined entirely in terms of their causes and effects” (manual functionalism), or “For a system to have a hand is just for it to be in a certain computer state with the right sorts of inputs and outputs” (manual Turing machine functionalism), or “Saying that a system has hands is just adopting a certain stance toward it” (the manual stance).” In The Rediscovery of the Mind, he writes: “How is it that so many philosophers and cognitive scientists can say so many things that [are] obviously false? … Acceptance of the current [physicalist] views [in philosophy of mind] is motivated not so much by an independent conviction of their truth as by a terror of what are apparently the only alternatives. That is, the choice we are tacitly presented with is between a “scientific” approach, as represented by one or another of the current versions of “materialism,” and an “unscientific” approach, as represented by Cartesianism or some other traditional religious conception of the mind.”

Fear of religion? Thomas Nagel had something to say about that in The Last Word in 1997: “Even without God, the idea … that the relation between mind and the world is something fundamental makes many people in this day and age nervous. I believe this is one manifestation of a fear of religion which has large and often pernicious consequences for modern intellectual life. In speaking of the fear of religion, I don’t mean to refer to the entirely reasonable hostility toward certain established religions and religious institutions, in virtue of their objectionable moral doctrines, social policies, and political influence. Nor am I referring to the association of many religious beliefs with superstition and the acceptance of evident empirical falsehoods. I am talking about something much deeper—namely, the fear of religion itself. I speak from experience, being strongly subject to this fear myself: I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers. It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that. My guess is that this cosmic authority problem is not a rare condition and it is responsible for much of the scientism and reductionism of our time.”

Is it a coincidence that, by and large, the only kinds of people who see reason to advocate views of this sort are people openly identifying with and representing “atheism”—one of the “Four Horsemen of the New Atheism,” the author of “The Atheist’s Guide to Reality”? It begins to seem entirely plausible that the atheist doesn’t want to face up to the depth of the problems with physicalist accounts because this would have to result in the conclusion that those religious conceptions of the mind perhaps weren’t so far off the mark after all—and granting even that much could just be too much for the atheist to bear. Here lies the problem, which I’ve already alluded to in entry (I): the atheist most typically wants to present atheism as the “default” epistemic position; as a mere “lack” of belief, and not a positive philosophy—comparable to simply lacking the belief that there is a teapot orbiting the sun (as in a popular analogy coined by Bertrand Russell). Yet, if it turns out that advancing a consistent atheism does in fact require advancing a specific positive philosophy—that is, physicalism about human minds—then atheism might begin to look more like a positive worldview, carrying the epistemic commitments and therefore all the burdens of a positive “religious” worldview, than the ‘negative–default’ atheist had hoped. (But note that I will analyze the antecedent of this conditional in much greater detail at some later point.)

Why stay awake at night wondering how to fit consciousness as you directly know and experience it into a theory you’ve invented about the nature of the world when you can just settle all cognitive dissonance securely in advance by pretending the issue is settled and convincing yourself that you have the authoritative weight of science unquestionably on your side? I think we can acknowledge that “fear of religion” is one of the deepest sources of the materialist prejudice without implying that this fear is necessarily valid. On the one hand, James P. Moreland argues that “…there has been a connection both historically and theologically between the existence of a substantial soul and the supernatural realm. If the soul exists, then this is very good reason to think that a personal, self–aware being—God—exists.”

On the other hand, when one of the most influential philosophers of the 20th century, A. J. Ayer—who brought logical positivism to the English–speaking world with a philosophical treatise written at 26, and was certainly one of the most prominent atheists of the last hundred years—had a near death experience which he said “weakened [his] conviction that [his] genuine death, which is due fairly soon, will be the end of [him],” he went on to write that: “A prevalent fallacy is the assumption that a proof of an afterlife would also be a proof of the existence of a deity. This is far from being the case. If, as I hold, there is no good reason to believe that a god either created or presides over this world, there is equally no good reason to believe that a god created or presides over the next world … If our lives consisted in an extended series of experiences [e.g. across multiple afterlives], we should still have no good reason to regard ourselves as spiritual substances. … [and] I continue to hope that [my genuine death] will be [the end of me].”

Notice how often the word “hope” appears here: Ayer even admits that he hopes that his death will be the end! Could that not be every bit as powerful a motivating force behind physicalism as hope for continuation of life after death might be for dualism? I think we can admit that the “fear of religion” identified by both Searle and Nagel plays a substantial role in the prejudice towards physicalism without taking any stance one way or the other on whether or not this fear is justified. Or at least without taking any stance on the question yet—I plan to explore it in more detail later. For now, let’s return to the arguments for and against dualism and materialism. William G. Lycan, a distinguished professor of philosophy at UNC, has written a valuable paper titled “Giving Dualism Its Due.” The paper, he tells us, is “an uncharacteristic exercise in intellectual honesty [which] grew out of a seminar in which for methodological purposes I played the role of a committed dualist….”

He goes on: “I have been a materialist about the mind for forty years, since first I considered the mind–body issue. … My materialism has never wavered. Nor is it about to waver now; I cannot take dualism very seriously … I have no sympathy with any dualist view, and never will. … Being a philosopher, of course I would like to think that my stance is rational, held not just instinctively and scientistically and in the mainstream but because the arguments do indeed favor materialism over dualism. But I do not think that, though I used to. My position may be rational, broadly speaking, but not because the arguments favor it  … the standard objections to dualism are not very convincing; if one really manages to be a dualist in the first place, one should not be much impressed by them. My purpose in this paper is to hold my own feet to the fire and admit that I do not proportion my belief to the evidence. … Arguments for materialism are few. … J.J.C. Smart was perhaps the first to offer reasons …[he wrote that]: “[S]ensations, states of consciousness,…seem to be the one sort of thing left outside the physicalist picture, and … I just cannot believe that this can be so….  The above is largely a confession of faith….” …

… The materialist of course takes the third–person perspective; s/he scientistically thinks in terms of looking at other people, or rather at various humanoid bags of protoplasm, and explaining their behaviour. But the dualist is … in the first–person perspective, acquainted with the contents of her own consciousness, aware of them as such. Notice carefully that we need not endorse many of Descartes’ own antique and weird views about the mind … The point is only that we know the mind primarily through introspection. Duh! That idea may, very surprisingly, be wrong … [but] to deny it is a radical move.

… suppose … that you are a Cartesian dualist. … There are nine objections to your view. Of course there are; any interesting philosophical view faces at least nine objections. The question is how well you can answer them. And I contend that the dualist can answer them … respectably. … I shall start with the Interaction Problem … [and] what … is the problem? I believe it is that even now we have no good model at all for Cartesian interaction. … I agree that the lack of a good model is a trenchant objection and not just a prejudice. But … for one thing, the lack results at least partly from the fact that we have no good theory of causality itself. … [Paul Churchland argues that] neuroscience explains a great deal and dualism explains hardly anything. But the comparison is misplaced. Dualism competes, not with neuroscience (a science), but with materialism, an opposing philosophical theory. Materialism per se does not explain much either. … the objections [to dualism] are not an order of magnitude worse than those confronting materialism in particular. … The dialectical upshot is that … going just by actual arguments as opposed to appeals to decency and what good guys believe, materialism is not significantly better supported than dualism.” And none of this even addresses the arguments I have posed in this very series—except for footnote 3, where Lycan does in fact write: “For the record, I now believe that there is a more powerful argument for dualism based on intentionality itself: from the dismal failure of all materialist psychosemantics….” (See my essay (V) for a full explanation of why this argument is so forceful.)

_______ ~.::[༒]::.~ _______

In any case, this discussion should be sufficient to set the tone for the articles to come—consider this a general, broad overview introduction to the next category of articles. In these, I plan to explore in more detail the objections against dualism, especially those drawn from “science,” while including some supportable speculations on how to formulate a “working picture” of what dualism actually entails. I plan to explore studies claimed to have relevance for the question of free will in combination with a discussion of whether the concept of free will is coherent and possible, psychological disorders claimed to have relevance for the unity of personhood, objections against the possibility (or plausibility) of dualism from “causal closure,” and more. Having spent articles (I)—(III) defending the background possibility that dualism could be true, and having spent articles (IV)—(VII) explaining why I think we’re justified on a priori conceptual grounds to believe that it is, the next series will explore whether there are any overriding reasons sufficient to convince you—if you’ve followed me up to here—to believe it turns out ‘empirically’ not to be true after all.