Consciousness (V) — Thinking Thoughts About the “Aboutness” of Thought

Previous Posts:
(I) — Atheism, Science, Philosophy: The Origins of the Conflict
(II) — Digging Up the Conflict’s Roots
(III) — Does the World Pantry Stock More than Sugar?
(IV) — The Case of the Lunatic Fish

“In the beginning, there were no reasons; there were only causes. Nothing had a purpose, nothing had so much as a function; there was no teleology in the world at all.” — Daniel C. Dennett, Consciousness Explained

“[W]e are here discussing a fundamental and irreducible feature, or rather presupposition, of all thought, all conceptual activity and all action, and … our aim is simply to bring it to people’s notice, to try to help them grasp what we are talking about. It can only be noticed or grasped, or not; it cannot be further described or defined.” — Alan Gauld and John Shotter on intentionality, in Human Action and Its Psychological Investigation (1977)

The physicalist interpretation of the world and of human consciousness is often implicitly supported by the impression that, whatever the case may be about “qualia” (that queer, overly abstract–sounding term for the one phenomenon we could not know more intimately, indeed the one and only phenomenon we ever know “intimately” at all: subjective conscious experience), everything else about the human mind can be explained with the physicalist account’s conceptual resources alone. That is to say, even when the physicalist wants to grant that “qualia” may hang awkwardly in his picture of the nature of human consciousness and reality in a way he admits he doesn’t know how to resolve [1], he still expects that descriptions in terms of form, structure, and motion can at least explain everything that mind–brains do.

The picture of the situation this suggests is that the human entity can be understood as something like a computer—we just don’t know how to explain why experience comes along for the ride with its physical processes. However impossible in principle it may be to wrap subjective experience up within a physicalist account, the impression that it would be reasonable to expect this could be done stems from the belief that physicalist accounts of mechanism can at least explain everything that mind–brains do—and ‘if it can explain everything that they do,’ the reasoning goes, ‘why shouldn’t it eventually be able to explain what they are?’

That’s the impression, at least. But is it so? A crucially important distinction needs to be made here between, on the one hand, asking whether we could replicate a particular process with a procedurally defined mechanism; and on the other hand, asking whether that procedure and mechanism accurately describe the way the process being replicated is actually performed in the real world. In principle, we could most certainly replicate anything we witness mind–brains in our world doing through procedurally defined mechanisms that make no reference to consciousness—in principle, for example, we could build robots programmed to do things like scream “Ow, I’m in pain!”—and substitute any behavior you like other than independent logical reasoning (for reasons that will, I hope, become clear later), and in principle we should be capable of designing a robot to perform that behavior, too. However, this simply doesn’t entail that executing procedurally defined mechanisms which don’t make reference to, or involve the participation of, consciousness itself is how the human mind–brain actually does it.

In other words, the physicalist’s impression about the viability of physicalism in explaining the ultimate nature of the human organism, apart from the question of “qualia,” is aided by the fact that we seem at least able to imagine the functions which minds execute being performed without the active causal participation of consciousness itself—and this is what allows us to theoretically bracket off the “hard problem” of why conscious experience comes along with the causally eventful world from all the “easy problems” of detailing how that causally eventful world operates—a distinction which even David Chalmers concedes in his challenges to materialism. But the problem is that even if we can imagine this—even if consciousness doesn’t seem metaphysically necessary for the performance of these functional operations (i.e., they could conceivably exist even if consciousness didn’t)—consciousness may still be naturally necessary for their performance nonetheless (i.e., given the way our world actually is, consciousness is involved whenever these functional operations are performed, and those functions just aren’t going to happen otherwise in our world as it is). Suppose that in this Universe I was born trapped inside a brick cube with no tool to get out besides a fifty–five pound sledgehammer: the fifty–five pound sledgehammer would not be a metaphysically necessary means of breaking out of the cube—in some other Universe where I was given something else, a twenty pound sledgehammer would have been fully sufficient for the job—but given the way our Universe as we know it was actually set up, I just wasn’t going to be breaking my way out of there any other way. And more than that: in every Universe where a fifty–five pound sledgehammer is what I was given, the fifty–five pound sledgehammer will be naturally necessary, however metaphysically unnecessary it remains. So this is a case where the lack of metaphysical necessity on which the physicalist intuition is based simply does not prove the lack of natural necessity which that intuition would actually need in order to support the claim that we can understand the nature of our own world by analyzing the “easy problems” separately from the “hard” one. The fact that we can treat the functions of the “easy problems” as mere functions (supposing we really can) just wouldn’t prove that doing so is the best, or even a viable, way to understand the nature of the reality we are actually in.

Of course, simply stating that it isn’t proven won’t get us very far. Are there reasons to positively think that it is not through the execution of procedurally defined mechanisms which don’t make reference to, or involve the direct participation of, consciousness itself that the human mind–brain does what it does? Yes—and in fact, we have already seen clear hints of one in the last entry’s discussion of the self–detonating nature of epiphenomenalism: what is happening when I talk about experiencing a feeling of pain? What, in general, is happening when I talk about the subjective dimensions of my experience in specific distinction from the physical properties that come along with them? While we might be able to program a robot to ask questions about philosophy of mind, or scream “Ow, I’m in pain!” without it actually knowing what these terms mean, this is clearly not how it happens with us: clearly(*), we do these kinds of things because we subjectively have the actual experience of pain, and because our comprehension of the conceptual difference between subjective experience and physical phenomena is what allows the problems of philosophy of mind to occur to us. Hence, it is apparent on this basis alone that conscious experience itself plays some role in even the functions that our minds are seen to perform in the world. And with no barrier left in principle to the idea that consciousness as such plays a role in the function of human minds, the only questions remaining are the empirical ones of “what roles?” and “to what extent?”

(*) When I say “clearly,” I do not mean that intuition alone is sufficient to prove this to be true. I refer, rather, to the refutation of epiphenomenalism in the previous entry.

Intentionality is a concept which can help expand our understanding of the role that conscious experience as such plays in the functioning and operation of the human mind. Once again, as with “qualia,” what we have in intentionality is a word that refers to a wide class of phenomena related by a very subtle aspect which they possess in common—an aspect which is, every bit as much as before, extremely difficult to pick out in ordinary language, even though once the concept becomes clear it should be easy to see that it refers to something with which all of us could not be more directly and immediately familiar.

Intentionality refers to a class of phenomena ranging from the fact that when you desire something, your desire is “for” that thing—to the fact that when you believe something, your belief is “in” the truth of an idea—to the fact that when you consider what to eat for breakfast, your thoughts are “about” ideas like food: there is something in terms like “of,” “for,” and “about” in phrases like these that emphasizes a certain way in which conscious experiences and intentions can be “representations of,” or “directed towards,” certain aspects of the world—or even towards currently nonexistent possible future states of it. This is going to turn out to be a kind of relationship that is not only difficult to explain in physicalist terms, but just as impossible for physicalism to account for in principle as we concluded in entry (IV) was the case for subjective experience.

_______ ~.::[༒]::.~ _______ 

Intentionality can be difficult to think about in part because, in the flow of actual conscious experiences as we know them, the elements of privacy/qualitativeness/subjectivity and intentionality are united simultaneously in one event. So, when you were thinking about private/qualitative/subjective experience in the previous post, you were probably also thinking about a state possessing intentionality without even realizing it. However, while the term intentionality picks out something that can be (and almost always is) a component of our qualitative, subjective experiences, not all qualitative subjective experiences possess intentionality. If we focus for a moment on a few such experiences in isolation, the added element that intentionality brings to the table might be a little easier to see.

Imagine reaching a deep state of meditation in a sensory deprivation tank with your eyes closed, ears plugged, and so on, in which you suppress all verbal or conceptual thought and focus wholly and single–mindedly on, say, a raw sensation of pleasure, without retaining any connected concept of where that sensation is coming from. Deep enough in this state, you would be having an experience composed thoroughly of “qualia,” but possessing no intentionality: nothing that is “reflective of” or “directed at” any state or aspect of the world outside of the experience itself. Similarly, you might become lost in deeply imagining something like an undifferentiated field of color, and the same could be said. Now, while these states would involve experiences “of” pleasure or color, we should be careful not to be misled by that language: there is no “of–ness” relationship in these examples—the important point is that these experiences would be neither directed at, nor reflective of, anything outside of the experience itself.

So, perhaps one helpful way to think about what intentionality adds to the picture is to imagine a physical duplicate of our world with all subjective experience stripped out, containing all and only physical phenomena: the organisms in this universe look like us, and act like us, but they don’t have any sort of inner experiences whatsoever. Now imagine that without adding such things as conscious beliefs about, desires for, intentions to, thoughts about or knowledge of anything, we add to this world only raw experiences like those just described. Just as comparing our world to an otherwise physical duplicate containing no conscious experiences at all helps us to obtain an intuitive mental grasp on what it is that conscious experience adds to the physical picture, so comparing it to a world possessing conscious experiences, but only those which fail to possess intentionality, may help us get a grasp on intentionality. Just as the “zombie world” cases help us to see that physical functions are not all that we need to explain about our world in order to understand consciousness, so an example like this should help us to see that it doesn’t stop at “qualia,” either.

Of course, we should realize that what we’ve just done is arrive again at the epiphenomenalist scenario from the opposite direction! The inhabitants of a world of physical duplicates containing only experiences like that of our “deep meditator” could perform physical movements and perhaps even speak, but they could never do so because of their experiences: the only way the world could be like this is if experiences were just something that came alongside physical processes ‘for the ride’ in a sort of metaphysical back seat—without the capacity of thought to be “about” its conceptual contents, which in turn are “directed at” states of the world, the inhabitants of this world would have no capacity to self–referentially acknowledge their subjective experiences as phenomena within the world. When the phenomenon of consciousness as a whole is construed by some materialists as characterizable as a process with a physical capacity to make “self–referential” reports of its own states (see Higher Order Theories of Consciousness), the notion poaches intuitive plausibility for itself by smuggling intentionality in without clearly acknowledging and admitting it as such. What we have begun to demonstrate is that these aspects of consciousness in reality ‘go together:’ despite their abstract conceptual differences, arguments that work for one generally work for the others at the same time.[2]

Just as the non–mechanistically describable phenomenon of private subjective experience itself appears to be fundamental to the world and a part of the explanation of some of our behaviors (demonstrated by the fact that we ever think and talk about subjective experience, in conceptual distinction to any accompanying physical processes, at all), a powerful case can be made that intentionality—this capacity of consciousness to “direct itself towards” concepts and ideas which are “reflective of” things beyond themselves—is simultaneously and equally fundamental (and is also part of what we exhibit when we “direct” our thoughts “towards” and then express ideas “about” those conscious experiences). Just as we have argued previously that materialist accounts must either call a subjective experience and a physical process “identical” (or do this in effect under the slightly different, and in my analysis simply more misleading, terminology of “emergence”) or else eliminate the phenomenon altogether—and argued that if conscious experience can neither be reduced to and explained in terms of non–conscious physical ingredients nor eliminated, then its existence simply refutes the materialist hypothesis (and something “non–physical” turns out not only to exist, but to be the one and only thing whose existence we know directly, and the one window through which we infer anything else we may believe at all, as a result of how the materialist account chooses to define “physical”)—so the same argument can be made every bit as strongly for intentionality.

When these arguments are applied to intentionality, however, they can be much harder to intuitively ‘see’ and take hold of at first, because anything that an intentional state does could hypothetically be performed by a robot which does it because it is mechanistically programmed to, and not because it possesses any capacity for intrinsic intentionality. It is thus easy to think about the function performed by an intentional state without thinking about the intentionality of that state: imagine that function taking place through a causal process which simulates the original while lacking its own intentionality, explain the function performed in causal–physical terms, and then simply fail to see that the intentionality of the actual original phenomenon has not thereby been accounted for. Yet, as we will see shortly, the only reason we can even imagine a physical process duplicating a procedure which simulates intentionality in the first place is our own intrinsic intentionality, which we can project (itself an intentionalistic process) into physical processes in which it does not actually reside apart from our own irreducibly conscious and intentionalistic projections.

One of the most famous thought experiments in philosophy of mind is the Chinese Room argument from John Searle. Of the argument, David L. Anderson writes, “It is probably safe to say that no argument in the philosophy of mind (or in any area dealing with the nature of thought and cognition) has generated the level of anger and the vitriolic attacks that the Chinese Room argument has.” Stevan Harnad, editor of the journal in which the argument was first published (in 1980), informs us that “The overwhelming majority still think that the Chinese Room Argument is dead wrong.” I am more than convinced that the overwhelming majority simply miss the point the argument is making, and are reacting against the false impression that it proves something (about the impossibility of a certain kind of artificial intelligence program ever succeeding) which it was never trying to show in the first place.[3] In my estimation, the thought experiment does no more than clarify the conceptual distinction between intentionality and function. In the words of Searle himself, the point is this: “syntax is insufficient for [because it is not identical with] semantics.” In other words, merely reordering a number of symbols according to a set of rules—performing a function—is not identical with consciously understanding the meaning of those symbols (or performing the function of manipulating those symbols because of a conscious understanding of their conceptual meaning and of the logical relationships between their meanings).

The thought experiment goes like this: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to [look and sound like someone who understands] Chinese, but he does not understand a word of Chinese.”
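
The point can be made concrete with a toy program. What follows is a minimal sketch of the room’s rule book, with rules that are my own invented stand–ins (Searle specifies no actual rules): every step the operator performs is pure shape–matching, and no step anywhere requires knowing what any symbol means.

```python
# A toy "Chinese Room": the rule book is nothing but a lookup table pairing
# input shapes with output shapes. The entries below are hypothetical
# illustrations, not anything from Searle's paper.
rule_book = {
    "你好吗？": "我很好，谢谢。",        # a greeting and a canned reply
    "巴黎是法国的首都吗？": "是的。",    # the operator matches shapes, not facts
}

def operator(incoming_symbols: str) -> str:
    """Follow the book's instruction for the incoming shapes and pass out
    the listed output shapes. Nothing here consults any meaning."""
    return rule_book.get(incoming_symbols, "对不起，我不明白。")

print(operator("你好吗？"))  # fluent-looking Chinese out; zero understanding inside
```

From the outside, the outputs may pass for conversation; on the inside, there is nothing but the table.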

I think most ordinary readers, on seeing this argument, will immediately “get” the point the thought experiment is illustrating and agree that a physical ability to manipulate a set of symbols is simply not the same identical thing as a conscious ability to understand their meaning. The arrant confusion of the critics of the argument is most illuminatingly revealed by one particularly ludicrous counter–argument—which also happens to be the most popular: ‘the man might not understand Chinese, but the room as a whole does.’ As Ray Kurzweil puts it, “the man is acting only as . . . a small part of a system. While the man may not see it, the understanding is distributed across the entire pattern of . . . the billions of notes ( . . . ).”[4]

Rather than actually addressing the point of conceptual distinction which the argument clarifies, these critics simply beg the question in a most obliviously vacuous way by defining “understanding” to mean something other than what Searle is getting at—as “the (mere) ability to manipulate symbols.” But the very point the argument illustrates is that the physical ability to manipulate symbols just isn’t simplistically identical to the conscious ability to understand their meaning! What these critics fail to understand is that this is an argument about first–person consciousness (and the intentionality which goes along with it): again returning to the words of Searle, the point is that “Computational models of conscious [experience and intentionality] are not sufficient by themselves for conscious [experience and intentionality].”

No one makes the counter–suggestion that “the room” or “the system” actually understands Chinese because they actually think “the room” or “the system” is a subject of internal, concept–representational subjective experiences (which is simply what Searle means here when he uses the word “understanding”). They do it because they flat–out ignore the distinction which Searle asks them to see, go on to define “understanding” to mean literally just the physical ability to manipulate symbols anyway, while ignoring intentionality entirely—and then substitute this sense of the word “understanding” into Searle’s sentences without even allowing him to tell them what he actually means by his own choice of words. Thus, they derive from Searle’s point a meaning that is tautological, but is not at all what Searle meant.

And seeing why it is obviously implausible, if not impossible, that “the room” should have any subjective inner conscious representation of the meaningful concepts expressed by Chinese symbols should help demonstrate precisely why the notion these critics appeal to in their defense—that we ourselves are just such “rooms,” composed of just such individually mechanical and non–representational parts as the pieces of paper held by the man in the Chinese room—is implausible, if not impossible, for just the same reason. We’re back in Leibniz’s mill! In other words, the premise that this is how reality works would seem to entail the prediction that conceptual ‘understanding’ as we know it to exist isn’t possible, and thus to be refuted by the incontrovertible fact that it does.

The capacity to grasp the meaning of symbols appears to be a basic property of consciousness itself (and a very clear example of intentionality—the capacity of experiential conscious states to somehow ‘reach out’ and direct themselves towards, or reflect within themselves, intrinsically, things or ideas which lie beyond themselves). And the argument is more than successful in showing that this capacity is simply not plainly identical to the physical ability to manipulate those symbols according to a rule–based pattern. However, the problem should now start to become clearer: as elaborated in the previous entry, physical mechanism is the only “ingredient” physicalist philosophy has from which to build any “pies” that exist. How could intentionality, on a physicalist picture, ever even ‘get there’ at all?

 _______ ~.::[༒]::.~ _______

I’ve quoted the materialist Daniel Dennett and the “near–enough” physicalist Jaegwon Kim previously to support the point that I am not inventing on my own the problem which subjective conscious experience poses for materialistic and physicalist accounts of the nature of the human mind: the materialists and physicalists themselves lay out for us what the options are, and they choose their own bullets to bite in trying to square the “two–dimensional” canvas of their physicalist premises with the “three–dimensional” conscious reality we all live, breathe, and know—all in order to avoid expanding that canvas to include any added dimensions. Here, I quote Alex Rosenberg, author of The Atheist’s Guide to Reality, as an example of an outright eliminativist about intentionality—later, I’ll discuss Dennett’s attempts to account for (a reduced form of) it in roughly “emergent” terms.

“[S]everal of the most fundamental things that ordinary experience teaches us about ourselves are completely illusory. Some of these illusions are useful for creatures like us, or at least they have been selected for by the environmental filters that our ancestors passed through. But other illusions have just been carried along piggyback on the locally adaptive traits that conferred increased fitness on our ancestors in the Pleistocene. [ . . . ]

[ . . . ]

Suppose someone asks you, “What is the capital of France?” Into consciousness comes the thought that Paris is the capital of France. Consciousness tells you in no uncertain terms what the content of your thought is, what your thought is about. It’s about the statement that Paris is the capital of France. That’s the thought you are thinking. It just can’t be denied. You can’t be wrong about the content of your thought. You may be wrong about whether Paris is really the capital of France.

The French assembly could have moved the capital to Bordeaux this morning (they did it one morning in June 1940). You might even be wrong about whether you are thinking about Paris, confusing it momentarily with London. What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all. The brain can’t have thoughts about Paris, or about France, or about capitals, or about anything else for that matter. When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.

Don’t misunderstand, no one denies that the brain receives, stores, and transmits information. But it can’t do these things in anything remotely like the way introspection tells us it does—by having thoughts about things. The way the brain deals with information is totally different from the way introspection tells us it does. Seeing why and understanding how the brain does the work that consciousness gets so wrong is the key to answering all the rest of the questions that keep us awake at night worrying over the mind, the self, the soul, the person.

We believe that Paris is the capital of France. So, somewhere in our brain is stored the proposition, the statement, the sentence, idea, notion, thought, or whatever, that Paris is the capital of France. It has to be inscribed, represented, recorded, registered, somehow encoded in neural connections, right? Somewhere in my brain there have to be dozens or hundreds or thousands or millions of neurons wired together to store the thought that Paris is the capital of France. Let’s call this wired-up network of neurons inside my head the “Paris neurons,” since they are about Paris, among other things. They are also about France, about being a capital city, and about the fact that Paris is the capital of France. But for simplicity’s sake let’s just focus on the fact that the thought is about Paris.

Now, here is the question we’ll try to answer: What makes the Paris neurons a set of neurons that is about Paris; what makes them refer to Paris, to denote, name, point to, pick out Paris? To make it really clear what question is being asked here, let’s lay it out with mind-numbing explicitness: I am thinking about Paris right now, and I am in Sydney, Australia. So there are some neurons located at latitude 33.87 degrees south and longitude 151.21 degrees east (Sydney’s coordinates), and they are about a city on the other side of the globe, located at latitude 48.50 degrees north and 2.20 degrees east (Paris’s coordinates).

Let’s put it even more plainly: Here in Sydney there is a chunk or a clump of organic matter—a bit of wet stuff, gray porridge, brain cells, neurons wired together inside my skull. And there is another much bigger chunk of stuff 10,533 miles, or 16,951 kilometers, away from the first chunk of matter. This second chunk of stuff includes the Eiffel Tower, the Arc de Triomphe, Notre Dame, the Louvre Museum, and all the streets, parks, buildings, sewers, and metros around them. The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris. How can the first clump—the Paris neurons in my brain—be about, denote, refer to, name, represent, or otherwise point to the second clump—the agglomeration of Paris? A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

[ . . . ] What is going on [ . . . ] is just input/output wiring [ . . . ]. The brain does everything without thinking about anything at all. And in case you still had any doubts there is Watson, the Jeopardy playing computer, storing as much information as we do, without any original intentionality.”

What Rosenberg suggests, in other words, is that (in order to coherently hold on to the premises of physicalism, one must believe that) it turns out that all of us actually are the man inside of Searle’s Chinese Room: we may think we understand Chinese, but science shows that all we are is neurons—and neurons, like all physical objects as modern physics understands them generally, can only execute blind procedures by the inert force of causality; they don’t do things for “reasons.” And no physical entity moving through space “represents” any other physical entity. So whatever it is you may “think” you are, “science” proves that you’re the man in the room who doesn’t understand “the meaning” of Chinese after all.

The most basic problem that arguments like these leave out is this: consciousness itself is a subjective phenomenon; subjectivity is precisely the form in which it has its existence. It exists as a subjective phenomenon. While our experiences can in certain kinds of cases mislead us about what is happening in the external world, these cases involve a distinction between an appearance within subjective experience and the nature of the underlying reality, beyond the range of our perception, which contributes that appearance into our awareness: obviously and undeniably, when it comes to understanding the external world, appearances can be misleading. However, when we deal with most of the internal phenomena of consciousness as such, it is the very existence of appearances, in the first place, with which we are concerned: “the appearance” very literally just is the reality we want to explain, because the fact of the existence of the phenomenon of consciousness is, in many ways, the fact of the existence of the phenomenon that things ever “appear” to anyone at all. Hence, when dealing with the properties of consciousness proper, no distinction between “appearance” and “reality” can even be made, and the very concept of “illusion” is therefore rendered meaningless.

All that it means to have had a conscious experience is just, literally, to “seem” to have had a conscious experience. “Well, it just seems like you have seemings” is not an answer to any question about the nature of “seemings” themselves. The seemings themselves really exist; that part simply can’t coherently be denied. All other examples of “illusions” outside the realm of consciousness (such as the fact that a stick placed in water will seem to be bent) precisely take consciousness for granted, because within consciousness itself is where the illusion resides. Consciousness simply can’t, even in principle, be “an illusion” in anything like the same way. And what goes for conscious experience also goes for intentionality: to “seem” to have a concept in your conscious awareness and “understand” the “meaning” of what it refers to and what it is “about” is just—literally—what it means to have a concept whose referentiality and meaning you understand. Calling that an “illusion” is, in fact, just literally meaningless. If Rosenberg were right, we wouldn’t be capable of reading and understanding the ‘meaning’ of the words he wrote; and he wouldn’t have been capable of writing them to express the idea in the first place. We would never have been able to grasp the conceptual distinction between understanding the (semantic) ‘meaning’ of words and ‘understanding’ how to manipulate them procedurally (syntax). And yet, anyone understanding the concepts that these intrinsically meaningless patterns of lines somehow refer to has all the evidence they could possibly ever need that—in fact—we do.

But Rosenberg’s claim is false for an even more interesting reason, and illustrating the falsity will get us even closer to an intuitive picture of the heart of what intentionality is and the role that it plays in conscious life (and, via consciousness, the world as a whole). The suggestion that “information” in the purely physical sense we see exhibited in computers could replace the role of intentionality—of thoughts “about” ideas with intrinsic propositional content—in understanding the nature of thought trades on a naïve intuition that all of us probably share about how it is, and what it means to say, that computers even “compute” to begin with. What turns out to be the case is that computers, strictly speaking, are not in fact “computing” at all—rather, it is solely because of our capacity for intentional acts of understanding and representing concepts that we can use the physical processes of a “computer” as a placeholder for our own intentionalistic ability to grasp, express, and understand both logic and “meaning.”

The Chinese Room thought experiment previously illustrated the point that, in Searle’s words, “syntax is insufficient for semantics.” In other words, the physical manipulation of symbols by procedural rules is insufficient for conscious understanding of any meaning that may be referred to by those symbols. Searle eventually went on to realize, however, that this thought experiment actually smuggled a form of intentionality into its own premises, and thus failed to show how deep the problem with intentionality actually goes. To quote Searle: “Computation does not name an intrinsic feature of reality but is observer-relative and this is because computation is defined in terms of symbol manipulation, but the notion of a ‘symbol’ is not a notion of physics or chemistry. Something is a symbol only if it is used, treated or regarded as a symbol. The Chinese room argument showed that semantics is not intrinsic to syntax. But ( . . . ) syntax is not intrinsic to physics. There are no purely physical properties that zeros and ones or symbols in general have that determine that they are symbols. Something is a symbol only relative to some observer, user or agent who assigns a symbolic interpretation to it. So the question, ‘Is consciousness a computer program?’ lacks a clear sense. If it asks, ‘Can you assign a computational interpretation to the processes which are characteristic of consciousness?’ the answer is: ‘you can assign a computational interpretation to anything.’ But if the question asks, ‘Is consciousness intrinsically computational?’ the answer is: nothing is intrinsically computational. Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago but I did not.”

To see this even more clearly, we’ll take the simple example of a calculator. Even if we don’t think the calculator ‘understands’ any concept of the meaning of ‘2 + 2’ when it calculates, we still ordinarily assume that it is, on some physical level, actually at least in some sense “calculating” ‘2 + 2.’ But this assumption is false—and what goes for calculators goes for absolutely every other form of physical calculation besides. Rosenberg’s attempt to bolster the claim that intentionality can be dispensed with by observing that “Watson, the Jeopardy playing computer, stor[es] as much information as we do, without any original intentionality” turns out to be meaningless, because the physical patterns that Watson actually stores turn out not to be actual “information”—not even in an unconscious sense—at all.

To see this, we’re going to have to scrape the symbols off of the buttons of the calculator, leaving all of them blank. Physically, after all, the shape ‘2’ is just literally nothing whatsoever other than a series of spilled dots of ink. And per Rosenberg, “How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light–years away?” The problem for the physical clumps of stuff that we use to represent concepts like numbers goes even deeper: the concept we use the symbol ‘2’ to refer to is not even a physical entity present anywhere as such, and yet it could be described as existing nearly anywhere. How could any physical shape created by dropped bits of stuff landing in certain places on a surface possibly be ‘about’ such an abstract concept as that? The physical pattern of the symbol is only said to “represent” that concept because we use it for the purpose of representing it. So after removing the symbols from the buttons on the calculator, let’s alter the programming of the calculator to ensure that each shape that was previously programmed to be displayed on the calculator’s screen becomes a random new shape instead.

Now, when someone presses a few of the blank buttons on the surface of the calculator, and a shape resembling something Japanese appears on the screen, what is the calculator—as a physical entity—doing? Is it calculating? The answer is a surprising, but now obvious, “no.” The “calculator” goes through a series of electrical state transitions, a process completely determined by the physical properties of electricity—and that’s it. Everything that is happening within this system is determined purely by the laws of physics—by the physical properties of electricity, which do not coincide even with any procedural laws of syntax. The reason this physical object is able to become a “calculator” is solely that we, as conscious observers possessing intentionality, can see that these physical properties would make it a convenient tool to use for this purpose[12]—and we in turn apply the symbols in the correct way, not only to allow us to calculate the meaning of the concepts entailed by the pattern ‘2 + 2’ (of which the calculator as a physical object has no awareness), but even to cause it to happen to be the case that the pattern of inputs and outputs will now coincide with any sort of logical rules of syntactical manipulation at all!

Suppose instead of scraping the physical patterns of dropped ink off the surface of the calculator’s buttons, we switch the ‘+’ symbol with the ‘3,’ the ‘÷’ symbol with the ‘9,’ and the ‘2’ symbol with the ‘4.’ Now, an input of “2 + 2” will cause the screen to display physical shapes in a pattern that looks like “434,” an input of “43÷” will cause the screen to display the shape “11,” and an input of “93” will display “ERR.” The “calculator,” as a physical object, isn’t doing a single thing differently than before. But now, simply because a few of the physical shapes on its surface have been altered, it isn’t even following rules of syntax at all. And what this should cause us to realize is that that is just because it never was following such rules in the first place. We understand rules of syntax. We observe the causal inputs and outputs of the physical states of a given way of wiring up the calculator. We devise a way to apply the symbols to the surface of the buttons of the calculator in such a way that the “calculator” will be “following syntactical procedures”—never mind calculating the concept “2 + 2” to derive the concept of “4”—purely incidentally, and not as a result of any physical property of the system.
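
A short sketch can make the button–swapping experiment concrete. Everything here is hypothetical: a real calculator is wired circuitry rather than Python, and the machine() function below merely stands in for its fixed electrical state transitions. The point it illustrates is that the machine’s behavior never changes, while what it “computes” depends entirely on the external table of painted labels.

```python
# A hypothetical stand-in for the calculator's fixed causal core: key codes
# in, display glyphs out. Nothing in this function changes in the experiment.
def machine(key_codes: str) -> str:
    try:
        return str(eval(key_codes.replace("÷", "/")))  # stands in for circuitry
    except Exception:
        return "ERR"

# The labeling is a separate, external fact about painted shapes on buttons.
honest_labels = {s: s for s in "0123456789+÷"}

# The essay's swap: '+'<->'3', '÷'<->'9', '2'<->'4', on the button faces only.
swapped_labels = dict(honest_labels)
for a, b in [("+", "3"), ("÷", "9"), ("2", "4")]:
    swapped_labels[a], swapped_labels[b] = swapped_labels[b], swapped_labels[a]

def press(buttons: str, labels: dict) -> str:
    """What a user does: press painted shapes. The label table, not the
    machine, determines which key codes those shapes actually trigger."""
    return machine("".join(labels[b] for b in buttons))

print(press("2+2", honest_labels))   # "4"   — looks like arithmetic
print(press("2+2", swapped_labels))  # "434" — same machine, no arithmetic
print(press("43÷", swapped_labels))  # "11"  — the keys actually hit are 2+9
print(press("93", swapped_labels))   # "ERR" — the keys actually hit are ÷+
```

Scramble the label table and the “syntax” evaporates while the physics carries on untouched, which is just to say that the rule–following was never in the machine to begin with.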

The implication of this is that unless we can accept that we are simply required to posit intentionality itself as a fundamental category of the world in its own right [10], attempts to explain human consciousness through analogies with “computers” are useless—because the very concept of computation itself presupposes the existence of a conscious observer with the capacity for intentionality in the first place, and simply does not exist as a physical phenomenon in the world without one—precisely like the “sound” the falling tree never makes in the forest if there isn’t a conscious subject of private, subjective experiences around whose mind can translate the physical vibrations of molecules resulting from the collapse of the tree into sound. So it is, likewise, with the forms of “computation” which are “performed” by “computers.”

You might say that when it comes to the relationship between consciousness and computation, it’s the thought that counts. 

Unfathomably useful tools though computers may be, they can only be said to process “information” of any kind whatsoever—even unconsciously—because we are around to attribute any kind of “meaning” whatsoever (even that of being a “symbol” with a place in some logical system of syntax or grammar) to the intrinsically meaningless physical patterns that physical causality allows us to display on their outputs, and which we apply to their inputs (whether in the form of specific conceptual content, or even solely the basic fact that a given pattern represents a “symbol” of any kind that must be manipulated according to logical rules of any sort at all).[6] (All of these points go for verbal language, as well.[7]) To quote William Hasker in Metaphysics: Constructing a World View, “Computers function as they do because they have been constructed by human beings endowed with rational insight. And the results of their computations are accepted because they are evaluated by rational human beings as conforming to rational norms. A computer, in other words, is merely an extension of the rationality of its designers and users; it is no more an independent source of rational thought than a television set is an independent source of news and entertainment.”

But this goes not merely for “rationality” in the sense that requires “understanding” of the concepts referred to by symbols; it goes every bit as much for even the syntactical manipulation of the symbols themselves, because even symbol manipulation exists only as a concept which minds impute over the physical world—and not as a physical state of the world itself. What, physically, does “information” consist of in the first place? The very idea of “information” is an intrinsically intentionalistic concept that can only be made sense of in terms of a conscious mind who is “in–formed” by whatever process is in question—who is able to have intentionalistic concepts “formed” “in” his mind through understanding the relationship between one sequence of facts and another.

The shadows cast by a sundial allow me to determine the time of day, but the shadow itself does not represent any actual kind of “information” except in the most trivial sense that it happens to be the kind of thing a mind could conveniently use as a marker for determining some other physical state. While the relationship between the shadow and the sun which the shadow allows us to gather “information about” is, superficially, a causal and physical one, the key point is that every so–called “informational” state is going to have a different causal description—the physical relationship between time and the rings that accumulate within tree trunks is going to be completely different from the physical relationship between the time of day and the shadows cast over a sundial as a result of the movement of the sun, and so practically by definition there is simply no way to physically specify what it is that these examples hold in common. The only common feature is that minds can come to understand (i.e., with intentionality) that relationship, whatever its particular details may be.
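
The same observer–relativity can be seen at the lowest level of what computers “store.” Here is a minimal, hypothetical sketch: one and the same physical bit pattern “is” a piece of text, an integer, or a fraction, depending entirely on the decoding convention a mind (or a mind’s conventions, frozen into software) brings to it; nothing in the bytes themselves selects a reading.

```python
# One fixed physical pattern: four bytes. Which "information" it carries is
# not a further physical fact about it; it depends on the decoding convention.
import struct

payload = bytes([0x42, 0x48, 0x45, 0x59])

print(payload.decode("ascii"))          # "BHEY"      — read as text
print(struct.unpack(">I", payload)[0])  # 1112032601  — read as an integer
print(struct.unpack(">f", payload)[0])  # ≈ 50.07     — read as a float
```

The “information” was never in the clump of matter; it resides in the mapping a reader imposes upon it.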

So the concepts of physical “information” or “information processing” hold zero hope for explaining anything about the conscious human exercise of intelligence as we know it, because at the physical level of reality, none of these things even exist; and the sole sense in which they do exist is one that can only be made sense of by presupposing the existence of conscious minds with the capacity to “understand” and think in terms of concepts in the first place. Calculators can only be said to “calculate” at all because conscious minds understand not only the concepts referred to by symbols such as “2,” “+,” and “4,” but even rules of syntactical symbol manipulation, in such a way as to allow us to slap these physically meaningless patterns onto the right inputs and outputs of the calculator so that, in following its literally meaningless series of physical state transitions, “rules of mathematics” will happen to be “followed” by the physical operations of the calculator—completely by coincidence, and not as a result of anything it is doing as a physical object.

This can be seen most clearly by realizing, again, that switching a few of the symbols around on the calculator’s surface will immediately cause it to be no longer even following rules of syntax, despite nothing about the internal physical processes of the calculator changing whatsoever. The only thing that makes a “calculator” physically useful is that its physical properties allow given inputs and outputs to be consistently linked, and we can see that this makes it convenient to use for the purpose. But then, for any “calculation” in any sense whatsoever to take place at all requires us to map symbols (which only minds possessing intentionality even know are “symbols,” because the very concept of “symbol” refers not to any physical fact, but to how those facts are interpreted and used by minds) to those inputs and outputs in such a way that logical rules of any sort are being “followed” by the calculator at all. And what goes for calculators goes for any other form of physical “computation” whatsoever: it exists, in the only sense in which it has “existence,” solely through the intentionalistic interpretations of a conscious mind.

_______ ~.::[༒]::.~ _______

Previously, we took an argument from Daniel Dennett, who said (paraphrasing): “If [materialism—reality at its root is all and only blind and mindless causal process] is true, and [the emergence of subjective experience from unconscious physical process doesn’t make sense], then [subjective experience can’t exist]—and [materialism is true], therefore [subjective experience does not exist].” We kept the skeleton of the logic (If P, then Q. P, therefore Q), but established that since subjective experience can’t be dispensed with, if emergentism can’t work then it is the materialist premise that has to go (If P, then Q. Not–Q, therefore Not–P). We should do the same thing here. Rosenberg, in essence, tells us: “If [materialism—reality at its root is all and only blind and mindless causal process] is true, and [the emergence of intentionality from merely causal, unintentional process can’t work], then [intentionality can’t exist].” Well, the intentionality that we know as an intrinsic part of our experience can’t be eliminated any more than experience itself can. Where the eliminative materialists think they’re producing modus ponens arguments justifying conclusions that eliminate ordinarily understood aspects of human consciousness and self, they’re actually creating perfect modus tollens arguments against the truth of their own starting premises. Rosenberg concludes “P [materialism is true], therefore Q [intentionality cannot exist].” What we are actually required by the dictates of truth to conclude, if the skeleton of this logic is correct, is “Not–Q [intentionality exists], therefore Not–P [materialism is false].”
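
To lay the two argument forms side by side in standard propositional notation (with P for “materialism is true” and Q for “intentionality cannot exist”):

\[
\begin{array}{ll}
\text{Shared premise:} & P \rightarrow Q \\
\text{Eliminativist (modus ponens):} & P,\; P \rightarrow Q \;\therefore\; Q \\
\text{This entry (modus tollens):} & \neg Q,\; P \rightarrow Q \;\therefore\; \neg P
\end{array}
\]

Both inferences are formally valid; everything turns on which premise is better attested, P or ¬Q, and ¬Q (that our thoughts are ever “about” anything at all) is the one premise that no act of arguing against it can coherently deny.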

Next, we’ll begin to look at an attempt to explain intentionality as something which “emerges” from processes which don’t themselves possess intentionality, to highlight the inescapable flaws that any such account will in principle run into. But first, I want to detour through a discussion of the general point which that example will serve to support.

It should have become relatively clear, in light of the arguments explained in the previous entry, that for all the verbal and linguistic posturing there really aren’t, in the end, multiple ways to be a materialist after all—all the different ways of trying to categorize it just end up amounting to the same ultimate claim. The eliminative materialist says, “X doesn’t exist. Only Y’s exist.” In contrast to that, what the “emergent” materialist thinks he can get away with saying, in order to retain his materialism while avoiding eliminativism, is: “Sure, X exists, but X [which looks one way, and appears to have a certain set of x–traits] is really a Y [which looks totally different, and actually only has y–traits].”

My conclusion, to state it frankly, is that this is genuinely little more than an outright expression of cognitive dissonance in response to (1) wanting desperately to hold on to the materialist or physicalist premises, and (2) realizing in eliminativism how absurd their very clear consequences actually are. To help see this, let’s replace these claims with examples. The eliminativist says: “Flour doesn’t exist. Only sugar exists.” The emergentist, however, realizing that this is plainly absurd, tries to hold on to reductionism without eliminativism by saying: “Sure, ‘flour’ exists, but ‘flour’ is really sugar.”

How is this claim actually any different? Despite the superficial brush of appearances, it isn’t! The truth is that the reality that both of these statements describe turns out to be exactly the same: sugar is the only thing that actually exists. The only thing we actually have here is a minor dispute about language—about the most appropriate way to speak about the situation in which sugar is the only thing that exists, and about how to use the word “flour” now that we believe the concept we previously used it to refer to doesn’t exist. But in either scenario, sugar is quite plainly and simply the only ingredient that we’re postulating exists; and if it turns out that consciousness has the properties (and in fact, even merely the “appearance” of the properties) of a “chicken pot pie” [see entry (III)], then this should rightly cast extreme doubt on (if not count as an absolute disproof of) the claim that the “sugar” of blind, causally brute physical forces is the only ingredient stocked in reality’s pantry.

To spell this out with another substitution: where the eliminativist says, “Unicorns don’t exist. Only horses exist,” the emergentist tries to say, “Sure, ‘unicorns’ exist, but ‘unicorns’ are really only horses.” It is abundantly clear now that both claims eliminate the substance of the concept referred to by the word “unicorn.” The two camps on this question would merely be in disagreement over what to do with the empty skeleton container of a word that is left over after the elimination. The emergentist thinks he can get by with holding on to his materialist premises while refusing to perform the absurd eliminations which the eliminativist explains follow from materialism (e.g., that nobody ever has any conscious experiences, or thinks thoughts “about” anything whatsoever), and he tries to create the optical illusion that this verbal mirage somehow changes something about the substance of the situation, when—in reality—it doesn’t at all, and thus there is simply no way in principle it can alter any ultimate analysis of where the reality of the situation stands. And this is why all accounts of “emergence” are just inevitably going to turn out to be inherently self–contradictory—the only work that remains is the various levels of tedious technical work required to expose the contradictions in new varieties of the attempt, depending on how technical and tedious a particular effort to conceal them is.[8]

For it is for systematic reasons that the contradictions will inevitably appear: it simply turns out, in the end, that the core problem at the central root of the basic physicalist picture of the world is the lack of any logical identity between subjective conscious experience and blind, unconscious physical ingredients; or between the phenomenon of intentionality and unintentional causal process [5]. At some point, we should begin treating such claims as the philosophical equivalent of an engineering student claiming to have discovered how to create a perpetual motion machine: it isn’t just “something we’ve not yet proven possible.” It’s something we’ve discovered is impossible, because we’ve discovered the systematic reasons which prevent it from being possible even in principle. You can’t create a machine that produces work without the input of energy, and you can’t derive conscious experience from purely unconscious processes or intentionality from purely causal processes. The eliminativist sees this clearly, and actively decides to bite the bullet and deny that we “understand” “ideas” or “concepts,” that we have any thoughts which are “about” (or make reference to) anything at all, or even that we have any kind of subjective experiences. Yet every physicalist who tries to reject eliminativism (as rightly they should) is attempting an inevitably doomed project—trying to have their cake and eat it too. And the only answer is simply to abandon the physicalist premise itself and admit conscious experience and intentionality themselves as fundamental, “bedrock” aspects of the nature of reality in their own right.

_______ ~.::[༒]::.~ _______

As a case example of attempts to argue for “reductionism” about intentionality, we’ll return to the case of Daniel Dennett (who in this case, intriguingly enough, turns out to be making at least a shallow attempt to avoid eliminativism about intentionality). In the paper Evolution, Error, and Intentionality, Dennett tries to make the argument that we can ‘build our way up’ to the intentionality of human consciousness through gradated steps beginning from the “proto–intentionality” of evolution.

Throughout, his account is only able to appear to get off the ground at all because he projects intentionality into places where it only “exists” as a result of human intentionality “placing” it there. In the process he must, as always, try to have the cake of conscious experience and intentionality and eat it through reductionist “explanations,” too (whether he wants to explicitly admit it or not, the premises will have to slip it back in somewhere in order not to devolve into an absurdity even more blatantly concentrated than the one currently dispersed throughout a more subtle series of equivocations). In one place, he summarizes his position like so: “[W]e [have] no guaranteed privileged access to the deeper facts that fix the meanings of our thoughts, [because] there are no such deeper facts [and therefore—placing Dennett precisely in agreement with Rosenberg—no real meanings of our thoughts at all].”

Yet, in another, he begins: “A shopping list in the head has no more intrinsic intentionality [e.g., actual meaning] than a shopping list on a piece of paper.” But as he goes on, notice the attempt to point back in the refrigerator at the cake he suddenly pretends not to have just pulled out and eaten: “What the items on the list mean (if anything) is fixed by the role they play in the larger scheme of purposes [there’s intentionality popping its head in again!]. We may call our own intentionality real, but we must recognize that it is derived from the intentionality of natural selection, which is just as real—but just less easily discerned because of the vast difference in time scale and size. So if there is to be any original intentionality—original just in the sense of being derived from no other, ulterior source—the intentionality of natural selection deserves the honor. What is particularly satisfying about this is that we end the threatened regress of derivation with something of the right metaphysical sort: a blind and unrepresenting source [my emphasis] of our own sightful and insightful powers of representation. [ . . . ] While it can never be stressed enough that natural selection operates with no foresight and no purpose, we should not lose sight of the fact that the process of natural selection has proven itself to be exquisitely sensitive to rationales, making myriads of discriminating “choices,” and “recognizing” and “appreciating” many subtle relationships. To put it even more provocatively, when natural selection selects, it can “choose” a particular design for one reason rather than another ( . . . ).”

His argument is that what intentionality really consists of is just the ability to hold what he calls “the intentional stance” towards an object or pattern, in which we interpret phenomena “as if” they do things because of ‘beliefs,’ ‘reasons,’ ‘desires’ and so on—a kind of fiction which is justified, in Dennett’s view, by its empirical success in making accurate predictions: “Certainly we can describe all processes of natural selection without appeal to such intentional language, but at enormous cost of cumbersomeness, lack of generality, and unwanted detail. We would miss the pattern that was there, the pattern that permits prediction . . . ” What this suggestion gets backwards is that the very notion of “holding an interpretive stance” towards something itself presupposes intentionality, whether we’re talking about the ‘intentional stance,’ the ‘physical stance,’ or any other: even if we try to speak, as Dennett does, of “Mother Nature” as a fiction, the development of this fiction as a representational concept, and of the language used to represent and communicate it to other intentional agents, just is an exercise of intentionality. The account simply presupposes exactly what it is meant to explain—as any attempt must.
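
To see how thoroughly the interpretive vocabulary lives in the interpreter rather than in the mechanism, consider a minimal sketch (my own illustration, not anything in Dennett’s paper; the thermostat is a stock example of the intentional stance, but the code and names here are invented):

```python
# A machine we can describe "intentionally" -- as having a 'desire' for warmth
# and 'beliefs' about the room -- though nothing in it is anything but a
# comparison of numbers. Renaming desired_temp to x1 changes nothing about
# what the machine does; the intentional reading is supplied entirely by us.

class Thermostat:
    def __init__(self, desired_temp):
        self.desired_temp = desired_temp  # a 'desire' only in the reader's eyes

    def act(self, sensed_temp):
        # Read intentionally: it 'believes' the room is too cold and 'wants'
        # it warmer. Read physically: one number is smaller than another.
        return "heat on" if sensed_temp < self.desired_temp else "heat off"

print(Thermostat(21).act(18))  # -> heat on
```

The predictive convenience Dennett points to is real enough; what the sketch keeps in view is that calling the comparison a ‘desire’ is an act of interpretation performed by an intentional mind, not a property of the mechanism.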

This is similar, in ways I will explore, to Dennett’s redefining of the very phenomenon of subjective experience itself as nothing other than “a logical construct out of people’s judgments that they are having [experiences] . . . [where] such judgings [are] constitutive acts which, in effect, bring the [so–called] [experience] itself into existence” in the paper Quining Qualia, discussed in the last entry: if this definition of conscious experience seems plausible to anyone, it can only be because they are subliminally taking for granted that people communicate judgments about their experiences because they are having experiences. There is a relationship between the two, but it is most definitely not that the judgment creates the “experience” (where “experience” is just defined as statements about experience [11]).

In Quining Qualia, Dennett proposed this ludicrous suggestion as a “solution” to the “problem” of two people, Chase and Sanborn, who begin to dislike the coffee they’ve been drinking every morning for years—one of whom thinks the coffee itself has grown worse over time, the other of whom thinks his own tastes have simply changed. What Dennett suggests this thought experiment proves is that one of them has to be mistaken about what they are experiencing—therefore, he reasons, there is no such thing as subjective experiences which we know so intimately that experiment cannot refute them, and therefore we are best off concluding that no such thing as experience exists at all, and all that actually needs to be explained is the fact that people ‘talk’ about [11] experiences.

So he concludes: “the infallibilist line on qualia treats them as properties of one’s experience one cannot in principle misdiscover, and this is a mysterious doctrine (at least as mysterious as papal infallibility) unless we . . . treat qualia as logical constructs [of] judgments . . . ” Yet what Dennett has utterly misunderstood from the very beginning of his discussion is that the experience his coffee drinkers are having is that they no longer enjoy the coffee. His example does nothing to undermine the fact that they cannot in principle be wrong about this. What Dennett has confused is the unbelievably basic distinction between an experience and the experiencing subject’s further rational inferences about why [11] they are having it.

Either of the coffee drinkers might be—indeed, one will have to be—mistaken about which fact explains why they are having the experience they are having. But that poses no challenge whatsoever to the idea that they cannot be mistaken about the experience of no longer enjoying the coffee itself. Indeed, we all expect that if Chase and Sanborn devise the empirical tests Dennett mentions to refute their competing hypotheses (the coffee has changed, their tastes have changed, or something else), they will find one of these answers, precisely because something will have to explain why they are having the experience of no longer enjoying the coffee. That experience is the one thing they cannot, even in principle, be mistaken about.

And so he weaves a similar tale here regarding the intentionality of conceptual thought and language: “Consider then the members of a Putnamian tribe who have a word, “glug,” let us say, for the invisible, explosive gas they encounter in their marshes now and then. When we confront them with some acetylene, and they call it glug, are they making a mistake or not? All the gaseous hydrocarbon they have ever heretofore encountered, we can suppose, was methane . . .  Of course once we educate them, they will have to come to mean one thing or the other by “glug,” but in advance of these rather sweeping changes in their cognitive states, will there already be a fact about whether they believe the proposition that there is methane present or the proposition that there is gaseous hydrocarbon present when they express themselves by saying “Glug!”?”

With yet another “problem” in hand, Dennett is ready with yet another “solution”: just as before, we simply have to do away with the assumption that their statement actually refers to any concept at all: “It is not just that I can’t tell, and they can’t tell; [it’s that] there is nothing to tell.” Now, as such examples often do, this one tries to get us to eliminate something we previously thought we knew about first–person consciousness by cutting our internal first–person awareness of consciousness out of the equation from the start: it has us look at the behavior of a third party from the outside, in the third person, where by definition that person’s external behavior (and not any internal experiences which might drive or explain it) is all we are permitted to consider for the sake of the example. So, let’s re–situate the analogy back into the first–person perspective and see what happens.

Suppose you were born a male in an all–male society, where you’ve been taught to refer to everyone you see as a “man.” With this scenario clearly in mind, suppose a female missionary visits your tribe—and when you refer to her as a “man,” she corrects you and tells you that she is, actually, a “woman.” Now, it is clear that one of three scenarios has to take place. In the first, you will respond with something roughly like: “No, you have a face, two legs, and two arms, you walk and talk—clearly you are a man,” to which she might respond, “No, look: I don’t have a penis.” You would be most naturally inclined to say something like: “Wow. You’re a strange kind of man. I’ve never known a man who didn’t have a penis before,” to which she would respond: “That’s not how we use the word ‘man.’ You use the word ‘man’ like we use the word ‘person.’ But when we use the word ‘man,’ we mean to refer (roughly) to someone with the traits which allow them to play the ‘male’ role in reproduction, in distinction from ‘women.’” To which you will respond: “I see. The word ‘man’ means something different to you than it means to us. I’ll change the way I communicate to reflect that [or else you can change yours to reflect what I have now told you that I mean].”

Otherwise, in the second scenario, you will respond with something like: “Bullshit. Let me see your penis”—whereupon, let us assume, she complies—at which point you will say: “Oh, my God. That isn’t a penis. You really aren’t a man after all!” Alternatively, in the third, you might not have consciously meant anything like either of these. You might realize that the word “man” is just something you have always said out of sheer habit during conversation, without intending to refer to any clear concept by it one way or another. But in any case, you were absolutely using the word in a specific way—even if, in a scenario like the third, that specific way was not “to refer to a specific concept.” What Dennett turns out to really be saying here is that we must do away with the notion that statements mean anything, or are used in any determinate ways by people ‘on the inside’ at all, simply because those meanings can’t be determined conclusively beforehand ‘from the outside’: “there is no ground to be discovered in their past behavior or current[ly observable] dispositions [emphasis mine] that would license a description of their glug-state as methane–detection rather than the more inclusive gaseous–hydrocarbon–detection.”

To return to our own example: it is true that there is nothing in your past behavior (or currently observable dispositions) that makes it clear, according to any externally quantifiable third–person set of criteria, whether you are using the word “man” to mean “person” or “biological male,” prior to the interaction that is bound to ensue after the woman denies that she is a “man.” Thus an outside observer (the role Dennett has us take in his version of the thought experiment) isn’t capable of knowing what you meant by the word prior to that interaction. However, just because your disposition was not observable prior to the interaction, it does not follow that you did not at that point have a disposition. In fact, it is precisely during that ensuing interaction that the disposition which in fact existed previously will become observable, and the existence of that disposition is precisely why the interaction will go one way or the other. And that disposition just is your intention in using the word.
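
A loose analogy may make the point concrete (the sketch below is my own illustration, not Dennett’s or the original example’s, and it is only an analogy—the claim in the text concerns intentions, not mechanisms): a disposition can be fully determinate before any past behavior has revealed it.

```python
# Two hypothetical "usages" of the word 'man', written long before any woman
# is encountered. On every input actually seen so far (all males), they have
# produced identical outputs, so no record of past behavior distinguishes
# them -- yet each already has a determinate disposition for the new case.

def usage_a(is_person, is_male):
    # 'man' used to mean "person": the second trait is never consulted
    return "man" if is_person else None

def usage_b(is_person, is_male):
    # 'man' used to mean "biological male"
    if not is_person:
        return None
    return "man" if is_male else "woman"

past_cases = [(True, True)] * 5  # everyone encountered so far is male
assert all(usage_a(*c) == usage_b(*c) for c in past_cases)

# The missionary arrives: the previously unobservable dispositions diverge.
print(usage_a(True, False), usage_b(True, False))  # -> man woman
```

If even a few lines of code can have determinate but as-yet-unobserved dispositions, there is nothing mysterious in the claim that a speaker’s intention existed before the interaction that made it observable.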

There is no justification in this supposed “problem” for doing away with the notion that people have ideas and intentions in mind, any more than there was justification in the supposed “problem” of the coffee drinkers in Quining Qualia for doing away with the notion that people have subjective first–person experiences to which they alone have first–person access and about which, as such, they cannot be mistaken (even if they can draw mistaken inferences about why those experiences are happening). If externally quantifiable criteria can’t suffice to account for intentionality, the only thing this actually proves is that intentionality is something distinct from what can be externally quantified.

However, the primary datum, which we know immediately from directly observing the intrinsic contents of our own first–person experiences first–hand, is that our conscious thought operates by picking out and representing inherently intentionalistic concepts. This is a fact Dennett must find some means of denying in order to dilute “intentionality” into something the tools of an evolutionary approach can handle, since he recognizes that those tools would be incapable of accounting for it otherwise. After discussing the example of a machine designed for the purpose of detecting U.S. quarters, which eventually ends up by mistake in Panama, where it turns out to be equally capable of detecting the Panamanian quarter–balboa, he asks whether the machine is ‘mistakenly’ identifying quarters all the while, or rather at some point begins to ‘accurately’ identify quarter–balboas. Suddenly ignoring the fact, which he previously recognized [9], that this simply depends on the conscious intentions of the people who are using the machine for one purpose or the other at any given point in time, he concludes that “[since] the two–bitser is just an artifact[, i]t has no intrinsic, original intentionality, so there is no “deeper” fact of the matter we might try to uncover. This is just a pragmatic matter of how best to talk, when talking metaphorically.”

But since “We are artifacts [of natural selection]. . . survival machines for genes that cannot act . . . in their own interests,” it follows that “the same pragmatic rules of interpretation [must apply] to the human case.” And “if we are such artifacts, not only have we no guaranteed privileged access to the deeper facts that fix the meanings of our thoughts, but there are no such deeper facts [and therefore, it would follow, no meanings to our thoughts at all!]” But Dennett, in extending this analogy, misses the very point (which, again, he previously conceded [9]): the only fact that determines which state the machine is “in” is precisely the deliberate purposes for which human beings are using it in the first place.

The argument gets everything precisely backwards: it assumes the absoluteness of the very unproven physicalist premises that the observed aspects of consciousness call into question, draws conclusions that eliminate plainly and directly observed data about how consciousness operates (e.g., by understanding ideas through thoughts which are intrinsically “about” the concepts to which they refer), and then blithely sacrifices those fundamental aspects of thought, consciousness, experience, and the self to the premises with hardly a moment’s pause—“A shopping list in the head has no more intrinsic intentionality [e.g., actual meaning] than a shopping list on a piece of paper.” Yet it does all this, no less, while appealing to things we can only describe as possessing “intentionality” because human beings project their intentionality into them when using them for conscious purposes—to try to “explain” human intentionality and purpose.

What Dennett has created here turns out to be an argument whose conclusion refutes its own premises. There is no “deeper fact” about whether the quarter–detector in Panama is “really” detecting quarters or quarter–balboas—because this depends on the purpose for which it is being intentionally adopted by intention–driven, purpose–adopting human minds [9]. Therefore, since human beings are artifacts as well (resulting from a process that doesn’t even craft them as it does for literal “reasons”—a fact Dennett equivocates around by using the metaphorical language that the mindless process of evolution “selects . . . [a] design . . . [for] reason[s],” rather than the more accurate literal language that traits end up proliferating as a result of given causes), human beings do not have intentional concepts or purposes, either. But then it would follow that no one was ever adopting the quarter–detector for either the purpose of detecting quarters or the purpose of detecting quarter–balboas—a claim that even Dennett himself seems not to believe! And we circle right back around, yet again, to the very phenomenon that Dennett is pretending any of this somehow explains.
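
A minimal sketch may make the structure of the point plain (the mechanism below is my own invented stand-in for the “two–bitser”; Dennett’s paper specifies no internals, and the tolerances here are illustrative):

```python
# The device discriminates coins purely by physical parameters. The U.S.
# quarter and the Panamanian quarter-balboa were minted to (nearly) the same
# specs, which is why the very same mechanism "works" in Panama unmodified.

def accepts(diameter_mm, mass_g):
    return abs(diameter_mm - 24.26) < 0.2 and abs(mass_g - 5.67) < 0.1

print(accepts(24.26, 5.67))  # -> True

# But True *of what*? Nothing in the function fixes whether an acceptance
# "counts as" detecting a quarter or a quarter-balboa. That is settled only
# by the purpose of whoever deploys the machine -- which is exactly the
# intentionality Dennett's reduction cannot do without.
purpose = "detect quarter-balboas"  # assigned by the Panamanian franchise-holder
```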

Thus, for Dennett, conscious experience itself just is, literally, nothing more than the fact that people talk “about” their experiences—never mind that they do so because there are experiences for them to talk about. And intentionality itself just is, literally, nothing more than the fact that people can “hold intentional stances” towards things and interpret them in ways that help them predict the future—never mind that they are able to do this in the first place only because of the very capacity of human minds to interpret and represent meaning, which is exactly the phenomenon of intentionality Dennett thinks he is reducing. But notice that if Dennett is right about intentionality, then he pulls the rug out from under even his own reduction of conscious experience to statements “about” fictional conscious experiences, because even the appearance that these statements are “about” experiences or anything else is merely a fiction, too! As is the notion that any of this even “appears” to be any sort of way at all—for there are no appearances; you simply speak about the world as if there are. But—again—you don’t ever actually speak or think determinately “about” anything, either. Yet it can’t even be quite right to describe any of this as a “fiction,” because fictions are representational narratives, and the brain—being a physical object—doesn’t operate through semantic, propositional narratives or by “representing” anything. Despite the illusion that it does—which doesn’t exist either, because all that actually exists are your statements “about” experiences. But not only do the experiences not exist; the statements aren’t even “about” anything. How is it even possible to get things this wrong?!

The ultimate motivation for all this absurdity is, of course, that “either you must abandon meaning rationalism—the idea that you . . . not only hav[e] access, but . . . privileged access to your meanings—or you must abandon the naturalism that insists that you are, after all, just a [blind physical] product of [blind, physical] natural selection.” And Dennett sides with abandoning the idea that we ever know what we “mean” by anything (while delivering this verdict in a paper which tells us what he means by it). But we have far stronger evidence for the fact that thought expresses meanings which our conscious minds are capable of understanding than we do for the claim that “[we] are, after all, just [blind physical] product[s] of [blind physical processes of] selection [acting on blind physical entities].”

The case of Dennett proves that the non–physicalist is not inventing these inherent problems and contradictions. Even those committed to physicalism end up seeing them—and what is more, even when they see the rank absurdity their premises entail, many continue stubbornly defending those premises against every notion that makes conscious thought and experience comprehensible at all, well past the point where they can press on only by contradicting themselves repeatedly at the deepest and most fundamental levels.

The entire article containing Dennett’s argument—the use of language representing thoughts that are “about” concepts and the abstract logical relationships between them—is itself nothing other than an exercise originating in the “objective,” “real” intentionality present in Daniel Dennett’s conscious mind the entire time. The physicalist is simply left with no option but to deny subjective experience by means of concepts he derived from his own conscious experience, and to deny the conceptual and representational nature of thought through thoughts “about” the logical consequences of those concepts, expressed in representational symbols [7]. This is the most overwhelming refutation of physicalism there could possibly be.

_______ ~.::[༒]::.~ _______ 

[1] Because he recognizes that explanations of material structure and inertly caused motion can’t in principle explain private, subjective experience (and explanations of form, structure and motion are by definition the only tools the physicalist account has)—points which the previous entry, “The Case of the Lunatic Fish,” labored to make in detail, with reference to physicalists who themselves express difficulty here; Jaegwon Kim, for example, openly defends the position that subjective experience just ‘dangles’ awkwardly off the physical processes of the world.

[2] Compare the Knowledge Argument, for example, to Laurence BonJour’s Martian.

[3] Even if Searle is perfectly correct, his point that “computational models for consciousness stand to consciousness in the same way the computational model of anything stands to the phenomena being modeled. Nobody supposes that a computational model of rainstorms in London will leave us all wet” poses no necessary challenge to the possibility that we could eventually create a fully working model of human intelligence.

[4] Kurzweil follows this up with the elaboration that: “I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and interneuronal connections. Searle appears not to understand the significance of ( . . . ) emergent properties.” This is the assumption from which the “systems reply” is supposed to proceed. But the hidden linking premise between “understand English” and “none of my neurons do,” which is supposed to bridge this claim in defense of the systems reply, is that “My understanding [in the specifically intentionalistic rather than functional sense] consists of nothing other than physical neurons which operate solely by causal function, without any intrinsic capacity for intentionality.” Unlike the questions I admit to having technically had to beg at points throughout my discussion, due to the nature of the cases they involved, Kurzweil begs the question in favor of a point that is a strong empirical claim about the world, and one that is not accessible through anyone’s introspection (not even Kurzweil’s). Nor is it in any sense something which has been proven empirically.

Furthermore, it is exactly the assumption that this argument’s reasoning provides reasons to reject. It should be fair to say that there are more and less offensive ways to ‘beg the question.’ Bluntly ignoring an argument that gives reasons to think a certain conclusion is false, only to flatly reply that “those reasons are incorrect, because the conclusion is true” while giving no other reason to reject them (not even “because I know by immediate awareness that the conclusion is true, and I’m sorry that that leaves me in the inescapable position of having to technically beg the question as the only way I can try to ‘point towards’ what I know to be true”), is surely among the most offensive and least defensible, but it is what any critic who tries to take Kurzweil’s approach would, in principle, have no choice but to resort to.

If a critic actually wanted to take this approach in anything resembling a reasonable way, they would admit the unbearably obvious conceptual distinction, which the thought experiment serves to draw out, between a physical ability to execute a procedure and a conscious ability to intentionalistically ‘understand,’ and then suggest how, to our surprise, the gap from physical procedure to intentionality might be bridged with the tools offered by physical procedure alone. The whole point of this entry, however, is to explain the systematic reasons why this is impossible and incoherent in principle. Kurzweil fails to demonstrate any actual understanding of the “significance of ( . . . ) emergent properties” himself; he merely uses the term as a means of hand–waving—with arrogance.

[5] A particularly tedious technical attempt to get around this is to appeal to the Kripkean notion of “a posteriori necessity.” Anyone who thinks Kripke’s idea is relevant here has simply been, in the words of this article, “misled by language”: “[I]t is important not to confuse conceptual analysis with metaphysics. [A common interpretation] is that Kripke made a metaphysical discovery: that he discovered really interesting modal metaphysical facts (e.g. water is necessarily H2O) that we come to grasp through empirical discovery (i.e. water’s molecular structure).” Thus, the defender of physicalism who appeals to this notion suggests that we can “discover” that the mind and the brain are identical empirically (a posteriori), even though they aren’t identical conceptually (a priori), in the same way Kripke is supposed to have discovered this for something like the identity of water and H2O. However, “ . . . this is not quite right. . . although Kripke discovered something philosophically interesting (two empirical facts plus the law of identity), he didn’t discover anything metaphysically interesting.”

This is why Kripke himself repudiated the idea that conscious states and physical brain states were or could be “identical,” for variations on essentially the same reasons I have defended throughout this series. If there’s no conceptual identity, there’s no identity. And the notions of conscious experience and physical process, or of intentionality and causal procedure, lack conceptual identity completely—which renders it impossible in principle, for all the same reasons, to “build” either of the former out of any combination of ingredients made up solely of the latter. The differences are not a matter of degree, but of category. The dimensions in which the phenomena of consciousness are measured represent fundamentally different categories from those the physicalist defines “physical” phenomena as possessing. And in choosing those definitions, the physicalist has defined himself into a corner.

The moment we admit that consciousness truly does possess these properties (that is, the moment we reject eliminativism), physicalism has failed by definition—but these are the definitions provided by the physicalist himself, not ones invented by his critics. So this argument, despite being true “by definition,” does not “beg the question” except at the step at which the only way we can “point at” these properties of consciousness is by introspection—and that is true regardless of the quality, or lack thereof, of any publicly verifiable argumentation, because consciousness itself is a subjective phenomenon—is the very existence of subjectivity itself—to begin with. If someone wants to side with Dennett and deny his own subjective experiences (or with Rosenberg and deny that he ever thinks “about” anything), while pretending he did not arrive at these conclusions by thinking “about” things he knew solely because he observed them within his own subjective stream of experience, I can no more formally argue against him without “begging the question” than I can against the solipsist. But the falsity of the central claim of physicalism as a whole ultimately turns out to follow from nothing more than this.

[6] Similar analogies with computers are sometimes made regarding qualia: in defense of “emergence,” the arguer will point out that “the color blue” is not physically represented inside the processing unit, and yet it still manages to appear on the screen—and this is supposed to prove, by analogy, that “the color blue” could appear in consciousness without being physically represented inside the neurons. However, the only “color blue” that “appears on the computer screen” is exactly the one that exists within, and as an aspect of, your subjective conscious experience in the first place. The “color blue” as such isn’t physically in the screen, either, any more than it’s in the CPU.
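
The point can be stated in two lines of code (my illustration; the particular encoding is incidental):

```python
# All the machine ever contains when it "displays blue" are magnitudes: three
# integers driving voltages. The experienced color appears nowhere in them.
pixel = (0, 0, 255)  # an RGB triple; '255' is no more blue than '0' is
print(pixel)
```

The blueness shows up only where it always did: in the experience of the observer looking at the screen.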

[7] Just as what happens inside a calculator is not, in and of itself, any sort of act of “calculation” apart from the conceptual–representational “meaning” that intentionalistic conscious observers lay, by convention, on top of the physical processing as such, so there is nothing about a word, as a physical object, that is intrinsically “about” anything. Words are only “about” something by proxy: because conscious minds have the power to invest and derive “meaning” in and from the world. If I write the word “Hungry,” it means something. If grains of sand blow in the wind and land in the shape of the word, it is meaningless—because the wind possesses no intrinsic intentionality. When and where the shape does mean anything, it means something because it expresses conceptual thought “about” the world within the conscious mind of an observer; and the only time we read intentionality of any sort out of the physical world is when it was “put there,” so to speak, by a conscious observer, intentionally—not merely “due to” a mechanical “cause,” but “for” a “reason.”

Purposes and intentions and representationally experienced understandings of concepts are simply an irreducible part of any explanation capable of giving any sense to what we do when we “understand” language. There would, of course, be a very different sort of causal process—with different causal connecting details—between a human being arranging physical matter into certain shapes and a mindless physical process like the wind arranging the same matter into the same shapes. The question is whether this difference can possibly be accounted for without intentionality itself as an irreducible part of that sequence of causation—whether the relevant causal difference rests in anything other than the fact that the wind, unlike the human being, does not consciously intend to perform the act for a reason which intrinsically involves the fact that the idea those words represent is “about” something it wants to express. The point is that blind physical causality, which does not happen for “reasons” at all, appears to be incapable in principle of accounting for this.
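
For the calculator point in particular, a sketch may help (my own illustration; the glyphs are arbitrary by design):

```python
# A "calculator" that shuffles meaningless glyphs by lookup. The physical
# process would be identical with any other glyphs; that running it "counts
# as" adding numbers is a meaning laid on the symbols by convention, by us.
SUCCESSOR = {"@": "#", "#": "$", "$": "%"}

def shuffle(glyph, steps):
    for _ in range(steps):
        glyph = SUCCESSOR[glyph]
    return glyph

# Only under the convention @=0, #=1, $=2, %=3 is this the "calculation"
# 0 + 2 = 2; intrinsically, it is nothing but state transitions.
print(shuffle("@", 2))  # -> $
```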

[8] Hypothetically, we might conceive of an identity theory whose “X is identical to Y” claim is more accurately represented by an analogy like: “The mailman named John who comes to your front door every morning is identical to the man named John you see at the bar on Friday nights.” In a case like this, recognizing that two things are “identical” doesn’t mean reducing one to the other. But what we have here is one category of event defined in spatio–temporal terms, and what we do is recognize a spatio–temporal relationship between a (spatio–temporally defined) thing in that category identified at one place and time and a (likewise spatio–temporally defined) thing in that same category identified at a different place and time. In other words, two spatio–temporal events are united by a specifically detailed spatio–temporal relationship which can be expressed in a claim like: “When the mailman goes home, he changes clothes and heads out to the bar to drink all night.” In practice, it simply is not possible to extend a theory of this sort to consciousness, because the basic properties we appear to observe within consciousness (subjective experience, intentionality, etc.) seem to belong to altogether different categories than the basic properties the physicalist has defined physical entities as possessing (causality without intentionality, blind process without experience). This means the options are either to deny that consciousness really possesses those properties, or else to “reduce” them—these are quite simply the only ways to secure a so–called “identity theory,” and so, again, any “identity theory” must collapse into either eliminativism or reductive emergentism. And reductive emergentism collapses, in turn, into eliminativism for the reasons explained above.

[9] “I would say […] that whether its Panamanian debut counts as going into [a state of (inaccurately) identifying quarters] or [a state of (accurately) identifying quarter–balboas] depends on whether, in its new niche, it was selected for its capacity to detect quarter–balboas—[…] by the holder of the Panamanian Pepsi–Cola franchise.”

[10] Or at least try to explain how it could “emerge”—but we’re still getting to the critique and rejection of that notion. In my view, the only basic form any proper solution can take is to accept that intentionality represents a fundamental, innate capacity of consciousness, which is in turn accepted as a fundamental phenomenon within the world in its own right.

[11] Note, no less, the intentionality that is merely presupposed even here! As we see, for Dennett there are ultimately no thoughts or statements which are determinately “about” anything, either!

[12] Itself a notion intrinsic to intentionality, and not explicable in terms solely of brute causality: thus, Rosenberg rightly derives the conclusion that physicalism would require us to eliminate even this too: “If the physical facts fix all the facts, however, then in doing so, it rules out purposes altogether, in biology, in human affairs, and in human thought-processes too. . . [therefore,] the mind is no more a purpose driven system than anything else in nature. This is just what scientism leads us to expect. There are no purposes in nature; physics has ruled them out . . . If the brain cannot be the locus of original intentionality, then original intentionality just doesn’t exist. But without intentionality, we have to recognize that most of our conceptions about ourselves are also illusions. If plans, projects, purposes, plots, stories, narratives and the other ways we organize our lives and explain ourselves to others and ourselves, all require intentionality, then they too are all illusions.”
