Concepts and Introspection:
An Externalist Defense of Inner Sense
_________________________________
Inner sense models of introspection that explicitly
liken that faculty to ordinary perception are metaphysically unassuming and
naturalistically appealing. But such
models appear to deny the distinctive epistemic character of introspective
judgments by allowing them to be subject to brute error. Non-observational models of introspection are
explicitly motivated, by the need to account for a qualitative epistemic
difference between introspective and perceptual judgments, to rule out the
possibility of brute error. I defend inner sense models
against their non-observational rivals by arguing that the apparent
epistemological disadvantage of the former relative to the latter is in fact a
significant metaphysical advantage.
For only models that allow introspective judgments to be subject to brute error
are compatible with a pair of plausible metaphysical assumptions about content
individuation and concept awareness.
It is quite natural to think of introspection as a
kind of “inner sense, by means of which the mind intuits itself or its inner
state,” as Kant said; or as a kind of “perception of what passes in a man’s own
mind,” as Locke said. Moreover, inner
sense models are relatively unassuming, metaphysically speaking, and
naturalistically appealing. They posit
a merely contingent causal connection between introspective judgments and their
targets. But this natural way of
thinking about introspection is at odds with an equally natural view about the
distinctive epistemic character of introspective judgments. The introspective judgments we make about
the contents of our own thoughts are quite naturally thought to be on a
qualitatively better epistemic footing than the ordinary perceptual judgments
we make about the external world.
Whereas our ordinary perceptual judgments can be mistaken without any
misstep or malfunction on our part, our introspective judgments seem to be
utterly immune to such brute errors. It
is difficult to see how an inner sense model that explicitly likens
introspection to ordinary perception can account for a qualitative epistemic
difference between introspective judgments and ordinary perceptual
judgments.
Non-observational models of
introspection explicitly aim to account for the distinctive epistemic character
of introspective judgments by positing a necessary constitutive relation
between introspective judgments and their targets. Proponents of non-observational models are thus driven by epistemic
concerns away from a relatively unassuming picture of introspection as a kind
of inner sense to a considerably more sophisticated picture involving a
metaphysically significant and naturalistically suspect constitutive relation. I aim to defend the relatively innocent
inner sense models against their more sophisticated non-observational rivals by
showing that the latter are incompatible with a pair of plausible metaphysical
theses about concept awareness and content individuation. The apparent epistemological disadvantage
that inner sense models suffer in relation to non-observational models turns
out to be a significant metaphysical advantage.[1]
1. Introspection and Brute Error
Introspection, as I will understand it here, is a
faculty by means of which we are aware of, and make judgments about, the
contents of our own occurrent thoughts.
By ‘occurrent thought’ I mean an active and available propositional
attitude state—a non-qualitative mental state that is presently engaged in some
cognitive process (i.e., active) and is neither sub-personal, sub-conscious,
repressed, nor in any similar way inaccessible (i.e., available). I do not want to talk here about our
awareness of those mental states, if there be such, that have an essential
qualitative feel. States that have an
essential qualitative feel, like sensations or experiences perhaps, are such
that we are aware of them simply by being in them, not by targeting them with
some distinctive faculty. Also, I do
not want to talk here about our judgments about the attitudinal component of
propositional attitude states. It may
be through introspection that we are aware of the particular attitude that we
bear to content, but I do not want to assume that here. So introspective judgments, as I will
understand them here, are higher-order judgments about the contents of
lower-order occurrent thoughts and have something like the form ‘I am thinking
that T’, where thinking is intended as a generic, catch-all, place-holder
attitude.
The introspective judgments
we make about the contents of our own occurrent thoughts are epistemically
privileged in two senses. First,
introspective judgments result from privileged access. Each of us has a kind of direct and
non-empirical access to the contents of our own occurrent thoughts that no one
else in fact enjoys.[2] The judgments we make on the basis of this
special access are thus warranted directly, non-empirically, and in a way that
no one else’s judgments about our thoughts are warranted.[3] Second, introspective judgments occupy a privileged
position in our system of beliefs.
Judgments of the form ‘I am now thinking that T’, when made on the basis
of introspection alone, are intrinsically less susceptible to doubt or error
than other contingent judgments. By
their very nature, then, introspective judgments are better suited to a
foundational justificatory role than any other contingent judgments.
A compellingly simple model
of introspection claims that occurrent thoughts, in the absence of any
malfunction or interference, cause higher-order judgments about them via a
reliable internal mechanism.[4] The resulting higher-order introspective
judgments are warranted in virtue of the reliability of the mechanism that
produced them. This type of inner sense
model of introspection will have no problem accounting for privileged access: If the introspective judgments we make about
the contents of our own occurrent thoughts are warranted in virtue of the
reliability of an internal causal mechanism, then they will be directly
and non-empirically warranted in a way that no one else’s judgments about our
thoughts are.[5] But the model will apparently have trouble
accounting for the privileged position of introspective judgments. The basic worry is that an account of introspection
broadly modeled on perception must allow for the possibility of misperception
and thereby be unable to account for the distinctive epistemic character of
introspective judgments.
On an inner sense model of
introspection, there is a merely contingent causal connection between
introspective judgments and the occurrent thoughts they attribute. Introspective judgments are about certain
occurrent thoughts in virtue of being caused by those thoughts. This contingent relation leaves open the
possibility that an introspective judgment gets the content of its target
occurrent thought wrong, even though there is nothing wrong with the agent or
her introspective mechanisms. In other
words, an inner sense model allows that introspective judgments are subject to
what Tyler Burge calls “brute error”—mistakes that betray no cognitive or
rational deficit on the part of the individual making the judgment. But if introspective judgments are warranted
in virtue of the contingent reliability of some causal mechanism and are
subject to brute error, then they seem to be on no better epistemic footing
than ordinary perceptual judgments.
Burge says the following about perceptual brute errors in Burge (1988):
Brute errors do not result from any sort of
carelessness, malfunction, or irrationality on our part. Brute errors depend on the independence of
physical objects’ natures from how we conceive or perceive them, and on the
contingency of our causal relations to them.
The possibility of such errors follows from the fact that no matter what
one’s cognitive state is like...one’s perceptual states could in individual
instances fail to be veridical—if physical circumstances were sufficiently
unfortunate. (Reprinted in Ludlow 1998, p. 120)
If the inner sense model is correct, then
introspective judgments will be like perceptual judgments in the sense that
introspective judgments will be independent of the nature of, and only
contingently causally related to, their objects. These are just the features that give rise to brute errors in the
perceptual case (e.g., misperceptions and hallucinations), and so they could
give rise to brute errors in the introspective case if “circumstances were
sufficiently unfortunate.” In that
case it is difficult to see why introspective judgments should enjoy a position
of epistemic privilege over ordinary perceptual judgments.
Motivated
mainly by the worry that the inner sense model knocks introspective judgments
out of their privileged position, many think that the model is too simplistic. A more philosophically sophisticated account
of introspection says that we are aware of our occurrent thoughts not by having
wholly distinct thoughts about them but by thinking those occurrent thoughts in
a certain way or in a certain cognitive setting. Introspective judgments are about their targets not in virtue of
being caused by them but in virtue of being constituted by them. Higher-order introspective judgments are
constituted in part either by the first-order states themselves or by the contents
of those states. Given this
constitutive relation, introspective judgments are not completely distinct from
their targets. Introspective
judgments are not independent of the nature of their objects and are not merely
contingently causally connected to them.
It follows, as we shall see, that introspective judgments are immune to
brute error. In this way
non-observational models of introspection provide an account of the privileged
position enjoyed by introspective judgments over other contingent judgments.
Sydney Shoemaker rejects the
inner sense model under the heading “broad perceptual model.” He specifically rejects the metaphysical
independence of introspective judgments and their targets.[6] Shoemaker claims instead, “second-order
beliefs, and the self-knowledge they constitute, are supervenient on
first-order beliefs and desires plus human rationality and intelligence;” and
he argues that “...normal human
rationality and intelligence plus first-order beliefs and desires gives you
everything, in the way of explanation of behavior, that second-order beliefs
can give you...” (Shoemaker 1988, p. 48).
On Shoemaker’s account, introspection works just by thinking occurrent
thoughts in a certain cognitive setting; i.e., with certain background intelligence
and rationality conditions met.
Introspective judgments are constituted by the lower-order states they
attribute plus normal human rationality and intelligence.
Tyler Burge rejects the
inner sense model under the heading “simple observational model.” He rejects the possibility that
introspective judgments are subject to brute error.[7] Burge claims that we introspectively know
that we are thinking a certain occurrent thought “simply by thinking it while
exercising second-order, self-ascriptive powers” (Burge 1988, p. 118). On Burge’s account, introspection works just
by thinking occurrent thoughts in a certain way; i.e., while exercising
self-ascriptive powers. Here the
second-order judgment is constituted in part by the content of the thought that
is self-ascribed. Burge says, “the
intentional content mentioned in the that-clause [of a sentence expressing an
introspective judgment] is not merely an object of reference or cognition; it
is part of the cognition itself...it is thought and thought about in the same
act” (Burge 1996, p. 244).[8]
What
these non-observational models have in common is that they postulate some kind
of necessary constitutive relation between introspective judgments and the
contents of the thoughts they attribute.
Such a view can clearly account for privileged access. If higher-order introspective judgments
literally have the lower-order thought contents they attribute as constituents,
then no one besides the thinker of a certain occurrent thought can form an
introspective judgment about that thought.
Moreover, non-observational models can account for the privileged
position of introspective judgments. As
a result of the necessary constitutive connection between introspective
judgments and their targets, there is no possibility of introspective judgments
getting the contents of their targets wrong.
Whatever the content of the lower-order occurrent thought is, that is
what the higher-order introspective judgment attributes—because it attributes
that content to the first-order thought by actually using that token
content itself. The relevant
constitutive relation, for both Shoemaker and Burge, holds between occurrent
thoughts and introspective judgments only given that the thinker has normal
human rationality and intelligence. But
so long as one has normal human rationality and intelligence, one’s
introspective judgments cannot be mistaken about the contents of one’s own
occurrent thoughts. So,
non-observational models of introspection imply that introspective judgments are
immune to brute error, while inner sense models imply that introspective
judgments are subject to brute error in the same way that ordinary perceptual
judgments are. Thus non-observational
models can, while inner sense models cannot, account for the distinctive
epistemic status of introspective judgments.
I want to mount a defense
here of inner sense by showing that this apparent weakness is in fact a
considerable strength. For, given a
plausible externalist thesis about the individuation of propositional thought
content, it is possible to be introspectively aware of a certain concept but
not aware that you have it; and given a plausible thesis about concept
awareness, only an account of introspection that allows for brute error can
allow for the possibility of being introspectively aware of a concept
but not aware that you have it.
The inner sense model of introspection gains a significant edge over
non-observational models by being compatible with a pair of plausible
metaphysical theses.
2. Concept Awareness
We can distinguish, in principle at least, between
being introspectively aware of the content of a thought and being
introspectively aware that we are thinking a certain type of thought as
follows:
De Re Thought Awareness: If some properly functioning rational agent
S makes an introspective judgment about a thought T, then S is
introspectively aware of thinking T.
De Dicto Thought Awareness: If S makes an
introspective judgment about T that correctly represents its content,
then S is introspectively aware that she is thinking T.
These principles do assume that making an
introspective judgment about a thought is sufficient for awareness of
that thought, but they do not assume that making an introspective judgment
about it is necessary for awareness of the thought. Perhaps one can also be introspectively
aware of a thought by having some introspective experience of it or
higher-order perception about it.[9] Or perhaps one can be introspectively aware
of a thought just by thinking it, in virtue of the thought’s having some
intrinsic qualitative feel. I do not
think our introspective awareness of the contents (as opposed to the attitude)
of thoughts (as opposed to sensations) works in either of these ways, but the
distinction above is not intended to rule these possibilities out.
The distinction between de
re and de dicto awareness of thought contents implies a corollary
distinction between de re and de dicto awareness of
concepts. Since concepts are
constituents of thought contents, we can think of possessing a concept as
having the ability to think certain thought contents. To possess the concept C is to have the ability to think
C-thoughts—thoughts that involve the concept C as a constituent. So if someone is aware of her ability to
think C-thoughts, then she is aware of the concept C. Since one is aware of her ability to think C-thoughts when she is
aware of thinking a C-thought, it follows that being aware of C-thoughts
is sufficient for being aware of the concept C. Likewise for awareness that: If
someone is aware that she has the ability to think C-thoughts, then she is
aware that she has the concept C. Since
one is aware that she has the ability to think C-thoughts when she is aware
that she is thinking a C-thought, it follows that being aware that one
is thinking C-thoughts is sufficient for being aware that one has the
concept C. Putting these ideas together
with the above distinction between de re and de dicto awareness
of thoughts gives us the following:
Concept Awareness (CA):
De Re: If a properly functioning rational agent S forms
an introspective judgment about some C-thought, then she is introspectively
aware of possessing the concept C.
De Dicto: If S forms an introspective judgment about a
C-thought that correctly represents its content, then S is aware that
she has the concept C.
I
take CA to be a plausible metaphysical view about the nature of concept
awareness, as it follows from plausible assumptions about the nature of
awareness and the nature of concepts.
First is the idea that a higher-order thought is sufficient for
awareness of its lower-order target.
Second is the idea that being aware of a thought is sufficient for being
aware of that thought’s constituents. I
will not argue for either of these assumptions here, but neither is
controversial in the present context.
That is, the non-observationalist can perfectly well accept both
ideas.
Now, according to CA,
whether we can in fact be aware of a concept without being aware that we have
it depends on whether we can in fact make an introspective judgment about a
thought that does not correctly represent its content. That, as we have seen, depends on whether a
non-observational model of introspection is correct. If the content of a targeted thought is literally a constituent
of the targeting judgment, one cannot make an introspective judgment about the
content of a certain thought without correctly representing that content. So on such a view, when one is aware of
a concept C by being aware of a C-thought, one is thereby aware that one is
thinking a C-thought and thus aware that one has the concept C. Non-observational models of introspection do
not allow for de re awareness of concepts without de dicto
awareness. This, I will argue, is their
Achilles heel. For a plausible
externalist view about the individuation of thought content implies that it is
in fact possible for one to be aware of a concept but not aware that she has
it.
3. Content Externalism
The following is a plausible, though not uncontroversial,
metaphysical thesis about the individuation of propositional thought content:
Content Externalism (CE):
The contents of
certain thoughts do not supervene on the intrinsic properties of their thinkers
but are determined in part by their thinkers’ physical and/or social
environment.
That is, two individuals may be identical with
respect to all of their intrinsic properties (including their
neuro-physiological properties and narrowly described functional properties)
and yet be thinking thoughts with different contents as a result of being
related to different environments.
Since concepts are the constituents of thought contents, we can think of
CE as a thesis about concept acquisition according to which some
concepts are such that an individual cannot acquire them without being
appropriately related to a certain type of environment. To take a famous example, our concept water
is such that one cannot acquire it unless one has been appropriately related to
an H2O environment. If one
had been related instead to a superficially indistinguishable but chemically
distinct substance XYZ, then one could not have acquired the concept water
that refers exclusively to samples of H2O. One would have instead acquired the concept twater that
refers to samples of XYZ.[10]
Content externalism is
committed to the possibility that a properly functioning rational agent can
undergo a change in conceptual repertoire without realizing that she has. For CE implies that a properly
functioning rational agent can undergo a change in conceptual repertoire simply
in virtue of an unwitting change in external environment.
CE says that two individuals
with all the same intrinsic properties I can nevertheless have distinct
conceptual repertoires as a result of having distinct extrinsic properties E
and E*. For example, Oscar can
possess a concept C that refers exclusively to c-stuff, while his intrinsic
duplicate Toscar possesses a concept C* that refers exclusively to some
distinct c*-stuff.[11] If such twins are possible, then it is possible
for an individual Bob, who is a properly functioning rational agent, to undergo
a change in conceptual repertoire without being aware that he has. Here is how: Let Bob at time t1 be just like Oscar. He possesses C, has intrinsic properties I, and has
extrinsic properties E. Now
imagine that between t1 and t2 Bob undergoes an unwitting change in environment
whereby he acquires extrinsic properties E*.[12] Imagine that Bob has been unwittingly
switched to a c*-stuff environment and has spent time interacting with c*-stuff
and native C* users in the very same ways that Toscar had. Bob now appears to meet all of the necessary
conditions for possessing a concept that refers to c*-stuff. Toscar has a concept that refers to
c*-stuff, and Bob now has all the same relevant properties as Toscar. After interacting with c*-stuff and native
C* users in just the ways Toscar did, it seems we should attribute to Bob, just
as we do to Toscar, (a) the capacity to talk and think about c*-stuff without
having to do so demonstratively, and (b) the capacity to successfully
communicate and share beliefs about c*-stuff with others in his
environment. If we do attribute these
capacities to Bob, then we must attribute to him a concept that refers to
c*-stuff.
One might object at this
point that there is an important relevant difference between Bob and
Toscar. Bob was related to c-stuff at
t1 but Toscar was not. And perhaps
having not been related to c-stuff is a necessary condition for having a
concept that refers to c*-stuff. I do
not think so: Imagine that some individual Rob had from the start been
appropriately related to both c-stuff and c*-stuff. In such a case, it is difficult to deny that Rob could acquire a
concept that refers at least in part to c*-stuff, even though he has been
appropriately related also to c-stuff.
Thus having been related to c-stuff does not bar Rob from having a
concept that refers at least in part to c*-stuff. Of course, we might want to deny that Rob has a concept like C*
that refers exclusively to c*-stuff.
We may instead want to say that Rob has an amalgam concept that
refers disjunctively to either c-stuff or c*-stuff. Still, Rob has a concept that refers in part to c*-stuff
even though he has been related to c-stuff.
So having not been related to c-stuff is not a necessary condition for
having a concept that refers to c*-stuff.
Coming back to the case
where Bob starts in a c-stuff environment and then is switched to a c*-stuff
environment, I want to claim only that Bob will acquire a concept that refers in
part to c*-stuff. Let’s call this
new concept C# and leave open whether C# refers exclusively to c*-stuff or
disjunctively to either c*-stuff or c-stuff.
Regardless, C# is an externally individuated concept—one that cannot be
acquired without being appropriately related to a certain type of
environment. C# is not a compound
concept equivalent to that expressed by some descriptive phrase; just as the
concept water (or twater) is not equivalent to that expressed by
‘odorless, colorless, potable liquid that flows in the faucets and rivers’ (or
any such description). I also want to
claim only that acquiring C# will involve some change in Bob’s
conceptual repertoire—at t1 he did not have any concept that referred (at all)
to c*-stuff, but at t2 he does. I do
not want to take a stand on whether C# will replace or be an addition
to C in Bob’s conceptual repertoire.[13]
So
if CE is true, then a properly functioning rational agent Bob can
acquire a new concept simply through an unwitting change in environment. In such a situation Bob is unaware that
there has been a change in his conceptual repertoire. The change in his conceptual repertoire is solely the result of a
change in his environment of which, by hypothesis, Bob is unaware. Still, we must admit that Bob is aware of
his new concept C#. By hypothesis there
is nothing wrong with Bob’s introspective faculties, and there is no reason to
think that there are gaping holes in Bob’s consciousness wherever his new C#
thoughts are. He is aware of these new
thoughts when he has them. Thus by CA
(de re), Bob is aware of the concept C#.
4. Possessing Concepts Unawares
Our hero Bob is aware of his new concept C# at t2,
and so we might say that he is aware of a change in his conceptual
repertoire. But Bob is not aware that
there has been a change in his conceptual repertoire from t1 to t2. Why is that? Why does Bob fail to realize that there has been a change in his
conceptual repertoire? I contend it is
because Bob, though he is aware of C#, is not aware that he
possesses C#. If he does not even
realize that he has C#, then he certainly does not realize that C# is a new
concept. Bob’s not being aware that he
has C# would explain why he does not realize that there has been a change in
his conceptual repertoire.
There is, it would seem,
another possible explanation for why Bob fails to realize that there has been a
change in his conceptual repertoire.
Perhaps Bob fails to realize that there has been a change in his conceptual
repertoire because, though he is aware that he has C#, he is not aware that C#
is a new concept. On this
explanation Bob is aware that he has C# but fails to realize that it is a new
concept, because he systematically mistakes C# for his old concept C. Since it sounds right to describe Bob as
mistaking C# for C, this explanation is attractive. But it is in fact unstable.
For if Bob really does systematically mistake C# for C, and I think that
he does, then Bob cannot properly be said to be aware that he has C#.
If Bob systematically mistakes C#
for C, then he never realizes that C# is distinct from C and never becomes
aware of C# as such; but this is just what it takes for Bob to
move from merely being aware of C# to being aware that he has
C#. So if Bob systematically mistakes
C# for C, then Bob is not aware that he has C#.
An analogy may be helpful
here: Imagine that you have a student
Sherry in your Introduction to Philosophy class. The class is relatively small and you are fairly well acquainted
with all of your students. You are well
enough acquainted with Sherry in particular to be aware that Sherry is
in your class. But now imagine that
midway through the semester and unbeknownst to you, Sherry’s twin sister Terry
begins coming to your class in Sherry’s stead.
As you lecture and engage in discussion with the class, you are aware of
Terry when she attends. But to move
from being aware of Terry in your class to being aware that Terry
is in your class, you would need to realize that Terry is not Sherry and become
aware of Terry as Terry. Since
you systematically mistake Terry for Sherry, though, you never realize that
Terry is not Sherry and never become aware of Terry as Terry. Thus you never become aware that Terry is in
your class.
In general, the difference
between being aware of the existence of an object X and being aware that
X exists involves the difference between being able to demonstratively refer to
X and having a specific and stable concept of X. Being able to demonstratively refer to X at time t1 allows you to
think about X at t1, and that requires some present acquaintance with X at
t1. Having a specific and stable
concept of X allows you to think about X even when you are not presently
acquainted with X, and that requires some dispositional ability to re-identify
X across various relevant contexts. If
someone systematically mistakes X for some distinct thing Y, then that
person lacks the ability to re-identify X across any set of relevant
contexts. Thus if someone
systematically mistakes X for Y, then that person does not have a concept of X
and so cannot be aware that X exists.
If you systematically
mistake Terry for Sherry, then you lack the ability to re-identify Terry across
relevant contexts (i.e., your class periods).
That means you lack a Terry-concept and so cannot be aware that Terry is
in your class. The same goes for Bob
and his concept C#. If Bob
systematically mistakes C# for C, then he lacks the ability to re-identify C#
across relevant contexts (i.e., Bob’s occurrent thoughts). That means Bob lacks a meta-concept
of C# and so cannot be aware that he has C#.[14] It is not possible to explain why Bob fails
to realize that there has been a change in his conceptual repertoire by
claiming that Bob is aware that he has C# but systematically mistakes C# for
C. For Bob cannot systematically
mistake C# for C and yet be aware that he has C#. Bob fails to realize that there has been a change in his conceptual
repertoire because he is not aware that he has C#.
I suspect that the biggest
source of resistance to the above argument is lingering doubts about the very
possibility of possessing a concept unawares—that is, the very possibility of
having a concept but not being aware that you have it. Part of the reason for this general worry,
I think, is the common notion, put into rather uncommon terminology, that concepts
are both self-presenting and transparent. As self-presenting, concepts would be such
that if you have them you are aware that you do; and as transparent, concepts
would be such that if you are aware of them at all you are aware of everything
about them. If concepts were both
self-presenting and transparent, possession of a concept would imply awareness
of the concept that would in turn imply an awareness of all there is to know
about that concept. So if concepts were
both self-presenting and transparent, possession would rule out the possibility
of misrepresentation. If concepts were
both self-presenting and transparent, one could not possess a concept unawares.
However, it is at least
conceivable, on independent grounds, that concepts are neither transparent nor
self-presenting. For it is at least
conceivable that concepts should be understood as relatively raw discriminatory
abilities. To have the concept water,
for example, might just be to have the ability to discriminate samples of water
from samples of non-water across relevant contexts. Since it is a commonplace that one can have an ability without
realizing it or knowing much about that ability, it should not be strange on
this understanding of concepts that one can have a concept without realizing
that she does and/or without knowing everything about it. The ability to discriminate samples of water
from samples of non-water need not be transparent or self-presenting. Since it is at least conceivable
that concepts are neither transparent nor self-presenting, the
very idea of possessing a concept unawares is not incoherent.
Another general source of
strangeness about a case like Bob’s is that it involves having a concept C#
without having a meta-concept of C#.
This general idea of possessing a concept without having a concept of
that concept can seem pretty strange, too; but this strangeness can again be
ameliorated by thinking of concepts as discriminatory abilities. If we understand concepts as discriminatory
abilities, then to have a meta-concept of C# involves having the ability to
discriminate C#-thoughts from non-C#-thoughts, in the same way that having the
concept water involves having the ability to discriminate samples of
water from samples of non-water. So to
say that Bob lacks a meta-concept of his concept C# is just to say that Bob
lacks the ability to discriminate his C#-thoughts from his non-C#-thoughts; and
that seems a very natural thing to say about Bob’s situation. Moreover, Bob’s lack of ability to
discriminate C#-thoughts from non-C#-thoughts need not undermine his ability to
discriminate samples of c#-stuff (that to which the concept C# refers) from
samples of non-c#-stuff. Thus it should
not seem so strange that one can possess a concept and yet lack the
corresponding meta-concept.[15]
I have argued that if we
accept content externalism, which I take to be a coherent view, then we can
tell a coherent story in which someone can properly be said to be aware of a
concept but not aware that she has it.
In this situation, one possesses a concept but does not realize that one
does—one has a concept but lacks the corresponding meta-concept. Such a situation, I have just suggested,
should not seem so strange, and cannot be ruled out as incoherent if we
understand concepts as discriminatory abilities. Content externalism implies that one can possess a concept
unawares, and thus that concepts are neither transparent nor self-presenting. Going along naturally with this implication
is an understanding of concepts as discriminatory abilities and a practical
distinction between concepts and meta-concepts. I take these views about the nature of concepts to be significant
implications of an externalist account of the individuation of thought content.
5. Conclusion
Content externalism implies that a properly
functioning rational agent can acquire a new concept but fail to realize that
there has been a change in his conceptual repertoire. Bob fails to realize that he has acquired C# after an unwitting
switch into a new environment. Such a
failure to realize that there has been a change in one’s conceptual repertoire
must derive from a more basic epistemic shortcoming. Since by hypothesis the relevant agent is otherwise properly
functioning, the problem must stem from the agent’s inability to discriminate
the new thoughts from certain old thoughts.
Bob is unable to discriminate his new C#-thoughts from his old
C-thoughts. This kind of inability to
discriminate one concept from another manifests the absence of de dicto
awareness of that concept. Because Bob
cannot discriminate C#-thoughts from certain alternatives across relevant
contexts, he lacks a meta-concept of C# and so cannot be aware that he has
C#. When someone is not aware that
there has been a change in her conceptual repertoire, it is because she is not
aware that she has a certain concept.
Bob’s failure to realize that there has been a change in his conceptual
repertoire is due to his lack of de dicto awareness of his new concept
C#.
Thus it is that content
externalism implies that it is possible for some properly functioning rational
agent to be introspectively aware of a concept but not that he
has it. That argument can now be
summarized explicitly as follows:
P1: If CE
is true, then it is possible for some properly functioning rational agent Bob
to acquire a new concept C# between times t1 and t2 without realizing that
there has been a change in his conceptual repertoire.
P2: In such a
situation, Bob will be introspectively aware of the concept C# (in
virtue of his ability to make introspective judgments about his C#-thoughts).
P3: If Bob is
aware of C# but not aware that there has been a change in his conceptual
repertoire from t1 to t2, then either (a) Bob is not aware at t2 that he
has C#, or (b) Bob is aware that he has C# but systematically mistakes C# for a
concept he had at t1 (namely, C).
P4: It cannot
be the case that Bob systematically mistakes C# for C but is nevertheless aware
that he has C#.
C: So if CE
is true, then it is possible for a properly functioning rational agent to be
introspectively aware of a concept he possesses and yet not aware that
he possesses it.
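The validity of this argument can be made explicit with a simple propositional-modal sketch. The sentence letters below are abbreviations introduced for this summary only: W for "Bob is introspectively aware of C#," R for "Bob realizes that his conceptual repertoire has changed," H for "Bob is aware that he has C#," and M for "Bob systematically mistakes C# for C."

```latex
% Propositional-modal sketch of the argument P1--P4 therefore C.
% W: Bob is introspectively aware of C#
% R: Bob realizes his conceptual repertoire has changed
% H: Bob is aware that he has C#
% M: Bob systematically mistakes C# for C
\begin{align*}
\text{P1+P2:}\quad & \mathrm{CE} \rightarrow \Diamond(W \land \neg R) \\
\text{P3:}\quad & (W \land \neg R) \rightarrow
  \bigl(\neg H \lor (H \land M)\bigr) \\
\text{P4:}\quad & \neg(M \land H) \\
\text{hence:}\quad & (W \land \neg R) \rightarrow \neg H
  \qquad \text{(P3, P4: the second disjunct is ruled out)} \\
\text{C:}\quad & \mathrm{CE} \rightarrow \Diamond(W \land \neg H)
  \qquad \text{(P1+P2 and closure of $\Diamond$ under entailment)}
\end{align*}
```

The only step beyond truth-functional logic is the last, which assumes that what is possible is closed under entailment: if the possible situation described in P1 and P2 entails that Bob is unaware that he has C#, then that combined situation is itself possible.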
Now recall that CA
implies that the only way to be introspectively aware of a concept C#
but not aware that you have it is to systematically make introspective
judgments about your C#-thoughts that misrepresent their contents. Therefore, CE and CA together
imply that it is possible for some properly functioning rational agent to
systematically make introspective judgments about his C#-thoughts that
misrepresent their contents. In other
words, CE and CA together imply that introspective judgments are
not immune to brute error. Bob’s
introspective judgments about the contents of his C#-thoughts, for example, are
in brute error.
Since
CE and CA together imply that some introspective judgments are
subject to brute error, their conjunction is incompatible with any view that
maintains that introspective judgments are immune to brute error. But that is just what non-observational
models of introspection do. Such models
maintain that in properly functioning rational agents there is a necessary
constitutive relation between introspective judgments and their targets, a
relation that rules out the possibility of introspective judgments that get the
contents of the thoughts they attribute wrong.
Non-observational models of introspection are therefore incompatible
with the conjunction of CE and CA.
Inner sense models of
introspection, on the other hand, allow for the possibility of brute error by
maintaining that in properly functioning rational agents there is merely a
contingent causal relation between introspective judgments and the thought
contents they attribute. So inner sense
models allow for the possibility, exemplified by our hero Bob, of a properly
functioning rational agent being introspectively aware of a concept but
not that he has it. Inner sense
models of introspection are therefore compatible with the conjunction of CE
and CA. Insofar as CE and
CA are plausible metaphysical theses, then, inner sense models enjoy a
significant metaphysical advantage over their non-observational rivals. The very thing that prompted people to reject
inner sense models in favor of their non-observational rivals is, as it turns
out, the very reason people should reject non-observational models in favor of
inner sense.
REFERENCES
Armstrong, D. [1968] A Materialist Theory of the Mind. London: Routledge and Kegan Paul.
Armstrong, D. [1981] “What is Consciousness?”, in The Nature of Mind and Other Essays. Ithaca: Cornell University Press.
Boghossian, P. [1989] “Content and Self-Knowledge,” Philosophical Topics 17: 5-26.
Burge, T. [1988] “Individualism and Self-Knowledge,” The Journal of Philosophy LXXXV, 11: 649-63.
Burge, T. [1996] “Our Entitlement to Self-Knowledge,” Proceedings of the Aristotelian Society XCVI: 91-116.
Falvey, K. and Owens, J. [1994] “Externalism, Self-Knowledge, and Skepticism,” The Philosophical Review 103: 107-137.
Gibbons, J. [1996] “Externalism and Knowledge of Content,” The Philosophical Review 105: 287-310.
Heil, J. [1988] “Privileged Access,” Mind 97: 238-251. (Reprinted in Ludlow and Martin [1998])
Ludlow, P. [1995] “Externalism, Self-Knowledge, and the Prevalence of Slow-Switching,” Analysis 55: 45-49.
Ludlow, P. and Martin, N. (eds.) [1998] Externalism and Self-Knowledge. Stanford: CSLI Publications.
Lycan, W. [1987] Consciousness. Cambridge: MIT Press.
Putnam, H. [1975] “The Meaning of ‘Meaning’,” in Mind, Language, and Reality. Cambridge: Cambridge University Press.
Rosenthal, D. [1986] “Two Concepts of Consciousness,” Philosophical Studies 49: 329-59.
Shoemaker, S. [1988] “On Knowing One’s Own Mind,” Philosophical Perspectives 2, Epistemology: 183-209. (Reprinted in Shoemaker [1996])
Shoemaker, S. [1994] “Self-Knowledge and ‘Inner Sense’,” Philosophy and Phenomenological Research LIV: 249-314. (Reprinted in Shoemaker [1996])
Shoemaker, S. [1996] The First-Person Perspective and Other Essays. Cambridge: Cambridge University Press.
Warfield, T. [1997] “Externalism, Self-Knowledge, and the Irrelevance of Slow-Switching,” Analysis 57: 232-237.
[1] I say ‘apparent’ epistemological disadvantage because I argue elsewhere that even if introspective judgments are immune to brute error, there remains an important qualitative difference between introspective and ordinary perceptual judgments—one that is sufficient to ground the distinctive epistemic privilege enjoyed by introspective judgments over ordinary perceptual judgments.
[2] If some inner sense model turns out to be correct, then it will be in principle possible to be aware of someone else’s thought contents in virtually the same way that we are aware of our own contents; though it may turn out to be a physical impossibility, depending on what the details of the correct inner sense account turn out to be and on what the laws of nature turn out to be. If some non-observational model is correct, however, then it is not even metaphysically possible for one to be aware of another’s thought contents in the same way that one is aware of one’s own contents.
[3] A belief is directly warranted if its warrant does not depend on inferences from other beliefs or experiences. A belief is non-empirically warranted if its warrant does not depend on sense experience or empirical investigation of one’s environment, at least not on any more of such than is needed to entertain the content of the belief. I take non-empirical warrant to be a broader category than a priori warrant.
[4] Something like this model has been defended by David Armstrong in his (1968) and (1981) and by William Lycan in his (1987), except that those versions of the inner-sense model take the higher-order state that is causally produced via a reliable mechanism to be a perception/experience rather than a thought/judgment. I am favoring here a higher-order thought account like that defended by David Rosenthal in his (1986).
[5] Others, of course, must rely on inferences from empirical evidence of our behavior or empirical investigation of our environments in order to arrive at warranted beliefs about what we are thinking. [See fn. 2 above]
[6] See Shoemaker (1988) and (1994).
[7] See Burge (1988) and (1996).
[8] There are other proponents of non-observational models—models of introspection whereon introspective judgments are immune to brute error because of some metaphysically necessary constitutive relation between introspective judgments and the occurrent thoughts they attribute:
“I have been emphasizing the fact that when a thinker self-ascribes an attitude with an intentional content, he redeploys the very same [my emphasis] concepts which are constituents of the intentional content of the first-order attitude.” Peacocke, “Our Entitlement to Self-Knowledge”, p. 278 in Ludlow [1998]
“Consider again my second-order introspective state M*. We are supposing that externalism is correct, hence that the content of M* is determined by some state of affairs, A*, that is at least partly distinct from M*. What, now, is to prevent A* from determining an intentional content for M* that includes [author’s emphasis] the content of M? What for instance keeps our simplified theory from allowing that a causal relation of a certain sort endows my introspective thought with a content encompassing [my emphasis] the content of the thought on which I am introspecting? The envisaged causal relation might plausibly be taken to include as a component the causal relation required to establish the content of the state on which I am introspecting, and it might include much more as well.” Heil, “Privileged Access”, p. 138 in Ludlow [1998]
“…there is a temptation to think that externalism gives rise to the possibility that one simply misidentifies the content of one’s own thought, in the sense that one might think that one is thinking that p, when in fact one is thinking the twin thought, p*. But this temptation should be resisted, because it arises from a failure to appreciate that externalism holds at second-intention. Just as I cannot think that water is wet unless my environment satisfies certain features, so I cannot think that I am thinking that water is wet unless my environment satisfies the same features.” Falvey and Owens, [1994]
“The fact that the first-order thought determines the content of the second-order belief guarantees the relevant sameness of content. Since the second-order belief inherits its content from the first-order thought, it makes no difference whatsoever what determines the content of the first-order thought….A common theme among many externalist replies to the self-knowledge objection is that just as the environment determines the contents of our first-order thoughts, the environment also determines the contents of our second-order thoughts. I think it is more informative to say that the first-order thought determines the content of the second-order belief.” Gibbons [1996]
[9] Armstrong (1968) and Lycan (1987) hold such a higher-order perception model of introspection.
[10] See Putnam (1975).
[11] For those who like their examples a bit more concrete, think of C as the concept water, C* as the concept twater, c-stuff as H2O, and c*-stuff as XYZ. I prefer the more general formulation in the text for two reasons: (1) To avoid any complicating issues deriving from differing intuitions about actual concepts like water, and (2) To emphasize that such a scenario is made possible simply by the possibility of intrinsic twins with distinct conceptual repertoires (which is immediately implied by CE and its denial of a certain supervenience claim), and does not involve any further controversial assumptions.
[12] The duration between t1 and t2 may be considerable given that some of the relevant environmental relational properties may take a long time to acquire.
[13] The kind of situation that I have described here for Bob is what is commonly known as a “slow-switching” scenario; such cases are discussed in Boghossian (1989); Burge (1988) and (1996); Gibbons (1996); Ludlow (1995); and Warfield (1997). There has been a lot of discussion but little agreement concerning slow-switching cases. I have tried to avoid certain controversies by remaining non-committal on whether C#=C* and on whether C# is an addition to Bob’s conceptual repertoire or a replacement for C.
[14] A meta-concept of C may be thought of as a distinct concept with C as its referent, or it may be thought of as the ability to employ C at the second-order level and higher.
[15] Having a concept can also be thought of as having the ability to think certain types of thoughts. Having the concept C#, for example, allows Bob to think C#-thoughts—that is, thoughts that are about c#-stuff (where the aboutness is not merely a matter of Bob being able to demonstratively refer to c#-stuff). Having a meta-concept of C# would allow Bob to think not about c#-stuff but about his concept C# or about his C#-thoughts. So if Bob lacks a meta-concept of C# it does not hinder his ability to think about c#-stuff but rather his ability to think about his C#-thoughts. Again, it should not seem so strange that someone can possess a concept C# and yet lack a meta-concept of C#.