The non-apriority of concept width

 

WILLIAM S. LARKIN

 

 

Anti-individualism (AI) is the view that some of our thought contents are such that we could not be thinking those contents in particular without being appropriately related to a specific type of external environment.  We can think of AI as claiming that some of our concepts are wide in the sense that we could not have possessed those concepts in particular had we been related to a different type of external environment.[1]  Privileged access (PA) is the view that we have a distinctive non-empirical way of knowing the contents of our own thoughts that is not available to anyone else.  Several authors have attempted to show that combining AI and PA yields the seemingly absurd conclusion that we can have non-empirical knowledge of some relatively specific feature of the external world.[2]  I will show here in a completely general way that no such attempt can possibly succeed, for no one can know a priori that one of her actual concepts is wide.[3]

The argument that combining AI and PA can yield non-empirical knowledge of specific features of the external world goes something like this:  Let C be a wide concept.  If both PA and AI are true, then the premises of the following argument can be known without any empirical investigation of one’s external environment:[4]

                        P1:       I am thinking a C-thought.

 

P2:       If I am thinking a C-thought, then I am appropriately related to an E environment.

___________________________________________________

C:         So I am appropriately related to an E environment.

 

The exact nature of the environmental relations required for possessing wide concepts and exactly how specific an E environment has to be will depend on the details of a particular anti-individualist thesis.[5]  But an E environment will be specific enough that non-empirical knowledge of it will be intuitively problematic; it will be a more specific type of environment than, for example, one in which merely ‘some physical objects exist’ or ‘there is some external world out there’.[6]  For an E environment is one an appropriate relation to which is required for the possession of C in particular and not merely for the possession of concepts in general.[7]  Thus non-empirical knowledge of the above premises can yield, via deduction, non-empirical knowledge of a relatively specific feature of the external world.[8]

I will argue now that no matter which version of externalism is in play, and no matter what the details of the specific concept and relevant environment are, one cannot know P2 without empirical investigation of specific features of one’s external environment.  This is so because one cannot possibly know non-empirically that one of her actual concepts is wide.

Any attempt to derive non-empirical knowledge of the external world via AI from non-empirical knowledge of what one is thinking presupposes that one can know non-empirically that one of her actual concepts is wide.  Proponents of the strategy effectively acknowledge this:  Michael McKinsey (1991) essentially argues that one can know a priori that a concept is wide in virtue of one’s introspective knowledge that one has the concept together with the fact that possessing it conceptually entails some substantive proposition about one’s external environment.[9]  Jessica Brown (1995) essentially argues that one can know a priori that a certain concept is wide by knowing that one is agnostic about that concept’s application.[10]  And Paul Boghossian (1997) essentially argues that one can know a priori that a certain term expresses a wide concept by knowing that one (a) “expresses an atomic concept” with the term, (b) “aims to name a natural kind” with it, and (c) is “indifferent about the essence of the kind that his word aims to name.”[11]

            The only way to connect up a claim derived from PA with a claim derived from AI is to presume that some actual concept revealed through introspection falls under the general anti-individualist thesis.  P2 of the above argument is not knowable non-empirically unless one can know non-empirically that C is a wide concept.  Only then could one enlist her knowledge of AI to yield non-empirical knowledge of P2.[12]  However, it is simply not possible for one to know non-empirically that any of her actual concepts is wide:  To say that one of S’s actual concepts C is wide is just to say that S could not have possessed C had she been related to a relevantly distinct environment (a non-E type of environment).  In other words, C is wide by definition just in case there is some possible world w where S does not possess C because w is relevantly distinct from the actual world a.  Thus S can know non-empirically that C is wide only if S can know non-empirically that there is some world w where she does not possess C because w is relevantly distinct from a.  But as I will now show, S cannot know non-empirically that there is some world w where she does not possess C because w is relevantly distinct from a.
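
To display the logical form of this definition, here is a minimal sketch in possible-worlds notation, where ‘Poss(S, C, w)’ abbreviates ‘S possesses C in w’ and ‘Dist(w, a)’ abbreviates ‘w is relevantly distinct from the actual world a’ (the abbreviations are introduced only for perspicuity):

Wide(C, S) =df ∃w [¬Poss(S, C, w) ∧ Dist(w, a)]

So non-empirical knowledge that C is wide requires non-empirical knowledge, of some one world w, of both conjuncts together; the next two paragraphs argue that the two conjuncts cannot both be known non-empirically of any one world.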

A subject S might be able to know a priori on the basis of her relevant semantic intuitions that she would not have possessed a concept C in a world w if w is described in rich enough detail.  But S could not go on to know that this independently and richly specified world w is distinct from the actual world a in the relevant respects without knowing (at least something about) what a is like in those respects.  Since C is in fact wide, the relevant features of a will be relatively specific features of the external environment, knowledge of which requires some specific empirical investigation.  So S cannot know non-empirically that w is both a world in which she would not have possessed C and a world that is relevantly distinct from a.

            Perhaps, instead, we should start by simply stipulating that some possible world w is relevantly distinct from a no matter how a happens to turn out to be in the relevant respects.  But now w is not specified in enough detail for S to be able to engage her relevant semantic intuitions in order to know a priori that she would not have possessed C in w.  Making the additional stipulation at this point that S would not have had C in w would of course beg the question.  Given that there is no other non-empirical route to establishing that the dependently and minimally specified world w is one in which S would not have possessed C, S again cannot know a priori both that w is relevantly distinct from a and that w is a world in which she would not have possessed C.

            So one cannot know non-empirically that one of her actual concepts is wide.  It is therefore not possible to combine knowledge that one is thinking a certain type of thought with knowledge of anti-individualism to derive any non-empirical knowledge of the external world.[13] 

 

 

Southern Illinois University Edwardsville

Edwardsville, IL 62026-1433

wlarkin@siue.edu

 

 

REFERENCES

Boghossian, P. 1997.  What the externalist can know a priori.  Proceedings of the Aristotelian Society 97: 161-75.

Brown, J. 1995.  The incompatibility of anti-individualism and privileged access.  Analysis 55: 149-56.

-- 1999.  Boghossian on externalism and privileged access.  Analysis 59: 52-59.

Brueckner, A. 1992.  What an anti-individualist knows a priori.  Analysis 52: 111-18.

-- 2002.  Anti-individualism and analyticity.  Analysis 62: 87-91.

Falvey, K. 2000.  The compatibility of anti-individualism and privileged access.  Analysis 60: 137-42.

Gertler, B. 2004.  We can’t know a priori that H2O exists. But can we know that water does?  Analysis 64:   .

Goldberg, S. 2003.  On our alleged a priori knowledge that water exists.  Analysis 63: 38-41.

McKinsey, M. 1991.  Anti-individualism and privileged access.  Analysis 51: 9-16.

Nuccetelli, S. 1999.  What anti-individualists cannot know a priori.  Analysis 59: 48-51.



[1]  I take concepts to be the constituents of thought contents.  To possess a particular concept is to be able to think certain types of thoughts.  I assume that if one cannot think certain types of thoughts without being appropriately related to a specific type of environment, then the reason is that one cannot possess a particular concept without the appropriate environmental relations.

 

[2]  See McKinsey 1991, Brown 1995, and Boghossian 1997.

 

[3]  Others have raised specific problems with arguments from AI and PA to non-empirical knowledge of the world.  See Brueckner 1992, Falvey 2000, and Brown 1999.  But I will argue in a much more general way that no such argument can succeed.  Other attempts to do this have still been rather narrow in scope, using arguments that seem to apply only to particular versions of anti-individualism, only to specific concepts, or only to natural kind concepts.  My argument will apply to all versions of anti-individualism, does not rely on the specifics surrounding the possession of any particular concept (like water), and will apply to artifact kind terms (or any other type of term purportedly expressing a wide concept) as well as natural kind terms.  See Goldberg 2003 for an argument that does seem limited to a particular concept or at the very least to natural kind concepts.  None of Gertler’s criticisms of Goldberg in her 2004 will be effective against my argument.

 

[4]  That is, they can be known without any further specific knowledge of one’s external environment beyond what may have been necessary to acquire the relevant concepts in the first place.

 

[5]  On some views there must be c-stuff in the environment, for example, whereas on other views it may be enough that there be stuff in the environment that could ground appropriate theorizing about c-stuff.

 

[6]  Thus I think that one of Brueckner’s worries about McKinsey’s strategy in his 1992 is unfounded.

 

[7]  The qualification ‘in particular’ in the formulation of AI is necessary to avoid McKinsey’s argument in his 1991 that threatens to reduce AI to a triviality unless it is framed in terms of a conceptually necessary condition on concept possession.  In my formulation I am claiming only that it is metaphysically necessary that one be appropriately related to a certain environment in order to possess certain concepts, and I add ‘in particular’ to avoid the kind of worry that McKinsey raises.

 

[8]  If the argument works, then at the very least one can know non-empirically that one is in the type of environment that allows one to possess the wide concept C.  And that is a relatively specific proposition about one’s external environment, even if one cannot describe what one knows in any other terms.  Knowing that one is in the type of ‘watery’ environment that allows for the possession of the concept water involves knowing something that is in fact fairly specific, even if one cannot say anything more about the nature of a watery environment.

 

[9]  For criticism see Brueckner 1992, Nuccetelli 1999, and Goldberg 2003.

 

[10]  For criticism see Falvey 2000 and Brueckner 2002.

 

[11]  For criticism see Brown 1999.

 

[12]  It is presumed but rarely argued in these contexts that one can know the general AI thesis a priori on the basis of armchair philosophical thought experiments.  On my view it may be possible to know a priori the general claim that some concepts are wide (or that some specific merely possible concept is wide) on the basis of some armchair thought experiment, but it is not possible to know that any explicitly specified (actual) concept is wide.

 

 

[13] I would like to thank Tony Brueckner, Sandy Goldberg, John Greco, and Sarah Sawyer for helpful comments on work related to the argument here.