[for a brief explanation of this ongoing series, as well as a full table of contents, go here]
More, much more, on epistemology
There is, of course, much more to be said about epistemology, and as usual the proper SEP entry (an extremely valuable peer reviewed resource that has accompanied us throughout this book) is an excellent starting point for further exploration (in this case, Steup 2005). Before leaving the field to move on to philosophy of science, I want to briefly sketch a number of other debates in epistemology that lend themselves to the same kind of analysis I just carried out in some detail for the concept of knowledge. I have not drawn concept maps for the remaining examples in this chapter, but doing so is an excellent exercise for the interested reader, both to make sure one is able to reconstruct how the various moves and countermoves are logically connected, and to develop a first-hand feeling for philosophical progress so understood. It will help to think of each position briefly examined below as a peak in the proper conceptual landscape, whose height depends on how well justified the pertinent position happens to be.
A couple of times already in our preceding discussion we have gotten to the point where we really needed to unpack the idea of justification. As it turns out, much has been written about it. To begin with, epistemologists recognize two major approaches to justification: deontological and non-deontological. Deontological Justification (DJ) looks something like this (Steup 2005): “S is justified in believing that p if and only if S believes that p while it is not the case that S is obliged to refrain from believing that p” — which is how, for instance, Descartes and Locke thought of justification. Non-Deontological Justification (NDJ), instead, takes the form: “S is justified in believing that p if and only if S believes that p on a basis that properly probabilifies S’s belief that p” (Steup 2005). Most epistemologists seem to agree that DJ is not suitable for their purposes, at least in part because we have come to understand (post-Hume) that beliefs are not the sort of things over which we have much voluntary control, a high degree of which is required by a deontological approach to justification. Of course, much is packed into the concept of “properly probabilified,” but the point is that assessments of probabilities are more conducive to epistemically valid justification than deontological approaches (which tend to be more suitable — naturally — for moral, or even prudential, situations).
A second way to approach the issue of justification is from the point of view of its sources. In this case the two major positions are evidentialism (Conee and Feldman 1985) and the already mentioned reliabilism (Greco 1999). According to the former, a belief is justified if there is evidence in its favor, where the sources of evidence may be varied — including but not limited to perception, memory, introspection, intuition, etc. According to the latter, evidence is necessary but not sufficient: it also has to be gathered by reliable means, which makes this view more restrictive than the previous one. Again, a reliable source is then defined as one that “properly probabilifies” a given belief.
There is a third area of conceptual space that allows us to discuss justification, dealing with whether the latter is internal or external in nature (Kornblith 2001). Internalism (Steup 1999) takes it to be the case that whatever justifies a given belief boils down to a particular mental state we are in; this means, incidentally, that evidentialists tend to be internalists, because our evidence for one belief or another is always assessed by introspection of our own mental states. Fine, say the externalists, but the reliability of such evidence is not an internal (or purely internal) matter, which means that reliabilists tend simultaneously to be externalists.  The difference between the two positions is perhaps best fleshed out in cases in which someone has good reasons to accept a belief that is, as a matter of fact, false, as a result of radical deception. Consider, in the typical example, a brain in a vat who thinks he has hands (while he, obviously, doesn’t). In that case, the belief is justified from an internalist/evidentialist perspective (the mental states that form the basis for the belief are accessible), but not from an externalist/reliabilist point of view (since those mental states are, as it turns out, an unreliable source of belief).
We can also go back to the idea of knowledge itself and talk not about its most proper conceptualization, but its structure. Here the two main contenders in contemporary epistemology are foundationalism and coherentism. The first approach (DePaul 2001) — as the name implies — thinks of knowledge as structured like a building, with foundations upon which further knowledge is accumulated. Which implies that some beliefs are doxastically basic, i.e. they do not require any additional justification. It is, however, surprisingly difficult to find unchallenged examples of basic beliefs (give it a try, just as a mind-stretching exercise). One proposal often advanced in this context is something along the lines of “It seems to me that the table is round,” which at least some foundationalists would argue is an example of a properly basic belief that cannot be successfully challenged — even if it turned out that the table is, in fact, oval. The problem is that, even if we agree that cases like the above do represent properly basic beliefs, they don’t get us very far unless we can extend the property of basicality to stronger statements, such as: “the table is round.” But the latter belief can be challenged on epistemic grounds, so one has to make a further move, invoking perceptual experience as evidence of both beliefs. Which in turn raises the thorny issue of why we should take perceptual experience to be a proper source of justification of some basic beliefs, given that we know it is not always reliable. If we can get past these issues, foundationalists can then keep building their edifice of knowledge by deploying non-deductive methods, since to require further growth of knowledge by deductive approaches only would be too demanding — as Descartes quickly found out after engaging in his famous Cogito exercise (Descartes 1637/2000).
The second approach mentioned above is coherentism (BonJour 1999), according to which knowledge is structured more like a web (whiff-o-Quine, Chapter 3) than like a vertical structure with foundations. This means that there is no such thing as properly basic beliefs, as the strength of any given belief depends on its connections to the rest of the web (as well as on the strength of the other strands in the web). A major tool in the coherentist epistemic arsenal is the idea of inference to the best explanation (Lipton 2004). Consider again my belief that the table is round. Is it justified? Well, I could be hallucinating, I could be a brain in a vat, etc. But most likely my senses are working properly for a human being, I find myself under decent conditions of illumination, not too far from the table, etc. All of which allows me to inferentially converge on what appears to be the best explanation for what I see: the table really is round! Of course, the Cartesians amongst us (are there any left?) might object: couldn’t you be deceived by an evil demon (or, in more modern parlance, couldn’t you be in the Matrix)? Sure, I could, but — given what we think we know about how the world works (i.e., given our web of knowledge!) — that’s just not the best explanation available for my belief that the table is round, although it surely is a logically possible one.
Just like in any other area of philosophical conceptual space there are arguments pro and con both foundationalism and coherentism. Foundationalists, for instance, often deploy a regress argument: without foundations, one is forced to keep looking for justifications for one’s beliefs, and that search can only lead to an infinite regress or to a loop, neither of which is a particularly satisfying prospect. However, not all circularity is bad (philosophers often make a distinction between circularity and vicious circularity). After all, one could argue that all deductive knowledge (i.e., great parts of logic and mathematics) is circular, and yet it is hardly to be dismissed on such grounds. Foundationalists can also buttress their position by attacking coherentism from a different angle: a system of beliefs could be entirely coherent and yet make no contact with reality. A well thought out fictional story, for instance, would fit the bill. But here the coherentist has a pretty straightforward response, I think: the web of belief that structures our knowledge of the world includes perceptual experience, and thereby does make contact with empirical reality. Foundationalists had better accept this response, because if they retreat to the much more demanding position that knowledge needs logical (as opposed to empirical) guarantees, that requirement would have to apply also to any properly basic (foundational) belief. That would result in a Pyrrhic victory. Conversely, coherentists can counterattack against foundationalists by asking why (fallible) perceptual experience should be considered as justifying properly basic beliefs. And so on with successive refinements and counter-refinements of each position.
All this said and done, one simply cannot leave even such a brief discussion of epistemology without posing the obvious question: if there has been progress in the study of epistemology, does this imply that we have also made progress against skepticism (DeRose and Warfield 1999)? Skepticism has a long and venerable (some would say irritating) history in philosophy, dating back to the pre-Socratics. Plenty of valiant attempts have been made to overcome it. A modern version of the skeptical argument uses — again — the metaphor of the brain in the vat (BIV), and is therefore referred to by Steup (2005), from which the following discussion is adapted, as the BIV argument. It goes something like this:
(1) I don’t know that I’m not a BIV.
(2) If I don’t know that I’m not a BIV, then I don’t know that I have hands.
(3) Therefore, I don’t know that I have hands.
This is a valid argument, so any viable response needs to challenge one of its premises — that is, challenge its soundness. Before proceeding, though, we must note (as Steup does) that premise (2) is tightly linked to (indeed, it is the negative version of) the so-called Closure Principle: “If I know that p, and I know that p entails q, then I know that q” — a principle that is, prima facie, eminently reasonable. The application to our case looks like this: If I know that I have hands, and I know that having hands entails not being a BIV, then I know that I’m not a BIV. But — says the skeptic — the consequent of this “BIV closure” is false, hence its antecedent must be false too: you just don’t know whether you are a BIV or not!
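The skeptic’s use of closure can be sketched compactly in standard epistemic-logic notation (the symbols here are my own choice for illustration, not Steup’s): write Kp for “I know that p,” h for “I have hands,” and b for “I am a BIV.”

```latex
% Closure Principle: knowledge is closed under known entailment
\[
Kp \wedge K(p \rightarrow q) \;\rightarrow\; Kq
\qquad \text{(Closure)}
\]
% Instantiated with p = h ("I have hands") and q = \neg b ("I am not a BIV"):
\[
Kh \wedge K(h \rightarrow \neg b) \;\rightarrow\; K\neg b
\qquad \text{(BIV closure)}
\]
% The skeptic grants K(h \rightarrow \neg b), asserts \neg K\neg b (premise 1),
% and runs modus tollens on the BIV closure to conclude:
\[
\frac{\neg K\neg b \qquad K(h \rightarrow \neg b)}{\neg Kh}
\qquad \text{(skeptic's modus tollens)}
\]
```

Laid out this way, it is clear why the responses below must either deny premise (1), restrict the Closure Principle itself, or question what the operator K means across the argument.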
There are several responses to the skeptic’s so-called “closure denial.” Steup examines a whopping five of them (concept map, anyone?): relevant alternatives, the Moorean response, the contextualist response, the ambiguity response, and what one might call the knowledge-that response. Let’s take a look.
A first attack against the BIV argument — a first peak in the relevant conceptual space — is to claim that being a BIV is not a relevant alternative to having hands; a relevant alternative would be, for instance, having had one’s hand amputated to overcome the effects of disease or accident. This sounds promising, but the skeptic can very well demand a principled account of what does and does not count as a relevant alternative. Perhaps relevance logic (Chapter 5) could help here.
Second attack/peak: G.E. Moore’s (1959) (in)famous response. This is essentially an argument from plausibility: the BIV argument goes through if and only if its premises are more plausible than its conclusion. Which Moore famously denied by raising one of his hands and declaring “here is one hand.” But why, asks (reasonably, if irritatingly) the skeptic? To make a long story short, Moore’s counter to the BIV argument essentially reduces to simply asserting knowledge that one is not a BIV. Which pretty much begs the question against the skeptic.
Third branch in anti-skeptic conceptual space: the contextualist response. The basic intuition here is that what we mean by “know” (as in “I know that I have hands,” or “I don’t know that I’m not a BIV”) varies with the context, in the sense that the standards of evidence for claiming knowledge depend on the circumstances. This leads contextualists to distinguish between “low” and “high” standards situations. Most discussions of having or not having hands are low standards situations, where the hypothesis of a BIV does not need to be considered. It is only in high standards situations that the skeptical hypothesis becomes salient, and in those cases we truly do not know whether we have hands (because we do not know whether we are BIVs). This actually sounds most plausible to me (pretty high peak on the landscape?), though I would also like to see a principled account of what distinguishes low and high standards situations (unless the latter are, rather ad hoc, limited only to the skeptical scenario). Perhaps things are a bit more complicated, and there actually is a continuum of standards, and therefore a continuum of meanings of the word “know”?
Fourth: the ambiguity response. Here the strategy is to ask whether the skeptic, when he uses the word “know,” is referring to fallible or infallible knowledge. (This is actually rather similar to the contextualist response, though the argument takes off from a slightly different perspective, and I think is a bit more subtle and satisfying.) Once we make this distinction, it turns out that there are three versions of the BIV argument: the “mixed” one (“know” refers to infallible knowledge of the premises but to fallible knowledge of the conclusion), “high standards” (infallible knowledge is implied in both premises and conclusion), and “low standards” (fallible knowledge assumed in both instances). Once this unpacking is done, we have to agree that the mixed version is actually an instance of invalid reasoning, since it is based on an equivocation; the high-standards version is indeed sound, but pretty uninteresting (okay, we don’t have infallible knowledge concerning our hands, so what?); and the low-standards version is interesting but unsound (because we would have to admit to the bizarre situation of not having even fallible knowledge of our hands!).
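The equivocation at the heart of the ambiguity response becomes visible if we subscript the knowledge operator, writing K_i for infallible knowledge and K_f for fallible knowledge (the subscripted notation is mine, introduced for illustration; h and b abbreviate “I have hands” and “I am a BIV” as before):

```latex
% Mixed version: infallible K in the premises, fallible K in the conclusion
\[
\neg K_i\neg b,\quad (\neg K_i\neg b \rightarrow \neg K_i h)
\;\therefore\; \neg K_f h
\qquad \text{(invalid: equivocates on } K\text{)}
\]
% High-standards version: infallible K throughout
\[
\neg K_i\neg b,\quad (\neg K_i\neg b \rightarrow \neg K_i h)
\;\therefore\; \neg K_i h
\qquad \text{(valid and arguably sound, but uninteresting)}
\]
% Low-standards version: fallible K throughout
\[
\neg K_f\neg b,\quad (\neg K_f\neg b \rightarrow \neg K_f h)
\;\therefore\; \neg K_f h
\qquad \text{(valid but unsound: premise (1) arguably fails)}
\]
```

In the mixed version the premises establish something about K_i while the conclusion is about K_f, so nothing follows; in the low-standards version premise (1) is the weak link, since by ordinary fallible standards we plausibly do know we are not BIVs.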
Finally: the knowledge-that response, which is a type of evidentialist approach. The idea is to point out to the skeptic that the BIV argument is based on a number of highly questionable unstated premises, such as that it is possible to build a BIV, and that someone has actually developed the technology to do so, for instance. But we can deny these premises on grounds of implausibility, just like we would deny, say, the claim that someone has traveled through time via a wormhole on the grounds that we don’t have sufficient reasons to entertain the notions that time travel is possible and that someone has been able to implement it technologically. Yes, the skeptic can deny the analogy, but now the burden of proof seems to have shifted to the skeptic, who needs to explain why this is indeed a disanalogy.
Hopefully the above has allowed us to develop at least a general sense of the epistemological landscape and of how people have been exploring and refining it. It is now time to examine another area of philosophical inquiry which, in my mind, clearly makes progress.
 You can see how you can generate concept maps of different aspects of discussions about justification, and then proceed to connect distinct conceptual peaks on each map to positions on other maps with which they cohere. I suspect someone could turn this into a really nerdy video game…
 How are you doing with that concept map, so far?
 The Greek King Pyrrhus of Epirus was one of my favorite historical villains when I was in elementary school and studied ancient Roman history (the Romans, of course, were the good guys for someone growing up in the Eternal City). He did manage the then inconceivable feat of defeating the Roman legions in open battle, especially thanks to his innovation of bringing in elephants — monsters that were hitherto unknown to the Romans. But his victory at Asculum in 279 BCE cost him so many casualties that he reportedly remarked that one more such victory would be his undoing, hence the phrase “Pyrrhic victory.”
 Although an argument could be made that this is not the most charitable reading of Moore. One could read him instead as putting forth an evidentialist argument: we have evidence that we have hands, but no evidence that we are being deceived.
 I know, I know, this is beginning to sound rather Clintonesque. Then again, the former President of the United States did study philosophy at Oxford as a Rhodes Scholar…
BonJour, L. (1999) The Dialectic of Foundationalism and Coherentism. In J. Greco and E. Sosa (eds.), The Blackwell Guide to Epistemology. Blackwell, pp. 117–142.
Conee, E. and Feldman, R. (1985) Evidentialism. Philosophical Studies 48:15–35.
DeRose, K., and Warfield, T. (1999) Skepticism. A Contemporary Reader. Oxford University Press.
Descartes, R. (1637 / 2000) Discourse on Method and Related Writings. Penguin Classics.
DePaul, M. (ed.) (2001) Resurrecting Old-Fashioned Foundationalism. Rowman and Littlefield.
Greco, J. (1999) Agent Reliabilism. Philosophical Perspectives 13:273–296.
Kornblith, H. (2001) Epistemology: Internalism and Externalism. Blackwell.
Lipton, P. (2004) Inference to the Best Explanation, 2nd ed. Routledge.
Moore, G.E. (1959) Philosophical Papers. Allen and Unwin.
Steup, M. (1999) A Defense of Internalism. In: L.P. Pojman (ed.) The Theory of Knowledge: Classical and Contemporary Readings. Wadsworth, pp. 373–384.
Steup, M. (2005) Epistemology. Stanford Encyclopedia of Philosophy (accessed on 26 June 2012).