1. Joined
    24 Apr '05
    Moves
    3061
    27 Sep '11 20:09
    Originally posted by twhitehead
    Suppose P is in fact false. Suppose S believes P on evidence that is extremely good but not sufficient to render P certain. Can the fallibilist claim that S knows P?
    If P is false, then it is not the case that S knows P. That follows directly from the truth condition.

    I still do not really understand why you think there is some conflict between the truth condition and the fallibilist condition. Can you explain to me why I should think (1) and (2) below are inconsistent? If not, then what's the problem?

    (1) If S knows that P, then P is true.
    (2) S can know P without having certainty that P.
  2. bbarr
    Chief Justice
    Center of Contention
    Joined
    14 Jun '02
    Moves
    17381
    27 Sep '11 20:19 (2 edits)
    Originally posted by twhitehead
    Until I understand it. It's not making any sense to me.
    This is the basic analysis of knowledge:

    S knows P if and only if: (1) S believes P, (2) S is justified in believing P (Roughly, S believes P on the basis of R, a set of reasons to which S has access that indicate that P is very probable), (3) P is true, & (4) [the Gettier condition] Some appropriate causal/explanatory/counterfactual-sustaining relation obtains between the truth of P and the availability of R to S.

    If (2) is not met, then S's belief is nothing more than a lucky guess. If (3) is not met, then at most S has a really well justified belief. (4) is complicated, but the point is that S may have good reasons for believing P but those reasons may be "lucky", and unrelated to the truth of P.

    The fallibilist is claiming that (2), the justification condition, can be met even if R is insufficient to conclusively show or prove with certainty that P is true. That is, S can know P on the basis of R when R merely shows that P is very, very likely. Of course, P has to also be true for S to know P.
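    If it helps to see the structure, here is a toy Python sketch of that biconditional, under the assumption (mine, not bbarr's) that fallibilist justification can be modeled as an epistemic-probability threshold short of certainty. The 0.95 threshold and all names are illustrative only:

```python
# A toy encoding of the four-condition analysis above. The threshold
# (0.95) and all names here are illustrative assumptions: a fallibilist
# sets the bar below 1.0, an infallibilist demands exactly 1.0.

FALLIBILIST_THRESHOLD = 0.95

def justified(prob_on_evidence: float, threshold: float = FALLIBILIST_THRESHOLD) -> bool:
    """Condition (2): S's reasons R make P sufficiently probable."""
    return prob_on_evidence >= threshold

def knows(believes: bool, prob_on_evidence: float, is_true: bool,
          gettier_ok: bool, threshold: float = FALLIBILIST_THRESHOLD) -> bool:
    """S knows P iff conditions (1)-(4) all hold."""
    return (believes                                    # (1) belief
            and justified(prob_on_evidence, threshold)  # (2) justification
            and is_true                                 # (3) truth
            and gettier_ok)                             # (4) Gettier condition

# A well-justified belief that happens to be false is not knowledge:
print(knows(believes=True, prob_on_evidence=0.99, is_true=False, gettier_ok=True))  # False
```

    On this toy model, setting the threshold below 1.0 is exactly what lets condition (2) be met without certainty, while condition (3) still rules out knowing any false P.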
  3. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    27 Sep '11 20:21
    Originally posted by LemonJello
    If P is false, then it is not the case that S knows P. That follows directly from the truth condition.

    I still do not really understand why you think there is some conflict between the truth condition and the fallibilist condition. Can you explain to me why I should think (1) and (2) below are inconsistent? If not, then what's the problem?

    (1) If S knows that P, then P is true.
    (2) S can know P without having certainty that P.
    It just seems illogical. It also doesn't seem to agree with the lottery example, but maybe I need to go back and re-read that, as I must have missed something.
    Let's say there are 10 tickets in the lottery. I hold a ticket prior to the draw. I say I 'know' I won't win. Can a fallibilist say he 'knows' he won't win? Does he know he won't win, or must he wait for the result of the draw?
  4. Joined
    24 Apr '05
    Moves
    3061
    27 Sep '11 20:57 (1 edit)
    Originally posted by twhitehead
    It just seems illogical. It also doesn't seem to agree with the lottery example, but maybe I need to go back and re-read that, as I must have missed something.
    Let's say there are 10 tickets in the lottery. I hold a ticket prior to the draw. I say I 'know' I won't win. Can a fallibilist say he 'knows' he won't win? Does he know he won't win, or must he wait for the result of the draw?
    We can analyze this case with respect to the conditions laid out by bbarr in his post preceding this one of yours (and we can ignore the Gettier condition for our purposes here).

    So, in this case S draws 1 of 10 lottery tickets. We suppose S believes P, where P is the proposition that this ticket he has drawn will not win. That satisfies condition (1). Is condition (2), the justification condition, satisfied? This is where fallibilists and infallibilists (and even fallibilists and other fallibilists) will disagree. In this case, S believes P on the basis that his ticket has 0.9 probability of not winning from his epistemic situation, which is pretty high. An infallibilist would deny that this is good enough to meet condition (2). A fallibilist could also deny that it is good enough, but other fallibilists could say that it is good enough to meet condition (2). (In the lottery example, we can make n as large as we want, such that we can meet any fallibilist threshold for condition (2) to be met.) But even if conditions (1) and (2) are met, whether or not S knows that P still hinges on whether or not condition (3), the truth condition, is met. Regarding your question of waiting for the results or not, I suppose this issue is also debatable. Some will hold that claims about the future do not have determinate truth values, or some such, but that ushers in its own problems. But, barring this and related objections, there should already be a fact of the matter concerning whether or not S's ticket will or will not win, which again feeds into whether or not condition (3) is met.
  5. karoly aczel
    The Axe man
    Brisbane,QLD
    Joined
    11 Apr '09
    Moves
    102845
    28 Sep '11 07:55
    I've been enjoying this thread. Twhitehead has asked more or less the question I was going to ask.

    Bbarr - On that last page you were saying how our beliefs are related to the [outside] world we perceive, how we learn through our senses to cross-check/get more evidence for a certain belief.

    I suspect we get a lot of the outside-world stuff right [science], but fail when it comes to understanding others (other people).
    Whether we see other people as potential life-lesson givers (as I sometimes do) or just another person in the street, of no real interest (as I sometimes do), we always seem to misunderstand people.
    We can predict the weather much better.

    Do you know of this problem?
  6. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    28 Sep '11 10:12
    Originally posted by LemonJello
    But, barring this and related objections, there should already be a fact of the matter concerning whether or not S's ticket will or will not win, which again feeds into whether or not condition (3) is met.
    I still don't get it.
    Does S know his ticket won't win or not? If S holds all 10 tickets, can he claim to know, for each ticket, that it won't win? Clearly he will be wrong for one of those tickets. Why does one need a complicated conjunction formula? Why don't we simply say S is fallible? I.e., he will be wrong 1 out of 10 times.
  7. Joined
    24 Apr '05
    Moves
    3061
    28 Sep '11 16:16 (3 edits)
    Originally posted by twhitehead
    I still don't get it.
    Does S know his ticket won't win or not? If S holds all 10 tickets, can he claim to know, for each ticket, that it won't win? Clearly he will be wrong for one of those tickets. Why does one need a complicated conjunction formula? Why don't we simply say S is fallible? I.e., he will be wrong 1 out of 10 times.
    Does S know his ticket won't win or not?

    Not sure how else to put it: S knows P if and only if those four conditions outlined by bbarr are met. Just from the bare details of the hypothetical you put before me (that S, a fallibilist, picks a ticket in a 10-ticket lottery and claims to know that his ticket will not win), that is not enough information to tell whether in fact S knows that his ticket will not win.

    If S holds all 10 tickets, can he claim to know, for each ticket, that it won't win?

    If we suppose that, for example, 0.9 epistemic probability is good enough to meet the justification condition, then presumably S can justifiably claim to know, for each ticket, that this ticket will not win. Regardless, in that case he would still be mistaken in that claim for one of the tickets.

    Clearly he will be wrong for one of those tickets.

    Yes.

    Why does one need a complicated conjunction formula?

    Do you have a better analysis of knowledge? For example, can you offer reasons to believe that any of the four conditions outlined by bbarr is not necessary for knowledge? Can you offer reasons to believe that the four conditions outlined by bbarr are not collectively sufficient for knowledge? (By the way, by "conjunction formula" I do not know if you are talking about the analysis of knowledge outlined by bbarr on this page, or about the epistemic closure principle, or the idea of whether or not it follows that S can justifiably believe a conjunction if S justifiably believes each of the conjuncts, or some such. If you are talking about bbarr's earlier lottery example argument, I really think you still do not understand it: even if bbarr's argument is successful, you can still cling to fallibilism; but in that case you would need to jettison the epistemic closure principle that was assumed in the argument.)

    Why don't we simply say S is fallible? I.e., he will be wrong 1 out of 10 times.

    How would that answer the question about which you keep pressing me for an answer (the question of whether or not, in fact, S knows P in your hypothetical)?
  8. bbarr
    Chief Justice
    Center of Contention
    Joined
    14 Jun '02
    Moves
    17381
    28 Sep '11 20:30
    Originally posted by karoly aczel
    I've been enjoying this thread. Twhitehead has asked more or less the question I was going to ask.

    Bbarr - On that last page you were saying how our beliefs are related to the [outside] world we perceive, how we learn through our senses to cross-check/get more evidence for a certain belief.

    I suspect we get a lot of the outside/world st ...[text shortened]... misunderstand people.
    We can predict the weather much better.

    Do you know of this problem?
    I'm not sure I know to which problem you're referring. We can understand or fail to understand people in a host of ways. I can predict my friends' behavior much better than scientists can predict the weather. But sometimes I fail to understand my own motivations. Perhaps you could clarify your question?
  9. karoly aczel
    The Axe man
    Brisbane,QLD
    Joined
    11 Apr '09
    Moves
    102845
    28 Sep '11 20:53 (1 edit)
    Originally posted by bbarr
    Your problem here is that you're confusing the point of the truth condition with the point of the justification condition on knowledge. When philosophers aim to explicate the concept 'knowledge' or provide an analysis of it, they attempt to lay bare all the conditions that must be met in order for some relation between a subject and the world to count as an i ...[text shortened]... Knowledge does not require certainty, as you have claimed earlier in this very thread.
    Particularly the second paragraph here.

    I'm not sure where to put the "world" after having read this, because "world of people" and "world of nature" don't seem compatible.

    Even with your friend, some strange anomaly would probably happen if you continued to make friends who you could predict.

    For example, you may have a whole other group dislike you for apparently no rational reason.

    My example, I believe, is a common one.

    So, in short, I see people as the main problem for misrepresenting the world: for reasons either real or imagined, for better or for worse, we still fear each other.

    We/They represent the most unpredictable part of your "getting to know the world" problem. Agree? Or...
  10. karoly aczel
    The Axe man
    Brisbane,QLD
    Joined
    11 Apr '09
    Moves
    102845
    28 Sep '11 20:57
    Originally posted by bbarr
    I'm not sure I know to which problem you're referring. We can understand or fail to understand people in a host of ways. I can predict my friends' behavior much better than scientists can predict the weather. But sometimes I fail to understand my own motivations. Perhaps you could clarify your question?
    Sorry about my lack of clarity; I often follow your threads without being able to put forward anything coherent. That doesn't mean I'm not enjoying them immensely and learning something.
  11. bbarr
    Chief Justice
    Center of Contention
    Joined
    14 Jun '02
    Moves
    17381
    28 Sep '11 21:03
    Originally posted by karoly aczel
    Sorry about my lack of clarity; I often follow your threads without being able to put forward anything coherent. That doesn't mean I'm not enjoying them immensely and learning something.
    No problem. Let me think about your question for a bit. It's interesting, and I want to do it justice.
  12. bbarr
    Chief Justice
    Center of Contention
    Joined
    14 Jun '02
    Moves
    17381
    29 Sep '11 00:38
    Originally posted by karoly aczel
    Particularly the second paragraph here.

    I'm not sure where to put the "world" after having read this, because "world of people" and "world of nature" don't seem compatible.

    Even with your friend, some strange anomaly would probably happen if you continued to make friends who you could predict.

    For example, you may have a whole other group disl ...[text shortened]... t the most unpredictable part of your "getting to know the world" problem. Agree? Or...
    There is an old problem, called "The Problem of Other Minds", that specifically concerns our knowledge of other people. If there is a mind/world split, and I can only infer about the world based on the content of my conscious experiences, then it seems it would be even harder to infer about the content of the conscious states of others. I'd have to bridge the gap to the world, and then bridge it again to get at the inner world of others. I have to take the behavior of others as my evidence for conclusions about the content of their conscious states. And it is certainly possible (pace, for instance, 'inverted spectra' thought experiments and the like) that qualitative differences in the content of the respective conscious states of different persons may yield indistinguishable behaviors, or no distinct behaviors at all. I'm not at all concerned with these issues. They are fun, as philosophical puzzles, but I find them tedious after a while. This is one reason I changed specialties from epistemology to ethics.

    But I suspect this does not address your question fully. So, let's start here: What do you think it means to say that one person understands another?
  13. karoly aczel
    The Axe man
    Brisbane,QLD
    Joined
    11 Apr '09
    Moves
    102845
    29 Sep '11 00:57
    Originally posted by bbarr
    There is an old problem, called "The Problem of Other Minds", that specifically concerns our knowledge of other people. If there is a mind/world split, and I can only infer about the world based on the content of my conscious experiences, then it seems it would be even harder to infer about the content of the conscious states of others. I'd have to bridge th ...[text shortened]... let's start here: What do you think it means to say that one person understands another?
    Well, I suppose it means that you can predict their behaviour, to some degree.

    I like your line of enquiry; I think it has merit for understanding ourselves, but I do see linguistic problems, as everywhere.

    Still thinking...
  14. Hmmm . . .
    Joined
    19 Jan '04
    Moves
    22131
    29 Sep '11 04:04 (8 edits)
    Originally posted by bbarr
    There is some basic confusion in this conversation that I am having trouble pinning down. When I think I've diagnosed and addressed it, I get assurances that I'm being understood, but those assurances are then followed by the same confusions. What is going on here? Ideas?
    Wittgenstein in On Certainty? “I know that my name is N.” It seems strange to say “I know that” in, say, the context of casual conversation—e.g., are you trying to reassure yourself? Nevertheless, if I cannot properly say—given, e.g., the context of philosophical discussion—“I know that my name is N”, then what can I properly say that I know?

    Nevertheless, for reasons you pointed out above, I have no logical certainty that my name is N (e.g., pace one of your examples, I might be hallucinating). No logical contradiction is involved by my conceding the possibility that I might be wrong, however improbable that is.

    I’m going all from recall here, but it seems to me that W’s arguments were centered on what it means to say “I know”—or, following the PI, what is the use of that expression, and what language game are we playing. For example, I might be playing a language game of everyday practical discourse (wanting to know, as a matter of living my days, what I can know); T might be playing the language game of (describing the rule of) some applied mathematics (e.g., statistics); B might be playing the language game of formal epistemology. None of those language games are a priori normative or privileged with regard to the word “know” or “knowledge”. Nor can one insist on a particular language game; one can say that, within the context of this language game, “to know” is used to describe this state of affairs with these entailments, etc. If we cannot agree (or we are not all able) to speak in the same language game, then nothing more can be (usefully) said.*

    My understanding of W’s point in OC is that there are certain statements—such as “My name is N” or “I did not lunch yesterday in Peking”—such that, stated with the explicit or implicit “I know”, they either properly count as knowledge (JTB), or [at least any practical?] epistemology is just undermined (in the face of, e.g., a Pyrrhonian skepticism). And yet, such statements do not meet the criteria of infallibility. (It is not impossible that I had lunch in Peking yesterday, perhaps under some weird "Manchurian Candidate" scenario.)

    This is all, needless to say, a non-formal attempt to understand the arguments, and is not offered as an argument itself.

    ________________________________________________

    * Someone asks: “How does one cook rice?”. Two chefs—one French and one Spanish—begin to quarrel over the answer (the French chef just does not understand paella!!). Each presumes that her own “language game” with regard to the terms “how” and “cook” and “rice” is normative. Suppose one were then to ask them: “How do you know?” The answer may well involve a good amount of rule-following within the context of this or that cooking game. I'm not sure that (the later) Wittgenstein would view formal epistemology as any less a game than the cooking game. The point is to describe the games, and their limits.

    In the game of everyday discourse, if someone says "I know", how much "fallibility" do I assume? If I assume that they might be wrong (with, say, some notion of probability), then I might well say: "Well, you can't really know that!" What place does a discussion of JTB and fallibilism have in that language game?

    In another game, suppose someone claims to know the solution to a particular mathematical problem: in that kind of game, is not certainty, in terms of a demonstrable proof, generally expected? (E.g., "I know the solution to Fermat's last theorem." )
  15. Joined
    24 Apr '05
    Moves
    3061
    29 Sep '11 09:13
    Originally posted by twhitehead
    It just seems illogical. It also doesn't seem to agree with the lottery example, but maybe I need to go back and re-read that, as I must have missed something.
    Let's say there are 10 tickets in the lottery. I hold a ticket prior to the draw. I say I 'know' I won't win. Can a fallibilist say he 'knows' he won't win? Does he know he won't win, or must he wait for the result of the draw?
    twhitehead, I believe we are probably still talking past each other. I have read through the thread pretty carefully, starting from where you first objected to bbarr's lottery argument. I think the confusion lies in one or two (or both) things. Maybe if I attempt to clarify those points, it will help the discussion.

    First, it seems clear to me that you have been treating 'fallibilism' as some empirical or descriptive thesis about how humans are prone to err in their judgments or knowledge claims. For instance, you stated the following:

    All [bbarr's lottery argument] seems to do is demonstrate the fact that a fallibilist should fully expect some of the things he claims to 'know' are in fact false. But surely that is why he calls himself a fallibilist? He knows he is fallible.

    But, no, epistemologists who are fallibilists do not call themselves fallibilists because they are aware that they are imperfect in their judgments. Again, 'fallibilism' in the sense employed in bbarr's arguments is not an empirical thesis about how persons are disposed to epistemic error; rather, it is a theoretical idea about justification, relating to condition (2), the justification condition, outlined by bbarr in his post regarding the basic analysis of knowledge. Basically, a fallibilist would call himself a fallibilist because he has considered questions surrounding justification and he thinks that S can be justified in believing P even when S's reasons for believing P are not sufficient to guarantee the truth of P (say, when S's reasons suggest it is highly probable, but not certain, that P is true). It should be clear that your construal here does not capture what divides fallibilists from infallibilists, since the descriptive claim that persons can be mistaken in what they claim to know is hardly contentious. Fallibilists and infallibilists will both readily agree with this.

    Again, bbarr's lottery argument is basically as follows. Under fallibilism, S is justified in claiming to know, for each single ticket, that this individual ticket will not win, since this is overwhelmingly probable for each single ticket (though not certain). But S could not be justified in claiming to know that no single ticket will win, since obviously some ticket has to win based on how this lottery operates. So under fallibilism, S is justified in claiming to know each Pn, but cannot be justified in claiming to know something that is entailed by the conjunction of all these Pn. But this directly conflicts with a closure principle that holds one can justifiably claim to know that which is entailed by things she justifiably claims to know. So, either something is wrong with fallibilism (again, construed as a theoretical thesis regarding justification) or something is wrong with the described closure principle.
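    For what it's worth, the tension can be made numerically vivid under a toy threshold model of justification (an assumption of mine for illustration, not part of bbarr's argument): take 'justified' to mean 'epistemic probability at least t', with t = 0.9:

```python
# Toy threshold model of justification, purely for illustration: S is
# justified in believing a claim iff its epistemic probability is at
# least t. With t = 0.9, each "ticket i will not win" is justified, but
# the conjunction "no ticket will win" has probability 0 (some ticket
# must win), so it is not justified, even though the individual claims
# jointly entail it. Closure fails in this model.

n = 10
t = 0.9                       # assumed justification threshold

per_ticket = (n - 1) / n      # P(ticket i loses) = 0.9
conjunction = 0.0             # P(no ticket wins) = 0 in this lottery

justified_each = per_ticket >= t          # True
justified_conjunction = conjunction >= t  # False

print(justified_each, justified_conjunction)  # True False
```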

    Secondly, if the above does not hit on the confusion in our discussion, then I have another idea where the confusion may lie. I had mentioned to you before (top of page 12) that the following would be in line with a fallibilist construal: "I can still know that P even when I know that it is highly unlikely but still possible that P is false." I still stand by my statements there, but I do concede that these types of statements can be more confusing than they are helpful. Generally, I think these are related to what are known as "concessive knowledge attributions". A concessive knowledge attribution is a statement with a form like "I know that P, but it is possible that Q", where Q entails not-P. These statements can seem very awkward to many, and I also believe they are prone to be misunderstood.

    My concern here is that you would take a statement of the form "S knows that P, but it is possible that not-P" to mean something like "It is possible that both (1) S knows that P and (2) P is false." And then this will contradict what I was calling the truth condition (basically the condition that only true propositions can be known). This may be the reason why you cry foul when I conjoin fallibilism with the truth condition. But this reading of the concessive knowledge attribution is not correct. The concessive knowledge attribution does not conflict with the truth condition; it is talking about epistemic possibility. For instance, when it says that not-P is possible (for S), it is basically reporting the fact that S's reasons for believing P are not sufficient to guarantee that P is the case. In other words, let's say you believe P on the basis of reasons or evidence that show P is highly probable but do not entail P or render P certain. Then, we could say that not-P is epistemically possible for you, since your basis for believing P does not totally exclude not-P as a possibility. Suppose further that P is in fact the case (thus satisfying the truth condition). This does not change the fact that your basis for belief is still just as described; that is, the fact that P really is the case does not change the fact that not-P is epistemically possible for you. Given all this under fallibilism, then, we could say that you know P even though not-P is possible for you. But, again, this should not be read as though it is possible both that you know P and P is false! So, concessive knowledge attributions may sound weird, but they do not conflict with the truth condition.

    I hope this helps or clarifies our discussion somewhat. 😕