# Why the "Problem of Induction" really isn't a problem. (And why theists don't even get it right)

## What is Inductive Logic?

Gregory Lopez and Chris Smith

We can define any type of logic as a formal, axiomatic (a priori) system employed in reasoning. In general, if we feed in true propositions and follow the rules of the particular system, the logic will crank out true conclusions.

We can define "induction" as a thought process that moves from particular observations of real-world phenomena to general rules about all similar types of phenomena (a posteriori). We hold that the rules we generate are probably, but not certainly, true, because such claims are not tautologies.

**Inductive logic**, therefore, is a formal system distinguished from deductive logic in that the premises we feed into its arguments are not categories, definitions, or equalities, but observations of the real, a posteriori world. Inductive logic is the reasoning we do every day while working in the real world, i.e. the probabilities we deal with while making judgments about the world. We can think of it as learning from experience and applying our prior experiences to new, but similar, situations.

**History**

Inductive logic is basically a form of probability. While human beings have used intuitive forms of inductive reasoning throughout history, probability theory was first formalized in 1654 by the mathematicians Pascal and Fermat, during their correspondence over a game of dice! In their attempts to understand the game, they created a set of frequencies, or possibilities, describing the likelihood of particular rolls of the dice. In doing this, they accidentally set down the basics of probability theory.

It was only a short time later, in 1748, that someone noticed a problem in probability theory: it included the presumption that the future would be just like the past, yet this assumption could not in and of itself provide a sufficient condition for justifying induction, seeing as there is no valid logical connection between a collection of past experiences and what will be the case in the future. Hume's "An Enquiry Concerning Human Understanding" is noted, even today, for pointing out this problem, the "problem of induction". However, few realize that a solution to the problem appeared only a few years later: in 1763, Thomas Bayes presented a theorem that, unbeknownst to him, could be used to provide a logical connection between the past and the future in order to account for induction. More recently, Kolmogorov (1933) axiomatized probability theory, which means that he gave probability theory an axiomatic foundation. Induction, therefore, while a probabilistic enterprise, is founded on a deductive system:

The three axioms of formalized probability theory:

1. The probability of any proposition falls between 0 and 1.

2. Propositions that are certain (tautologies) have a probability of 1.

3. When two propositions cannot both be true (no overlap), P(P or Q) = P(P) + P(Q).

and the definition of conditional probability:

P(P|Q) = P(P & Q) / P(Q)

If you accept these axioms, you must accept Bayes' Theorem: it follows logically from the axioms.
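To see how Bayes' Theorem falls out of the definition of conditional probability, here is a minimal Python sketch (the numbers are arbitrary, chosen only for illustration):

```python
# Arbitrary illustrative numbers for a joint distribution over H and E.
p_h_and_e = 0.12   # P(H & E)
p_h = 0.30         # P(H)
p_e = 0.40         # P(E)

# The definition of conditional probability, applied in both directions:
p_h_given_e = p_h_and_e / p_e   # P(H|E) = P(H & E) / P(E)
p_e_given_h = p_h_and_e / p_h   # P(E|H) = P(H & E) / P(H)

# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E).
bayes = p_e_given_h * p_h / p_e

# The two routes agree, as the theorem guarantees.
assert abs(p_h_given_e - bayes) < 1e-12
print(round(p_h_given_e, 2))
```

Substituting the definition of P(E|H) into Bayes' Theorem and cancelling P(H) recovers the definition of P(H|E), which is all the "derivation" amounts to.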

These are the key points in the history of induction as far as its formal origins and formal supports go. I will cover these points in more detail below. But first, let's look at the different types of inductive logic.

**Types of Inductive Logic**

Let's do a brief review of some kinds of inductive logic.

**Argument from analogy**. This occurs when we compare two phenomena based on traits that they share. For example, we might hold that object A shares the traits w, x, and y with object B; therefore, object A might also share other qualities of object B.

**Statistical syllogism**. This form of inductive logic is similar to the argument from analogy. The form of the argument is: X% of A are B; therefore, the probability that a given A is B is X%.

Example: 3% of smokers eventually contract lung cancer. John Doe is a smoker, therefore, he has a 3% chance of contracting lung cancer.

**Generalization from sample to population**. The best example of this form of inductive logic is a poll. Polls rely on random samples that are representative of a group by virtue of their random selection (i.e. the fact that every person had the same chance of being chosen for the sample).
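A quick Python sketch of this idea (not from the original article; the population and its 54% support figure are invented): a random sample's proportion tracks the population proportion.

```python
import random

random.seed(1)

# Hypothetical population in which 54% support candidate A.
population_support = 0.54
sample_size = 1000

# Random sampling: every member has the same chance of being drawn.
sample = [random.random() < population_support for _ in range(sample_size)]
estimate = sum(sample) / sample_size

# The sample proportion estimates the population proportion,
# typically within a few percentage points for n = 1000.
print(round(estimate, 3))
```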

On my website, I will also discuss John Stuart Mill's methods of causal reasoning. For now, let's return to the aforementioned "problem of induction" and take a deeper look at both the problem and some solutions to it.

**The problem of induction**

You've probably heard of Hume's famous "problem of induction":

How do we know that the future will be like the past?

Or... more comedically

How do we know that the future will continue to be as it always has been?!

Consider the following example: we observe two billiard balls interact. From this, we observe that they appear to obey a physical law that can be expressed in the formula F = ma (force equals mass times acceleration). From this observation, we then generate a general law of force. However, a problem then arises: how can we hold that this law will really apply to all similar situations in the future? How can we justify that this will always be the case?

If we argue that "we can know this, because the balls have always acted this way in the past," we are not really answering the question, for the question asks how we know that the balls will act this way in the future. Of course, we can then insist that the future will be just like the past, but this is the very question under consideration! We might then insist that there is a uniformity of nature that allows us to deduce our conclusion. But how do we know that nature is uniform? Because in the past it always seemed so? Again, we are simply assuming what we seek to prove.

So it turns out that this defense is circular: we assume what we seek to justify in the first place, that the future will be like the past. So this argument fails to provide a justification for induction.

But this in itself is not the whole story, in fact, if we stop here, we get the story all wrong. You see, the 'uniformity of nature' is in fact a necessary condition for induction but it could never be a sufficient justification of inductive inference anyway. The actual problem of induction is more than this: it is the claim that there is no valid logical "connection" between a collection of past experiences and what will be the case in the future. The classic "white swans" example serves: the fact that every swan you've seen in the past was white means simply that: every swan you've seen has been white. There is no logical "therefore" to bridge the connection "all the swans I've seen are white" to "all swans are white" or "the next swan I encounter will be white".

So, yes, induction presupposes the uniformity of nature, but while this is a necessary condition for induction, the uniformity of nature is not sufficient to justify inductive inferences epistemologically. So any attempt to solve the problem by shoring up the 'uniformity of nature' will never work to begin with. When the next swan turns out to be black, it shows that your statement "all swans are white" had no actual "knowledge" content. What you've done is presuppose nature to be uniform, but not in fact justify any particular inductive inference you may wish to make.

So, solving the 'problem' of induction is more than just trying to find a way out of the 'circle' of the uniformity of nature justifying induction. There is a problem that needs a solution. Interestingly, many critics seem to believe that the story ends here: that there simply is a problem, and that all solutions are merely circular. But this is untrue. There are responses to the problem.

Since it was Hume who first uncovered this problem, let's begin by looking at his response:

**David Hume's Response: This assumption is a 'habit'**

Hume's answer was that we have little choice but to assume that the future will be like the past; in other words, it is a habit born of necessity - we'd starve without it! And, given that there is nothing contradictory, logically impossible, or *irrational* about holding to the assumption, the utility of induction was seen to support the assumption on a **pragmatic basis**. This is a key point lost on many people: there is nothing illogical or irrational about assuming that induction works, nor are there any rational grounds for holding that 'induction is untrustworthy'. The fact that I cannot be absolutely certain that the sun will rise tomorrow does not give me any justification for holding that it will not rise tomorrow! This error is called the fallacy of arguing from inductive uncertainty.

But merely holding that an assumption is 'not irrational' is not a satisfying enough answer for many. Hume himself stated: "As an agent I am satisfied but as a philosopher I am still curious." So let's continue our search for an answer to the problem.

**What is the Basis for Inductive Logic? An Examination of Probability Theory**

Curiously, the axiomatic foundations of inductive logic only tell us how a probability behaves, not what it is. So let's begin our examination by first defining what we actually mean by the word "probability".

Three common definitions:

**Classical** - the classical definition describes probability in terms of a set of possible outcomes that are all 'equally likely'. But a problem arises from this definition: how do you define "possibility" in a univocal manner? Is an outcome 50/50 (either it happens or it does not), or is it actually 1/10, or 1/100? In many cases there are plausible reasons for each choice. So let's look at another definition.

**Frequentist** - the 'frequency' is the probability for a given event, determined as you approach an infinite number of trials. For example, you could learn what the probability of rolling a 7 on a pair of dice might be by rolling them for a large number of trials. This is the most popular definition, including in science and medicine. The view is backed up by a theorem deduced within axiomatic probability theory (based on infinite sequences of trials, like coin flips): the **law of large numbers**. The observed frequency converges to the probability as the number of trials goes to infinity. But there are problems here as well: does the limit actually exist? Do we ever really know a probability, since we can't run trials infinitely? Also, this method gives us very counterintuitive interpretations. For example, consider a 95% confidence interval: often this is read to mean that 1 out of every 20 such studies is in error. In actuality, what it means is that if the experiment were repeated indefinitely, 95% of the intervals so constructed would capture the real mean. This is hardly what people think when they read a poll.

Finally, we can't apply this method to singular cases. 'One-case probabilities' are "nonsense" to the frequentist. How do we work out the probability of the meteor strike that killed the dinosaurs? We can't repeat that experiment infinitely; we can't repeat it even once! We see the same problem with creationist arguments that attempt to assign a probability to our universe.
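The frequentist picture can be sketched in a few lines of Python (a hypothetical simulation, not from the original article): the relative frequency of rolling a 7 with two dice drifts toward the theoretical 6/36, about 0.167, as trials accumulate.

```python
import random

random.seed(0)

def rolled_seven() -> bool:
    """Roll two fair dice; report whether they sum to 7."""
    return random.randint(1, 6) + random.randint(1, 6) == 7

# Theoretical probability of a 7: 6 of the 36 equally likely outcomes.
for trials in (100, 10_000, 1_000_000):
    freq = sum(rolled_seven() for _ in range(trials)) / trials
    print(trials, round(freq, 4))
```

Note that the code only ever exhibits convergence for finite runs; the frequentist's "limit at infinity" remains an idealization, which is exactly the objection raised above.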

**Subjective probability** - Here, probability is held to be the degree of belief in an event, fact, or proposition. Look at the benefits of this model: 1) we can more carefully assign a probability to a given situation; 2) we can apply this method to 'one-case' events; 3) this manner of defining probability gives us *very natural and intuitive* interpretations of events that fit with our everyday use of the word "probably", circumventing the problems of frequentism.

MOST IMPORTANTLY: it allows us to rationally adjust our beliefs "inductively" by use of probability theory, which is a mathematically **deduced theory**, so we can latch our beliefs onto a deductive axiomatic system. Here then, for many, is the solution to Hume's "problem": induction is no longer merely "not irrational", but instead can be seen as resting upon a firm deductive foundation.

**How does it work?**

How do you get a 'number', or probability, for subjective probability? Let's use the concept of wagering: what would you consider to be a fair bet for a particular outcome? Is X more probable, in your view, than getting Y heads in a row? In brief, this is how the method works.
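A minimal sketch of the wagering idea, with made-up stakes: if you regard risking 3 units to win 1 as a fair bet on some outcome, your implied degree of belief is stake / (stake + payout).

```python
def implied_probability(stake: float, payout: float) -> float:
    """Degree of belief implied by treating 'risk stake to win payout' as fair."""
    return stake / (stake + payout)

# Hypothetical: you consider risking $3 to win $1 a fair bet on rain.
print(implied_probability(3, 1))  # 0.75
```

A bet is "fair" when its expected value is zero: p * payout - (1 - p) * stake = 0, which rearranges to the formula above.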

Subjective probability and frequency are linked by the "Principal Principle" (David Lewis) or Ian Hacking's "Frequency Principle". Subjective probability is justified by a reductio argument: if your subjective probabilities don't match the known frequency, and you know nothing else relevant, you have no grounds for your belief.

A question may arise: how can we reason anything if probability is subjective? Well, it is true that you can choose any starting point you desire; HOWEVER, your choices must follow the laws of probability, or else you're susceptible to 'Dutch Book arguments': if your degrees of belief **don't follow the laws of probability**, you are being inconsistent and incoherent. You can choose to believe what you want, but at the risk of being incoherent. The beauty of this method is that the starting point is not necessarily very important: given differing starting probabilities, based on different subjective evaluations, two very different people who are shown enough of the same evidence will have their probabilities converge to the same value - by probability theory, their beliefs will converge!
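This convergence of beliefs can be illustrated with a small Python sketch (all numbers hypothetical): two observers start with wildly different priors about whether a coin is biased toward heads, yet after seeing the same run of flips their posteriors nearly coincide.

```python
import random

random.seed(42)

# Hypothetical setup: the coin is either fair (P(heads) = 0.5) or
# biased (P(heads) = 0.8); it is, in fact, biased.
def update(prior_biased: float, heads: bool) -> float:
    """One Bayesian update of the probability that the coin is biased."""
    likelihood_biased = 0.8 if heads else 0.2
    likelihood_fair = 0.5
    numer = likelihood_biased * prior_biased
    return numer / (numer + likelihood_fair * (1.0 - prior_biased))

skeptic, believer = 0.01, 0.99   # wildly different starting priors
flips = [random.random() < 0.8 for _ in range(500)]

for heads in flips:
    skeptic = update(skeptic, heads)
    believer = update(believer, heads)

# Both posteriors end up essentially equal (and close to 1).
print(round(skeptic, 4), round(believer, 4))
```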

Being a subjectivist who wants to use probability as a basis for induction leads us to focus on a certain way of doing things, using Bayes' Theorem.

**BAYES' THEOREM**

The simplest form of Bayes' Theorem is:

P(H|E) = P(E|H)P(H) / P(E) (Eqn. 1)

where:

H is the *hypothesis*. This is a falsifiable claim you have about some phenomenon in the real world.

E is the *evidence*. This is the reason or justification you have for holding to the hypothesis. It is your grounds.

P(E|H) is called the *likelihood*: the probability of E given H. In other words, it is the probability that the evidence would occur if the hypothesis were true.

P(H) is called the *prior*, or prior probability of H. It is the probability of the hypothesis being true without taking additional evidence into consideration. In other words, it is an unconditional probability. When I call something "the prior" without qualification, I mean *this* probability.

P(E) is called the *prior probability of the evidence* E. It is the probability of E occurring regardless of whether H is true. This probability can be broken down further into the *partition*, as explained below.

The denominator of Eqn. 1 can be broken down as:

P(E) = P(E|H)P(H) + P(E|~H)P(~H)

or, more generally:

P(E) = Σi P(E|Hi)P(Hi)

where ~H is the complement of H, AKA not-H, and Σ is the sum over all mutually exclusive, exhaustive hypotheses. This is sometimes called the partition. The first form is used when one is only considering whether a hypothesis H is true or false. The second form is more general, and holds for several competing hypotheses.

Plugging these into Eqn. 1 yields either:

P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]

which is useful when considering one hypothesis, either true or false: the denominator weighs the probability of the evidence given the hypothesis against its probability given the hypothesis's negation. Or it yields:

P(Hk|E) = P(E|Hk)P(Hk) / Σi P(E|Hi)P(Hi)

which is useful when considering how some evidence supports several competing hypotheses.
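As a worked illustration of the true-or-false form (with made-up numbers in the style of the classic diagnostic-test example):

```python
# Hypothetical numbers: a condition with 1% prevalence and a test with a
# 95% true-positive rate and a 5% false-positive rate.
p_h = 0.01              # prior P(H): person has the condition
p_e_given_h = 0.95      # likelihood P(E|H): positive test if sick
p_e_given_not_h = 0.05  # P(E|~H): positive test if healthy

numer = p_e_given_h * p_h
denom = numer + p_e_given_not_h * (1.0 - p_h)
posterior = numer / denom   # P(H|E), via the two-hypothesis partition

# Despite the "95% accurate" test, the posterior is only about 16%,
# because the low prior dominates.
print(round(posterior, 3))
```

This shows why the prior matters: the evidence shifts belief, but it shifts it *from* wherever you started.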

This, in a nutshell, is a possible foundation for inductive logic. For more on this concept, see Wikipedia's entry on Bayesian inference.

Some notes on Bayes himself:

Rev. Bayes may have (though not definitely) disagreed with "subjective probability". He derived his equation in order to answer an odd problem, which runs briefly (from memory - no resources with me right now) as follows: you have a pool table of a known size. You draw a line across it parallel to one of the edges (I forget if it's the long or short edge), but you don't know where along the table the line is drawn. Now, you place a billiard ball on the table "at random" (equal probability of it being anywhere on the table), and each time you do so you get a yes-or-no answer to the question: "is the ball to the left of the line?". Repeat this process a few times. From this problem, Bayes derived his equation and used it to find the probability that the line is drawn at distance X from one side of the table.
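For the curious, Bayes' billiard problem can be sketched by simulation (a rough Monte Carlo illustration with invented observations, positions scaled to the unit interval):

```python
import random

random.seed(7)

# Invented yes/no answers to "is the ball left of the line?" for 5 balls.
observations = [True, True, False, True, False]

# Rejection sampling: guess a line position uniformly, simulate the balls,
# and keep the guess only if it reproduces the observed answers exactly.
accepted = []
for _ in range(200_000):
    line = random.random()
    simulated = [random.random() < line for _ in observations]
    if simulated == observations:
        accepted.append(line)

# The kept guesses approximate the posterior for the line's position.
# With 3 "left" answers out of 5, theory gives a posterior mean of
# (3 + 1) / (5 + 2) = 4/7, roughly 0.57.
print(round(sum(accepted) / len(accepted), 2))
```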

So, while Bayes' theorem can be called upon to solve the problem of induction, Bayes himself wasn't really concerned with induction. He laid the mathematical foundations, however, for it to be "solved" (many people today still say that Bayesianism isn't really a solution but a circumvention of the problem of induction - a very technical point - and some object to Bayesianism altogether). The mathematician **Pierre Laplace** was the one who took up subjective probability and ran with it: he calculated the probability of the mass of a planet with it, and even calculated the probability that the sun would in fact rise tomorrow. There were, however, fatal flaws in his argument, which led subjective probability to be all but abandoned. The frequentists took up the ball and ran with it, until the mathematician Bruno de Finetti picked up Laplace's torch, leading to "Bayesianism" almost as we know it today.

**Conclusion**

Lopez believes that both classical and Bayesian statistics answer the problem of induction, as both are founded on a priori deductive systems. Thus, he ultimately believes that the problem of induction is only a problem if one wishes to find certainty in a belief, and nothing more; such a demand completely discounts degrees of belief.

Degrees of belief are most directly addressed by the Bayesian view. However, the frequentist interpretation still has some power against the problem of induction, in my view, as well.

Two further notes:

As already stated above, Christian presuppositionalists often state the problem of induction incorrectly, confusing it with the assumption of a uniformity of nature, an error made even more comical when one considers that their solution is an assumption of the uniformity of "God"!

However, they commit yet another serious blunder: it is a mistake to hold that a failure to provide an adequate justification for induction leaves us without any grounds to rely on induction other than 'faith'. The fact that one cannot prove something to be correct doesn't imply that one cannot know it to be correct. A child is unable to prove his name; does this mean he does not know it? Knowledge and proof are two different philosophical concepts. The problem of induction relates to philosophical justification.

In short - no matter how one ultimately slices it, the mathematics of probability and statistics ultimately does away with the problem of induction - Bayesian or not.

**More Comments on the Problem**

Quite frequently I encounter people who equate lack of certitude with giant inferential leaps. Science deals with probabilities, often quite high probabilities, but not certitudes. This is one of the strengths of the scientific method, as it acknowledges a chance of error (while maintaining rigorous standards to establish provisional acceptance of propositions). "It is a mistake to believe that a science consists in nothing but conclusively proved propositions, and it is unjust to demand that it should. It is a demand only made by those who feel a craving for authority in some form and a need to replace the religious catechism by something else, even if it be a scientific one. Science in its catechism has but few apodictic precepts; it consists mainly of statements which it has developed to varying degrees of probability. The capacity to be content with these approximations to certainty and the ability to carry on constructive work despite the lack of final confirmation are actually a mark of the scientific habit of mind." -- Sigmund Freud

Usually when people talk about how induction is "flawed," they mean that it's not truth-preserving like deduction: you don't get certainty from true premises. I.e., holding an inductive claim as if it were a series of equivalencies is an error.

I think that the problem of induction is only a problem because: a) some people look for certainty in it, and b) historically, the problem arose before probability theory was mature. If you don't look for certainty, and you know about modern probability and statistics, the problem of induction is not a problem at all. The whole (deductively-created) theory of probability and statistics is dedicated to telling us something about "populations" from "samples." It's *made* for induction.

Another possible solution: Can we assume that nature has a Uniformity?

As already mentioned, the assumption of a uniformity of nature is a necessary but not a sufficient condition for building inferences from the past to the future. So the assumption is not only circular, it fails to provide a justification for such inferences. In addition, Howson & Urbach point out that assuming a uniformity of nature is doubly a non-solution, since it's a fairly empty assumption. For how is nature uniform? And what, really, are we talking about? What would really be needed are millions upon millions of uniformity assumptions, one for each item under discussion. We'd need one for the melting temperature of water, of iron, of nickel, etc., e.g. "block of ice x will melt at 0 degrees Celsius," for only these kinds of assumptions actually say something. Furthermore, uniformity-of-nature assumptions fall prey to meta-uniformity issues: how are we to know that nature will always be uniform? Well, we have to assume that too. And how do we know that the uniformity of nature is uniform? Ad infinitum. So, "solving" the philosophical problem of justifying induction by uniformity-of-nature solutions doesn't really work.


## Ummm - yeah badly put, me.

Atheistextremist wrote:I was thinking along the lines of people who run stop signs/including me in the past can get booked/have been booked. I make the decision to stop on the momentary assumption I will get booked if I run any stop.

I think I've manufactured part-to-whole induction - finally - and in the process completely changed the subject. Let's just move on, shall we.

"Experiments are the only means of knowledge at our disposal. The rest is poetry, imagination." Max Planck

## It's just as well

It's just as well considering none of you can explain or substantiate the original post anyway. Where is todangst anyway?

-----

"The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo's doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism." -Paul Feyerabend

"Let me just anticipate that nobody to date has found a demarcation criteria according to which Darwin can be described as scientific, but this is exactly what we are looking for." -Imre Lakatos

## You mean except for the ones

You mean except for the ones who have done so but gave you answers you didn't like?

"I do this real moron thing, and it's called thinking. And apparently I'm not a very good American because I like to form my own opinions."

— George Carlin

## Try try try try!!!

XaosPeru wrote:Did you ever read the date on the original post from Todangst? 2 and a half years ago. And also did you ever read the Bible; I think NOT.

"Very funny Scotty; now beam down our clothes."

VEGETARIAN: Ancient Hindu word for "lousy hunter"

If man was formed from dirt, why is there still dirt?

## Hi Xaos

XaosPeru wrote:Do you believe that generalising your way to a proposition is useful at any time or can direct us towards a possible true conclusion? Or is induction simply worthless in your opinion?


## XaosPeru wrote:Is it really

Oh, you responded to my example. I didn't even notice, sorry.

XaosPeru wrote:But, that's just it. I agree with you.

We never "prove" that the hypothesis is true, that Christina is a man. The premises don't necessarily lead to the conclusion that Christina is a man.

We simply gradually increase the, for lack of a better term, pragmatic validity of the hypothesis. I mean, suppose that we perform 1,000 more experiments with Christina to determine whether or not Christina is a man, and on every one of those experiments, we get exactly the results that we would get if Christina were a man. We still don't "know" that Christina is a man because perhaps there is some other entity that would also provide the same results to those 1,000 experiments.

But, if the question of whether or not Christina is a man becomes an issue for some real world problem, it is simply practical to assume that Christina is a man, because assuming that it is true is useful.

Todangst wrote:Our revels now are ended. These our actors, | As I foretold you, were all spirits, and | Are melted into air, into thin air; | And, like the baseless fabric of this vision, | The cloud-capped towers, the gorgeous palaces, | The solemn temples, the great globe itself, - Yea, all which it inherit, shall dissolve, | And, like this insubstantial pageant faded, | Leave not a rack behind. We are such stuff | As dreams are made on, and our little life | Is rounded with a sleep. - Shakespeare

## XaosPeru wrote:It's just as

XaosPeru wrote:Todangst hasn't posted on RRS in quite a while.

## Atheistextremist

Atheistextremist wrote:Imagine we live in an area where a lot of ravens are and we see three or four black ravens. Then you say, "Hey, it seems like all ravens are black." You might be right. That forms the starting place for investigation if we are so inclined.

My problem starts when someone collects a thousand black animals and observes that some of them are ravens and feels that the theory has been supported (either inductively or probabilistically). Obviously by collecting only BLACK animals you will never falsify the theory so finding additional black ravens using this method means nothing.

A lesser objection arises when someone collects a thousand green animals and observes that since none of them are ravens that the theory has been supported. Although true, someone else who is in possession of a white raven would find the 1000 green animals support for his theory that all ravens are white.

This question (known as the Raven paradox) is not without suggested resolutions. Wikipedia is filled with them. A quote from one of them is, "This result is completely devastating to the inductive interpretation of the calculus of probability. All probabilistic support is purely deductive: that part of a hypothesis that is not deductively entailed by the evidence is always strongly countersupported by the evidence ... There is such a thing as probabilistic support; there might even be such a thing as inductive support (though we hardly think so). But the calculus of probability reveals that probabilistic support cannot be inductive support." Now personally I think the above statement violates the NPOV policy of Wikipedia, but setting that aside it seems to indicate that Popper had already considered the probabilistic angle and discarded it (although I don't understand the explanation included for why). However, I've begun watching statistics and probability videos from Khan Academy so we'll see if I can't understand it better after a few more sessions.


## XaosPeru wrote:Although

XaosPeru wrote:Alright, I'm going to jump in here. But for disclaimer I am neither a scientist nor mathematician (despite my mother's best efforts) but I know cards as well as anyone. And your calculation is inductive because you are using incomplete information for the calculation. You only know your own cards, so you are calculating the odds that your LHO has the King. How is that any different than someone with incomplete information calculating the odds of there being a white raven? Even if your LHO only had 1 unknown card, that card could still be the King. Even if you saw every raven in the world except one, that one could be white.

Now with the card example, if you were cheating with a mirror and could see all of your LHO's cards except one and none of them were the King, your best play would be the Ace, because the odds of your LHO holding the King are substantially lower than the odds of your RHO holding the King. It is still possible that your LHO has the King, just unlikely. So when your LHO pulls out the King you experience the weakness of induction. But given the fact that your information will ALWAYS be incomplete, how do you determine which card to play without using induction? Flipping a coin in that situation would result in far more losses; I am willing to bet a lot of money on that. If your LHO has one unknown card while your RHO has seven, it is reasonable to believe your RHO probably has the King until proven otherwise.

Similarly, with the ravens, if you saw a million ravens and they were all black it would be reasonable to bet that the next one you see will also be black. If you and I were sitting on a porch watching ravens fly by and I predicted the next one would be black because the last thousand we saw were black, while you were flipping a coin to predict whether it would be black or white, my prediction would be right far more often. That doesn't mean there is no such thing as an albino raven, but until I start seeing a lot of albino ravens flying past my porch, my money is best put on the next raven being black. So I will generalize that ravens are black, until some evidence is presented that there is a white raven. I don't think anyone on this site would claim that it is impossible for there to ever be a white raven, or even a flame orange raven but every new black raven you see makes it less likely that the next one will be some other color. There simply isn't a reason to believe in one until it has been seen since it would be impractical to attempt to see every raven in the world.

You seem to be arguing that induction is useless since it can sometimes lead to incorrect conclusions. Do you believe it is possible for us to have complete information in most circumstances? I believe that in most situations we can only have partial information at best, so we have to draw on that to come to a conclusion. Drawing on the information you know will provide the correct answer more often than flipping a coin. If you don't believe me, let's play some cards.

PS: Knocking on the door is not zero risk. You could hurt yourself knocking on the door. You could get a splinter, there could be a bees' nest you disturb, the door could be old and weak, causing your hand to go through it, etc. So while MOST of the time you get no injury, and induction would tell you that your risk in knocking on the door is minimal, it is perfectly possible to be injured. (I knock on a lot of doors in my job and have suffered injury from the bees.) Once again, you are using induction while arguing against induction.

I just usually go with my own taste. If I like something, and it happens to be against the law, well, then I might have a problem.- Hunter S. Thompson


## Beyond Saving wrote:XaosPeru

Beyond Saving wrote:Ahhh - a fellow card player. What's your game? Do you play any bridge? I'll assume so and stick with bridge for my illustrations, but a lot of the same points should apply to poker or any other game.

I'm not interested in arguing over semantics, but I don't believe that making a calculation of the odds is necessarily inductive. I remember once I was watching someone play and they took a finesse. When it failed, she turned to me and said, "They tell me that the odds are 50% but I tell you, that it works less than half the time."

As my calculations above showed, I tend to agree with her. The difference is, however, that I calculated the results based on the information I had in front of me in that hand. She, on the other hand, applied the experience she had with playing many bridge hands in order to come to basically the same conclusion. So I'm not saying that experience can't lead you to the same conclusion as logic.

What I'm saying is the solution to the problem is not to play 1,000 hands and see how it works out but rather to start with initial assumptions and deduce the solution. Now when you're playing at the table you may decide to ignore the odds because you're good at reading your opponents. For example, a friend of mine was playing, and after two passes he had a worthless hand but decided to bid because he could see his LHO was gripping his cards so tightly that he could see his fingers turning white. It worked out very well for him.

The point I was trying to make is that in most cases the decision of whether to take the finesse or not depends entirely on the scoring method in use. If the finesse succeeding only gains you an overtrick (30 points more), whereas the finesse failing might lose you the contract (a loss of some 670 points), then you just don't take the finesse, regardless of the odds. This type of analysis is called "game theory."

---------------------

As for the raven situation, you're basing your answer about the likelihood of the next raven being black on the fact that you already know a lot about ravens, and the land surface of the earth has been extensively explored. Would your answer be the same if we were visiting an alien world, seeing white unicorns, and trying to guess whether all unicorns (or the next one) would be white?

Yesterday my wife went to the bakery we have always gone to in order to buy bread. We have never had problems with the bread there. Nevertheless she decided not to buy bread there. She saw a cat playing with a mouse it had apparently caught on the premises and the proprietor looking on in amusement. Based on that, she decided to go to the other bakery and buy from them. So she ignored the hundreds of good experiences with the bakery because after seeing the mouse she figured, "Better safe than sorry."

-----

"The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo's doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism." -Paul Feyerabend

"Let me just anticipate that nobody to date has found a demarcation criteria according to which Darwin can be described as scientific, but this is exactly what we are looking for." -Imre Lakatos

## XaosPeru wrote: Ahhh - a

XaosPeru wrote:Mostly I play poker and gin. I'll play bridge, euchre, spades or hearts casually. Since poker and gin tend to be far more geared towards playing the player, intuition is far more important in those games. Although, as you pointed out with the gentleman gripping the cards, a lot more goes into the decision than pure mathematical statistics. You can be a good card player just by knowing all the math and consistently making correct decisions. To be a great player, you have to rely on inductive conclusions drawn from your opponents' playing habits. The scoring system does have a huge effect on correct play. This can most easily be seen in the difference between limit hold'em and no-limit hold'em, where the games are identical except in the amount allowed to be bet. In no-limit, I can make it statistically incorrect for you to call with any drawing hand (that doesn't mean you shouldn't call); in limit, it is correct to call with a wide variety of hands.

XaosPeru wrote:I would have no reason not to expect the next one to be white. So if we were sitting on our spaceship in a competition to determine whether the next unicorn would be white or black, I would continue to guess white until I saw a black one. If 100 white ones came in a row and then a black one, I would still guess white. I would not be so bold as to claim with certainty that ALL unicorns were white. Nor do I claim with certainty that ALL ravens are black. However, with each white unicorn I see, it is that much less likely that the next one is black. If we saw 100 white unicorns in a row, I know that at least 100 of the x unicorns are white, x being the total unicorn population. So as x - y (y being the number of white unicorns we have seen) approaches 0, I can be more certain that all unicorns are white. My interpretation of the OP is that it attempts to quantify that certainty, given that we know we have potentially incomplete information.
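The x and y bookkeeping in that paragraph can be sketched like this. Only the x - y logic comes from the post; the population figure of 500 is an assumption for illustration:

```python
# After seeing y white unicorns out of a total population of x, at least
# y of the x are certainly white; the unobserved remainder is x - y, and
# as it approaches 0 the claim "all unicorns are white" approaches certainty.

def unknown_remainder(x_total: int, y_white_seen: int) -> int:
    return x_total - y_white_seen

x = 500  # assumed total unicorn population, for illustration only
for y in (100, 400, 499, 500):
    print(f"{y} white seen -> {unknown_remainder(x, y)} unicorns still unobserved")
```

When the remainder hits 0 the induction becomes a deduction: every unicorn has been checked.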

XaosPeru wrote:So she is using inductive reasoning. She saw one mouse and based on the proprietor's reaction concluded that it may have happened before and might happen again. I assume she is basing her conclusion on what she would expect a proprietor to do when they see a mouse. Perhaps she expects them to freak out or at least act surprised. Without induction she has no reason to assume that it has ever happened before or will ever happen again. That is how induction works. As you get more information you can adjust your conclusion. If that information is radical the amount of change in the conclusion can be radical. If you see one white raven that would change your conclusion that all ravens are black to most ravens are black. So what. You have to make decisions with the information you have available. What other way is there to get through life?

Also, the argument could be made that your wife's decision is irrational. The proprietor had a cat that killed the mouse. So if your concern is mice contaminating your bread, wouldn't it be better to go to the baker with a mechanism for killing the mice than to one without any? The next baker might have mice as well, but no cat to kill them. But that is kind of a separate issue. The bottom line is that seeing the cat with the mouse was her equivalent of seeing a black unicorn. It changed everything for her.

I just usually go with my own taste. If I like something, and it happens to be against the law, well, then I might have a problem.- Hunter S. Thompson

## Beyond Saving wrote:XaosPeru

Beyond Saving wrote:Response composed off-line. I mostly play bridge nowadays, as I have no one to play against (except my computer), but I enjoy a good game of cribbage and hearts as well. I prefer bridge because the element of cooperation with a partner adds to my personal enjoyment.

As for my wife, my argument would be the latter one you expressed: that my wife is behaving irrationally. Really there's no realistic expectation that either of the two other bakeries in the area is better or worse than the one we currently go to. For all we know, the other bakeries also have cats to control the mouse population, or they use poison, in which case there could be a dead mouse lying between the wall and the display case where the bread is shown. The only thing I know for certain is that the bakery I go to is the closest.

As for the raven situation, some people might assume after seeing 10 black ravens that all ravens are black. Really, I assume that not all ravens are black, and no number of raven sightings will convince me otherwise. Perhaps finding a white raven would surprise those people greatly, but for me it would just confirm what I'd always believed.

Here's an induction problem for you. Using induction, I assume that all people are mortal. Using induction, I also assume that my wife will be alive and well at home, as she has been every previous time I've arrived home. The problem is that the first induction leads me to believe that every day increases the probability that my wife will be dead: since she's mortal, each day brings me closer to that inevitable day when she will die. The second induction makes me believe that my wife is more likely to be alive and well, considering I have one more data point confirming this assumption. So I have two inductive assumptions that seem equally valid but contradict one another. How do I resolve this apparent contradiction?

## XaosPeru wrote:Here's an

XaosPeru wrote:You keep making your problems either/or. When in reality, that particular problem has many inputs. First, in what kind of health is your wife? Second, how old is your wife? Third, what is the likelihood of a natural disaster in your area? Fourth, what kind of shape is your home in? And so on. Really, if my spouse (husband in my case) has just had a heart attack, is living in an area prone to hurricanes and there is one headed our way, is in his 80s, and our house is practically falling down from the last hurricane, the probability of his death just keeps rising. A young man, no health problems, no natural disasters, in a house that is well built and maintained - years to go before he dies.

Stop over simplifying and maybe you can begin to see how induction really works.

--

I feel so much better since I stopped trying to believe.

"We are entitled to our own opinions. We're not entitled to our own facts" - Al Franken

"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.

## Induction is simply what we

Induction is simply what we all do when we have insufficient information to make a definitive conclusion, but wish to, or need to, make a decision based on what information we do have. And that is much of the time.

The "problem" only arises when someone doesn't understand how to handle the uncertainty or risks attached to such decisions, or ignores it.

The philosophers who agonize or argue over this "problem" have contributed to the dumbing down and confusion of humanity.

Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality

"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." -Sam Harris

The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me

From the sublime to the ridiculous: Science -> Philosophy -> Theology

## XaosPeru wrote:As for the

XaosPeru wrote:And if all the ravens are indeed black then you are just as wrong. You are just never surprised because you continue to believe in something that doesn't exist. So is it that you simply prefer to be wrong in a way that you won't be surprised? And can always claim you are right when you are not? Being wrong isn't that bad.

XaosPeru wrote:Your problem is that you pretend to see these two things as conflicting when they are not. If your wife is in good health, the odds of her being well are better than the odds of her being dead. On any given day she will probably still be alive. That doesn't change the fact that eventually she will be dead. Simply because you do not know the exact day she will die doesn't change the fact that you are one day closer every day, and the odds of her living are diminishing. You seem to be ignoring the fact that if the odds of your wife surviving the day are 99.99%, she can still die, and that does not prove induction is useless. Induction freely admits that there are outliers, which is why it doesn't make absolute claims like ALL ravens are black or your wife WILL be alive today. It might say there is a 97% chance all ravens are black or a 99% chance your wife will survive today. I thought the whole point of the OP was to calculate that uncertainty inherent in induction.
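The two inductions can in fact be reconciled numerically. Taking the 99.99% daily-survival figure from the exchange above (the day counts below are assumptions for illustration), the per-day probability stays high while the probability of surviving every one of d consecutive days steadily falls:

```python
# Per-day survival vs. cumulative survival: the two inductions in the
# exchange above are about different quantities, so they don't conflict.

p_day = 0.9999  # probability of surviving any single day (figure from the post)

for days in (1, 365, 3650, 36500):
    p_alive_through = p_day ** days  # probability of surviving ALL of those days
    print(f"after {days:>5} days: P(alive the whole time) = {p_alive_through:.4f}")
```

"She will probably be alive today" and "she will certainly die eventually" are both correct: one is the per-day figure, the other is the limit of the cumulative figure.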

## cj wrote:XaosPeru

cj wrote:This, of course, presupposes that induction really DOES work. However, as I've pointed out, many of us make decisions every day without using induction and without having full knowledge of the facts. Putting on your seat belt, for example, is not an act of induction. Even if you've never had an accident in your life, an accident is at least theoretically possible and putting on a seat belt mitigates the negative consequences of that event. I suppose I should just adopt David Hume's stance - that humans are illogical, that there's no rational basis for using induction, but we do it anyway so screw logic and all that.

## XaosPeru wrote:This, of

XaosPeru wrote:Talking to you is often like banging my head on a brick wall. I don't know why I do it, but it is bound to feel better when I stop.

You are over simplifying again. We do NOT know a seat belt will mitigate the negative consequences of an accident. The *probability* is that it will. It may break, the buckle may fail, something large and heavy may come through the windshield and it won't matter if you are wearing the seat belt or not. In some places, not wearing a seat belt allows the police to pull you over for no other reason, fine you for not wearing the seat belt, and write tickets for the brake light you have out or take you to jail for the marijuana under your seat.

And if you don't have the full facts to make a decision, and you must make a decision, you induce the probable best decision given what facts you do have.

"However, as I've pointed out, many of us make decisions every day without using induction and without having full knowledge of the facts."

This could be a useful off-the-cuff definition of *induction* if you remove the words 'without using induction'. Better to say: without using formal Bayesian statistics to back up our decisions.

--

I feel so much better since I stopped trying to believe.

"We are entitled to our own opinions. We're not entitled to our own facts" - Al Franken

"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.

## David Hume's stance was that

David Hume's stance was that there was NO path to *certain* knowledge of 'facts', but that induction and empiricism were our only path to some degree of practical knowledge. His other category of 'knowledge' was of 'relations', which *could* be established with confidence via *deduction*, but such truths were of the conditional form IF A THEN B.

It was not that people were illogical, but that knowledge of 'facts' was inevitably uncertain.

And cj, I fully agree: any time we make decisions on incomplete data we are using some form of induction.

Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality

"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." -Sam Harris

The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me

From the sublime to the ridiculous: Science -> Philosophy -> Theology

## BobSpence1 wrote:And cj, I

BobSpence1 wrote:Oh, good. Thinking about it, I was wondering if I had misstated - glad to know I didn't. Trying to answer all the dunderheads around (why aren't they out Christmas shopping?) is getting confusing.

--

I feel so much better since I stopped trying to believe.

"We are entitled to our own opinions. We're not entitled to our own facts" - Al Franken

"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.
