Transcript: Probability
Random Experiment
Say I take a coin and flip it in the air and let it fall on the
ground.
I might be interested in which side of the coin is face up. The
most basic outcomes on the coin are heads or tails.
A random experiment is an action or process that leads to
one of several possible outcomes. The coin flip is a random
experiment.
Another random experiment is the time it takes to assemble a
computer.
A sample space of a random experiment is a list of all
possible outcomes of the experiment. The outcomes must
be (collectively) exhaustive and mutually exclusive.
Sample Space and Probability
Once the sample space has been identified, probabilities can be
assigned to each outcome, and the probabilities must follow
two rules:
1) The probability of any outcome must be between 0 and 1,
inclusive.
2) The sum of the probabilities of all the outcomes must
equal 1.
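As a quick illustration (this is a sketch I am adding, with a made-up fair-coin assignment rather than anything from the slides), a few lines of Python can check that an assignment of probabilities obeys both rules:

```python
# A minimal sketch: checking that an assignment of probabilities over a
# sample space follows the two rules above. The fair-coin values are
# hypothetical, not from the transcript.
sample_space = {"heads": 0.5, "tails": 0.5}

rule_1 = all(0 <= p <= 1 for p in sample_space.values())   # each probability is in [0, 1]
rule_2 = abs(sum(sample_space.values()) - 1.0) < 1e-9      # the probabilities sum to 1

print(rule_1, rule_2)   # True True
```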
Mutually Exclusive and
Collectively Exhaustive
Two outcomes are mutually exclusive if they cannot both occur
simultaneously.
A set of outcomes is collectively exhaustive if one of the
outcomes in the set must occur.
When a coin is flipped, heads and tails are mutually exclusive
because only one of the two can occur.
When a coin is flipped, heads and tails are the only outcomes
and one of them must occur, so heads and tails are a set of
collectively exhaustive events.
Types of Probability
There are three types of probabilities: classical or a priori
probability, empirical probability (the relative frequency
method) and subjective probability.
A priori probability (the phrase is usually italicized and is
pronounced roughly "ah pree-OR-ee") is a situation where
probability is assigned based on prior knowledge of the process
involved. Examples of this are that we can assign probabilities
in card games, coin flipping, and die tossing. Other examples will
be seen in later chapters. This is sometimes called the
classical method of assigning probability.
Types of Probability
The empirical approach to assigning probability is used when
data is available about the past history of the experiment. The
probability of an outcome is the relative frequency of the
outcome. You form this probability by taking the ratio of the
number of times the outcome came up to the total number of
times the experiment was run. As an example, if out of 100 sales
calls you had 37 sales, the probability of a sale would be
37/100 = .37.
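Here is a short Python sketch of that relative frequency calculation, using the sales-call numbers above (the list itself is fabricated only to reproduce the 37-out-of-100 count):

```python
# Sketch of the empirical (relative frequency) method for the sales-call example.
calls = ["sale"] * 37 + ["no sale"] * 63      # 100 recorded sales calls

p_sale = calls.count("sale") / len(calls)     # relative frequency of a sale
print(p_sale)                                 # 0.37
```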
The subjective approach to assigning probability is used when
the other two cannot be used. You and a friend may consider
the experiment of whether Nebraska will win the national
championship in football this year. The outcomes are either
they will win or they will not win. You may say P(win) = .4
(meaning the probability of a win is .4). Thus you also say
P(not win) = .6. Your friend might say the probabilities should
be .7 and .3, respectively. Your differences represent your
subjectivity.
An Event
Each possible outcome of a random experiment is referred
to as a simple event.
So, in my tossing of a coin, a simple event is what happens
with the coin – heads is face up or tails is face up.
Events in general may be more complex than simple
events.
An event is a collection, or set, of one or more simple events in
a sample space.
The probability of an event is the sum of the probabilities of
the simple events that constitute the event.
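As a small sketch of this rule (the fair-die assignment of 1/6 per face is my assumption here), the probability that a die roll is even is the sum of the probabilities of the simple events 2, 4, and 6:

```python
# Sketch: the probability of an event is the sum of the probabilities of its
# simple events. Assumes a fair six-sided die (1/6 per face).
die = {face: 1 / 6 for face in range(1, 7)}   # simple events and their probabilities
even = {2, 4, 6}                              # the event "an even number is rolled"

p_even = sum(p for face, p in die.items() if face in even)
print(round(p_even, 2))                       # 0.5
```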
Business Topics
So, in my example here I had a coin flip. But this is a class
in business. We might talk about whether a sale occurred or not.
Maybe we will be interested in the amount of sales. The
universe of business examples is large.
We will typically talk about one, two, or three variables, and
each variable has basic outcomes. Remember, the basic
outcomes are simple events. But events can be more complex. As
an example of a more complex event, we might be interested
in the event that when a die is rolled the result is an even
number. Or maybe when we look at store sales on a day
(like at each Wal-Mart) we are interested in the outcome of
sales being more than $1,000,000.
Contingency Table
A contingency table is a particular way to view a sample
space. Say we talk to 100 customers who just made a
purchase in a store; not only do we note the gender of each
customer, we also ask whether they paid cash or used a credit card.
The responses are summarized on the next screen.
Contingency Table
                 Payment Method
Gender       Cash    Credit Card    Total
Female          3             12       15
Male           17             68       85
Total          20             80      100
So, each of the 100 people observed had to be put in a
gender category and had to be given a payment method.
If you divide each number in the table by the grand total
(here 100, the total number of people observed), the table is
then called a joint probability table. Let’s do this and see the
result on the next screen.
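Before turning to the next screen, here is a small Python sketch of that division step (the dictionary layout is just one illustrative way to hold the counts):

```python
# Sketch of turning the contingency table into a joint probability table by
# dividing every count by the grand total (the 100 observed customers).
counts = {
    ("Female", "Cash"): 3,  ("Female", "Credit Card"): 12,
    ("Male",   "Cash"): 17, ("Male",   "Credit Card"): 68,
}
grand_total = sum(counts.values())                       # 100

joint = {cell: n / grand_total for cell, n in counts.items()}
print(joint[("Female", "Cash")])                         # 0.03
```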
Joint Probability Table
                 Payment Method
Gender       Cash    Credit Card    Total
Female        .03            .12      .15
Male          .17            .68      .85
Total         .20            .80     1.00
The Intersection of Events
The intersection of events A and B (where A and B stand for
any two events) is the EVENT that occurs when both A and B occur.
The way we write the intersection is A and B, or it may be
written A ∩ B.
The way we write the probability of the intersection is
P(A and B), or it may be written P(A ∩ B).
From our example, if you look inside the joint probability
table, the value .03 = P(Female and Cash).
Marginal Probabilities
The total column and the total row are called marginal
probabilities because they are written in the “margins” of the
table. Note that adding across the Female row gives a
total of .15. This is P(F), the probability that you would
select a female when talking to someone involved in the
study.
The other marginal probabilities are
P(M) = .85
P(Cash) = .2 and P(Credit Card) = .8
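Here is a brief sketch showing that these marginal probabilities are just row and column sums of the joint probabilities (the dictionary restates the joint probability table):

```python
# Sketch: marginal probabilities are the row and column sums of the
# joint probability table from the slides.
joint = {
    ("Female", "Cash"): 0.03, ("Female", "Credit Card"): 0.12,
    ("Male",   "Cash"): 0.17, ("Male",   "Credit Card"): 0.68,
}

p_female = sum(p for (gender, _), p in joint.items() if gender == "Female")
p_cash   = sum(p for (_, method), p in joint.items() if method == "Cash")
print(round(p_female, 2), round(p_cash, 2))   # 0.15 0.2
```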
Venn Diagram
A Venn diagram, named in honor of Mr. Venn, is another
way to present the sample space for two variables.
[Venn diagram: a rectangle containing two overlapping circles labeled A and B;
the overlapping area is the intersection of A and B.]
The rectangle represents the sample space. On one variable we
have event A, and that takes up the space represented by circle A.
Ignoring circle B, all the rest of the rectangle is Ac (the
complement of A). A similar interpretation holds for B. The area
where the two circles overlap represents the intersection of A and B.
Joint Probability
                          Payment Method
Gender            Cash (B)         CC (B’)          Total
Female (A)        P(A and B)       P(A and B’)      P(A)
Male (A’)         P(A’ and B)      P(A’ and B’)     P(A’)
Total             P(B)             P(B’)            1.0
From the example I have labeled the genders A and A’, the
payment methods B and B’, and then I have filled the table out
in definitional form.
Marginal Probability
Note in the joint probability table on the previous screen
(which is a contingency table that has been modified by
dividing all numbers by the grand total!) that
1) In any row the marginal probability is the sum of the joint
probabilities in that row, and
2) In any column the marginal probability is the sum of the
joint probabilities in that column.
(Also note that the sum of the probabilities of an event and its
complement equals 1; for example, P(A) + P(Ac) = 1.)
Union of Events – the General
Addition Rule
Sometimes we want to ask a question about the probability of A
or B, written P(A or B) = P(A ⋃ B).
By the general addition rule
P(A or B) = P(A) + P(B) – P(A and B).
In our example we have P(A or B) = .15 + .20 – .03 = .32.
Let’s think about this some more. How many are Female? 3 +
12 = 15! How many paid cash? 3 + 17 = 20! But 3 of these
were in both A and B. So, when we ask a question about A or
B we want to include all that are A or B, but we only want to
include them once. If they are in both, we subtract out the
intersection because it was included in both the row total and
the column total.
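A minimal sketch of the general addition rule with the numbers from this slide:

```python
# Sketch of the general addition rule P(A or B) = P(A) + P(B) - P(A and B),
# with A = Female and B = Cash from the running example.
p_female, p_cash  = 0.15, 0.20
p_female_and_cash = 0.03

p_female_or_cash = p_female + p_cash - p_female_and_cash
print(round(p_female_or_cash, 2))   # 0.32
```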
Final Comment
When we look at P(A and B) – the intersection of A and B –
both A and B have to have occurred to have a non-zero
value here.
When we look at P(A or B) – the union of A and B – either A
or B has to have occurred to have a non-zero value here.
But many times both did occur, and if there is an overlap
(intersection) of the two we have to subtract out the
overlap.
Conditional Probability
As we have seen, P(A) refers to the probability that event A will occur. A new
idea is that P(A|B) refers to the probability that A will occur but with the
understanding that B has already occurred and we know it. So, we say the
probability of A given B. The given B part means that it is known that B has
occurred.
By definition
P(A|B) = P(A and B)/P(B).
Similarly
P(B|A) = P(A and B)/P(A).
Note that P(A and B) = P(B and A).
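A minimal sketch of the definition (the helper name `conditional` is mine, not something used in the course):

```python
# Sketch of the definition P(A|B) = P(A and B) / P(B).
def conditional(p_a_and_b: float, p_b: float) -> float:
    """Return P(A|B) given P(A and B) and P(B)."""
    return p_a_and_b / p_b

print(round(conditional(0.03, 0.20), 2))   # P(Female | Cash) from the running example = 0.15
```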
Now we have by definition
P(A|B) = P(A and B)/P(B).
In this definition, B has already occurred. The P(B) is the denominator of
P(A|B) and is thus the base of the conditional probability. The intersection of A
and B is in the numerator. Since B has occurred, the only way A can have
occurred is if there is an overlap of A and B. So we have the ratio
probability of overlap/probability of known event.
Let’s turn to the example from above. The joint probability table is repeated on
the next slide.
Joint Probability Table
                 Payment Method
Gender       Cash    Credit Card    Total
Female        .03            .12      .15
Male          .17            .68      .85
Total         .20            .80     1.00
Let’s say you saw someone pay cash for a purchase, but
your view was blocked as to the gender of the person. Then
P(Female | Cash) = .03/.20 = .15.
Joint Probability Table
                 Payment Method
Gender       Cash    Credit Card    Total
Female        .03            .12      .15
Male          .17            .68      .85
Total         .20            .80     1.00
Let’s say you saw a female leave the store having made a
purchase, but your view was blocked as to the type of
payment. Then P(Cash | Female) = .03/.15 = .20.
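A short sketch checking both conditional probabilities from these two screens:

```python
# Sketch: the two conditional probabilities worked on these slides.
p_female_and_cash = 0.03
p_cash, p_female  = 0.20, 0.15

print(round(p_female_and_cash / p_cash, 2))     # P(Female | Cash) = 0.15
print(round(p_female_and_cash / p_female, 2))   # P(Cash | Female) = 0.2
```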
Independent Events
Events A and B are said to be independent if
P(A|B) = P(A) or P(B|A) = P(B).
In the example we have been using P(Female) = .15, and P(Female|Cash) = .15.
What is going on here? Well, in this example it turns out that the proportion of
females in the study is .15 and when you look at those who paid cash the
proportion of females is still .15. So, knowing that the person paid cash doesn’t
change your view of the likelihood that the person is female.
But, in some cases (not here), having information about B gives a different view
about A. When P(A|B) ≠ P(A) we say events A and B are dependent events.
Similarly, when P(B|A) ≠ P(B), events A and B are dependent.
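A small sketch of the independence check for the running example (A = Female, B = Cash):

```python
# Sketch of the independence check P(A|B) = P(A) for the running example.
p_female            = 0.15
p_female_given_cash = 0.03 / 0.20        # 0.15

independent = abs(p_female_given_cash - p_female) < 1e-9
print(independent)   # True: gender and payment method are independent here
```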
Does a coin have a memory? In other words, does a coin remember how many
times it has come up heads, and will it thus come up tails if it came up heads a lot
lately? Say A is heads on the third flip and B is heads on the first two flips. Is heads
on the third flip influenced by the first two heads? No, coins have no memory!
Thus A and B are independent. (Note I am not concerned here about the
probability of getting three heads!)
Have you ever heard the saying, “Pink sky in the morning, sailors take warning;
pink sky at night, sailors delight”? I just heard about it recently. Apparently it is a
rule of thumb about rain. Pink sky in the morning would serve as a warning for
rain that day. If A is rain during the day and B is pink sky in the morning, then it
seems that P(A|B) ≠ P(A), and thus the probability of rain is influenced by morning
sky color (color is really just an indicator of conditions).
Let’s think about one more example. If you watched a football team all year
you could use the empirical approach to find the probability that it will throw a
pass on a given play. Say P(pass)=0.4. This means the probability it will
pass on a given play is 0.4.
But, if there are 5 minutes left in the game and the team is down 14 points, the
team will want to pass more. So, P(pass | down 14 with 5 minutes left) = 0.75,
for example. This means the probability of a pass depends on the score and
time remaining!
I have used some examples to give you a feel about when events are
independent and when they are dependent.
By simple equation manipulation we change the conditional probability
definition to the rule called the multiplication law or rule for the intersection
of events:
P(A and B) = P(B)P(A|B)
or P(A and B) = P(A)P(B|A).
Note the given part shows up in the other term.
Now this rule simplifies if A and B are independent. The conditional
probabilities revert to regular probabilities. We would then have
P(A and B) = P(B)P(A) = P(A)P(B).
Does this hold in our running example? Sure it does!
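A quick sketch confirming that it does, using the running example’s values:

```python
# Sketch: the multiplication rule in the running example, where gender and
# payment method turned out to be independent.
p_female, p_cash  = 0.15, 0.20
p_female_and_cash = 0.03

print(round(p_cash * (p_female_and_cash / p_cash), 2))   # P(B)P(A|B) = 0.03
print(round(p_female * p_cash, 2))                       # P(A)P(B)   = 0.03, same answer
```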
Say, as a new example, we have A and B with P(A) = .5, P(B) = .6, and
P(A and B) = .4.
Then
a. P(A|B) = .4/.6 = .667
b. P(B|A) = .4/.5 = .8
c. A and B are not independent because we do NOT have P(A|B) = P(A), or
P(B|A) = P(B).
Say, as another example, we have A and B with P(A) = .3 and P(B) = .4, and here we
will say A and B are mutually exclusive. This means P(A and B) = 0 (in a
Venn diagram A and B have no overlap). Then
a. P(A|B) = 0/.4 = 0
Here A and B are not independent.
                        Y
X            Y1              Y2              Totals
X1           P(X1 and Y1)    P(X1 and Y2)    P(X1)
X2           P(X2 and Y1)    P(X2 and Y2)    P(X2)
Totals       P(Y1)           P(Y2)           1.00
Here I put the joint probability table again in general terms. Question X has
mutually exclusive and collectively exhaustive events X1 and X2. For Y we have
a similar set-up. Note here each has only two responses, but what we will see
below would apply if there are more than 2 responses.
Let’s review some of the probability rules we just went through and then we will
add one more rule.
Inside the joint probability table we find joint probabilities (like P(X1 and Y1)),
and in the margins we find the marginal probabilities (like P(X1)).
Marginal Probability Rule
P(X1) = P(X1 and Y1) + P(X1 and Y2)
General Addition Rule
P(X1 or Y1) = P(X1) + P(Y1) – P(X1 and Y1)
Conditional Probability
P(X1|Y1) = P(X1 and Y1)/P(Y1)
Multiplication Rule
P(X1 and Y1) = P(X1|Y1)P(Y1)
The new part is to view the marginal probability rule as taking each joint
probability and rewriting it with the multiplication rule. So,
Marginal Probability Rule
P(X1) = P(X1 and Y1) + P(X1 and Y2)
      = P(X1|Y1)P(Y1) + P(X1|Y2)P(Y2),
where Y1 and Y2 are mutually exclusive and collectively exhaustive.
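A small sketch of this expanded rule, reusing the running example with X1 = Female, Y1 = Cash, and Y2 = Credit Card:

```python
# Sketch of the expanded marginal probability rule,
# P(X1) = P(X1|Y1)P(Y1) + P(X1|Y2)P(Y2),
# using the store example with X1 = Female, Y1 = Cash, Y2 = Credit Card.
p_y1, p_y2 = 0.20, 0.80
p_x1_given_y1 = 0.03 / 0.20      # P(Female | Cash)        = 0.15
p_x1_given_y2 = 0.12 / 0.80      # P(Female | Credit Card) = 0.15

p_x1 = p_x1_given_y1 * p_y1 + p_x1_given_y2 * p_y2
print(round(p_x1, 2))            # 0.15, matching the marginal P(Female)
```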