29 Probability PowerPoint (transcript)

PROGRAMME 29
PROBABILITY
STROUD
Worked examples and exercises are in the text
Probability
Events and probabilities
Probabilities of combined events
Conditional probability
Probability distributions
Continuous probability distributions
Standard normal curve
Probability
Random experiments
Events
Sequences of random experiments
Combining events
Probability
Random experiments
The unknown quantity whose value we are trying to find by performing the experiment is called the result of the experiment, and the value actually obtained from an experiment is called an outcome of that result.
A result can be anticipated, but its actual value – the outcome – is unknown until the experiment is completed.
An experiment whose result has more than one possible outcome is referred to as a random experiment.
The outcomes of a random experiment must be mutually exclusive.
Probability
Events
Whilst the completion of a random experiment yields a single outcome, we may not be interested in that specific outcome but rather in whether it lies within a range of possible outcomes.
To cater for ranges of possible outcomes we define an event. An event
consists of one or more outcomes selected from a list of all possible
outcomes.
An event consisting of a single outcome is called a simple event.
Probability
Sequences of random experiments
When two or more random experiments are performed one after the other, the final outcome of the sequence of experiments will consist of combinations of the outcomes of the individual experiments.
For example, consider the two random experiments of tossing a silver coin followed by tossing a copper coin. We shall list the possible outcomes of the first experiment as SH, ST, where S stands for silver, and of the second as CH, CT, where C stands for copper. We can describe this sequence of experiments using an outcome tree.
Probability
Combining events
Events can be combined using or and and. For example, in the
previous sequence of experiments of tossing a silver coin and
then tossing a copper coin we could define the events:
D : One silver head or At least one tail
E : One silver head and A copper tail
in which case:
D consists of (SH, CH), (SH, CT), (ST, CH) and (ST, CT)
E consists of (SH, CT)
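These combined outcomes and events can be reproduced by enumeration; a minimal Python sketch (not part of the original programme):

```python
from itertools import product

silver = ["SH", "ST"]   # silver coin: head (SH) or tail (ST)
copper = ["CH", "CT"]   # copper coin: head (CH) or tail (CT)

# Each leaf of the outcome tree is one pair of outcomes, silver first.
outcomes = list(product(silver, copper))
print(outcomes)   # [('SH', 'CH'), ('SH', 'CT'), ('ST', 'CH'), ('ST', 'CT')]

# D: one silver head OR at least one tail (inclusive or)
D = [o for o in outcomes if o[0] == "SH" or "ST" in o or "CT" in o]
# E: one silver head AND a copper tail
E = [o for o in outcomes if o[0] == "SH" and o[1] == "CT"]

print(D)   # all four outcomes
print(E)   # [('SH', 'CT')]
```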
Events and probabilities
Probability
Assigning probabilities
Events and probabilities
Probability
If you tossed a fair coin then there would be 1 chance in 2 of it
landing head face uppermost.
If you select six numbers between 1 and 49 for a lottery ticket then
there are nearly 14 million possible different selections of six numbers
so your selection has a very small chance of winning – in fact it is 1
chance in 13 983 816.
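As a quick check, a Python sketch (assuming Python 3.8+ for math.comb) of where the figure 13 983 816 comes from:

```python
import math

# Number of ways of choosing 6 numbers from 49
ways = math.comb(49, 6)
print(ways)       # 13983816
print(1 / ways)   # about 7.15e-08, the chance that one selection wins
```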
The chance of something happening can be high and can be low but
we really want to be more precise than that and quantify chance in a
way that makes predicting the future more accurate and more
consistent. To do this we use the idea of probability.
Events and probabilities
Assigning probabilities
In the random experiment of tossing a coin that has a head on both sides (a double-headed coin), the event H is a certain event and we define the probability of a certain event as unity:
P(H) = P(Certainty) = 1
The event T is impossible. We define the probability of an impossible event
as zero:
P(T) = P(Impossibility) = 0
Events and probabilities
Assigning probabilities
If the double-headed coin is replaced by a normal coin possessing a tail as
well as a head then the event H is no longer certain and the event T is no
longer impossible. In both cases the events lie somewhere between
certainty and impossibility so their probabilities lie somewhere between
zero and unity:
0 < P(H) < 1 and 0 < P(T) < 1
Events and probabilities
Assigning probabilities
Assigning numbers to these probabilities can be problematic but what we
do say is that when a normal coin is tossed it is certain to show either a
head or a tail and so the two probabilities must add up to the probability of
certainty, that is unity:
P(H) + P(T) = 1
Probabilities can be assigned to the events of a random experiment either
beforehand – we call it a priori – or afterwards by statistical regularity.
When we have assigned probabilities to every possible simple event of a
random experiment we have what is called a probability distribution.
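A minimal simulation sketch in Python (assuming a fair coin) of assigning a probability by statistical regularity: the relative frequency of heads settles towards P(H) = 0.5 as the number of tosses grows.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)   # relative frequency of heads approaches 0.5
```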
Probabilities of combined events
Or
Non-mutually exclusive events
And
Dependent events
Independent events
Probability trees
Probabilities of combined events
Or
Let A and B be two events associated with a random experiment. These two events can be connected using or to form the event C:
C = A or B
In other words either event A occurs or event B occurs or both. This is an
inclusive or because it permits both events to occur simultaneously. If A
and B are mutually exclusive they contain no outcomes in common, in
which case
P(A or B) = P(A) + P(B)
Probabilities of combined events
Non-mutually exclusive events
If events A and B have outcomes that are common to both then, because or is inclusive, when we add together the outcomes that are in either A or B we count those outcomes that are in both twice. We must therefore subtract them once:
P(A or B) = P(A) + P(B) – P(both A and B)
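A short Python sketch checking this addition rule by enumeration, using a hypothetical example of a fair die with A = even score and B = score greater than 3:

```python
from fractions import Fraction

die = range(1, 7)
A = {x for x in die if x % 2 == 0}   # even score: {2, 4, 6}
B = {x for x in die if x > 3}        # score greater than 3: {4, 5, 6}

def P(event):
    # each of the six equally likely outcomes has probability 1/6
    return Fraction(len(event), 6)

assert P(A | B) == P(A) + P(B) - P(A & B)   # 2/3 == 1/2 + 1/2 - 1/3
print(P(A | B))                             # 2/3
```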
Probabilities of combined events
And
Two random experiments are performed in sequence where A is an event
associated with the first experiment and B an event associated with the
second experiment. These two events can be connected via the word and
to form the event C where:
C = A and B
That is, both events A and B occur. Furthermore, provided the events A and B are independent:
P(A and B) = P(A)P(B)
Probabilities of combined events
Dependent events
If two random experiments are performed in sequence, one after the other,
then it may be possible for the outcome of the first experiment to affect the
outcome of the second experiment. If this is the case then the outcomes are
dependent upon each other and the probabilities change after the first
experiment has been performed.
Probabilities of combined events
Independent events
If the outcome of the second experiment is unaffected by the outcome of
the first experiment then the events are independent of each other and the
probabilities will not change after the first experiment has been performed.
Probabilities of combined events
Probability trees
We are already familiar with the idea of a sequence of random experiments
and the outcome tree that results from it. If we now list the probabilities
against each outcome of the tree we construct what is called a probability tree.
For example, in a factory items pass through two processes, namely cleaning
and painting. The probability that an item has a cleaning fault is 0.2 and the
probability that an item has a painting fault is 0.3. Cleaning and painting
faults occur independently of each other so that:
Probability of a cleaning fault P(C) = 0.2
Probability of no cleaning fault P(NC) = 0.8
and
Probability of a painting fault P(P) = 0.3
Probability of no painting fault P(NP) = 0.7
This gives rise to a probability tree (shown as a figure on the slide): the first branching is between C and NC, the second between P and NP, and each leaf probability is the product of the branch probabilities along its path.
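A minimal Python sketch (using the figures above) of how the branch and leaf probabilities of this tree are computed:

```python
from itertools import product

cleaning = {"C": 0.2, "NC": 0.8}   # cleaning fault / no cleaning fault
painting = {"P": 0.3, "NP": 0.7}   # painting fault / no painting fault

# Each leaf probability is the product of the branch probabilities,
# because the two kinds of fault occur independently.
for (c, pc), (p, pp) in product(cleaning.items(), painting.items()):
    print(f"{c} and {p}: {pc * pp:.2f}")
# C and P: 0.06, C and NP: 0.14, NC and P: 0.24, NC and NP: 0.56 (sum 1.00)
```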
Conditional probability
We are concerned here with the probability of an event B occurring,
given that an event A has already taken place. This is denoted by the
symbol P(B|A). If A and B are independent events, the fact that event
A has already occurred will not affect the probability of event B. In
that case:
P(B|A) = P(B)
If A and B are dependent events, then event A having occurred will
affect the probability of the occurrence of event B.
If A and B are independent events: P(A and B) = P(A)P(B)
If A and B are dependent events: P(A and B) = P(A)P(B|A)
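A short Python sketch of the multiplication law for dependent events, using a hypothetical example of drawing two cards from a pack without replacement:

```python
from fractions import Fraction

# A: first card is an ace; B: second card is an ace
P_A = Fraction(4, 52)            # 4 aces among 52 cards
P_B_given_A = Fraction(3, 51)    # one ace has already been removed
P_A_and_B = P_A * P_B_given_A    # multiplication law: P(A and B) = P(A)P(B|A)
print(P_A_and_B)                 # 1/221
```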
Probability distributions
Random variables
Expectation
Variance and standard deviation
Bernoulli trials
Binomial probability distribution
Expectation and standard deviation
The Poisson probability distribution
Binomial and Poisson compared
Probability distributions
Random variables
Every random experiment gives rise to a collection of mutually exclusive
outcomes, each with an associated probability of that outcome occurring.
This collection of probabilities is called the probability distribution of the
random experiment, and we have seen how a probability distribution can be derived either from a relative frequency distribution, using the notion of statistical regularity, or from a priori considerations.
Whichever method we choose to create the probability distribution the
process can be greatly assisted with the notion of a random variable x that
is created by coding each outcome with a number.
Probability distributions
Expectation
Every permitted value of a random variable x associated with a random
experiment has a probability P(x) of being realized. In analogy with defining
the average value of a collection of data as the sum of the product of each
datum value with its relative frequency, we can define the average value of a
random variable as the sum of the product of each of its values with its
respective probability:
$\mu = \sum_{r=1}^{n} x_r\,P(x_r)$
Here, the average value of the random variable is called the expectation of x, denoted by E(x). That is:
$\mu = E(x)$, the expectation of x.
Probability distributions
Variance and standard deviation
The spread of the values of the random variable about the mean (the
expectation) is given as the expectation of the square of the deviations from
the mean (the variance). That is
$\sigma^2 = E\left([x-\mu]^2\right) = \sum_{r=1}^{n}(x_r-\mu)^2\,P(x_r)$
where σ is the standard deviation.
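A minimal Python sketch of these two formulas for a hypothetical random variable, the score on a fair die:

```python
from math import sqrt

# P(x) = 1/6 for each score x = 1, ..., 6 on a fair die
dist = {x: 1 / 6 for x in range(1, 7)}

mu = sum(x * p for x, p in dist.items())                # E(x) = 3.5
var = sum((x - mu) ** 2 * p for x, p in dist.items())   # variance ~ 2.917
sigma = sqrt(var)                                       # sigma ~ 1.708
print(mu, var, sigma)
```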
Probability distributions
Bernoulli trials
A Bernoulli trial is any random experiment whose result has only two
outcomes which we shall call success with probability p and failure with
probability q.
P(success) = p and P(failure) = q where p + q = 1
Probability distributions
Binomial probability distribution
The binomial probability distribution is concerned with the probability of r
successes in n Bernoulli trials and is given by:
$P(r:n) = {}^{n}C_r\,p^r q^{\,n-r}$ where ${}^{n}C_r = \dfrac{n!}{(n-r)!\,r!}$
p is the probability of success, q is the probability of failure and p + q = 1.
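A minimal Python sketch of this formula (math.comb requires Python 3.8+); the worked figures are assumptions, not from the slides:

```python
from math import comb

def binomial_pmf(r, n, p):
    """P(r : n) = nCr * p^r * q^(n - r) with q = 1 - p."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# e.g. the probability of exactly 2 successes in 5 trials with p = 0.5
print(binomial_pmf(2, 5, 0.5))   # 0.3125
```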
Probability distributions
Expectation and standard deviation
The expectation of a binomial probability distribution is given as:
$\mu = np$
The standard deviation of a binomial probability distribution is given as:
$\sigma = \sqrt{npq}$
Probability distributions
The Poisson probability distribution
If we know the average number of occurrences of an event during a fixed
period of time then the Poisson probability distribution will enable us to
compute the probabilities of 0, 1, 2, 3, ... occurrences during that same
interval of time. The probabilities are given by:
$P(r) = \dfrac{\lambda^r e^{-\lambda}}{r!}$
with mean and variance both equal to λ (so standard deviation is √λ ). Here λ
is the average number of occurrences during a fixed period of time and r is a
positive integer.
Probability distributions
Binomial and Poisson compared
If the mean of the Poisson probability distribution λ is less than 5, the
probabilities obtained from the Poisson distribution are a good
approximation to those obtained using the binomial probability distribution,
particularly if the number of trials n is large (n ≥ 50) and the probability of success p is small (p ≤ 0.1); such cases are called rare events. For this reason, it
can be more convenient to calculate probabilities using the Poisson
distribution because the calculations involved are more easily performed. In
such a case we take λ = μ where μ = np, the mean of the binomial
probability distribution.
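A short Python sketch of the comparison, using assumed figures n = 100 and p = 0.02 (so λ = np = 2, which is less than 5):

```python
from math import comb, exp, factorial

n, p = 100, 0.02      # many trials, small probability of success
lam = n * p           # lambda = mu = np = 2

for r in range(5):
    binom = comb(n, r) * p ** r * (1 - p) ** (n - r)
    poisson = lam ** r * exp(-lam) / factorial(r)
    print(r, round(binom, 4), round(poisson, 4))   # the two columns agree closely
```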
Continuous probability distributions
Normal distribution curve
The equation of the normal curve is
$y = \dfrac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}(x-\mu)^2/\sigma^2}$
where μ is the mean and σ the standard deviation of the distribution.
This equation is not at all easy to deal
with. In practice, it is convenient to
convert a normal distribution into a
standardized normal distribution having a
mean of 0 and a standard deviation of 1.
Standard normal curve
The conversion from normal distribution to standard normal distribution is
achieved by the substitution of the standard normal variable z where
$z = \dfrac{x-\mu}{\sigma}$
This moves the distribution curve along the x-axis and reduces the scale of
the horizontal units by dividing by σ. To keep the total area under the curve
at unity, we multiply the y-values by σ. The equation of the standardized
normal curve then becomes:
$y = f(z) = \dfrac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$
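A minimal Python sketch of standardizing an x-value and reading off probabilities from the standard normal distribution, using assumed figures μ = 100, σ = 15 and x = 130 (the cumulative probability is computed with math.erf rather than tables):

```python
from math import erf, exp, pi, sqrt

mu, sigma, x = 100, 15, 130        # assumed illustrative figures

z = (x - mu) / sigma                       # standard normal variable, here z = 2.0
phi = exp(-z ** 2 / 2) / sqrt(2 * pi)      # ordinate of the standard normal curve
Phi = 0.5 * (1 + erf(z / sqrt(2)))         # cumulative probability P(Z <= z)
print(z, round(phi, 4), round(Phi, 4))     # 2.0 0.054 0.9772
```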
Learning outcomes
Understand what is meant by a random experiment
Distinguish between the result and an outcome of a random experiment
Recognize that, whilst outcomes are mutually exclusive, events may not be
Combine events and construct an outcome tree for a sequence of random experiments
Assign probabilities to events and distinguish between a priori and statistical regularity
Distinguish between mutually exclusive and non-mutually exclusive events and compute their probabilities
Distinguish between dependent and independent events and apply the multiplication law of probabilities
Compute conditional probabilities
Use the binomial and Poisson probability distributions to calculate probabilities
Use the standard normal probability distribution to calculate probabilities