22. Chaos
Chaos theory can be generally defined as the study of ever-changing
complex systems. First explored by the meteorologist Edward Lorenz in the
early 1960s, chaos theory contends that complex and unpredictable results
will occur in systems that are sensitive to small changes in their initial
conditions. The most common example of this, known as the "butterfly
effect," states that the flapping of a butterfly's wings in China could cause
tiny atmospheric changes which, over a period of time, could affect weather
patterns in New York.
The butterfly effect, first described by Lorenz at the December 1972 meeting
of the American Association for the Advancement of Science in Washington,
D.C., vividly illustrates the essential idea of chaos theory. In a 1963 paper for
the New York Academy of Sciences, Lorenz had quoted an unnamed
meteorologist's assertion that, if chaos theory were true, a single flap of a
single seagull's wings would be enough to change the course of all future
weather systems on the earth. By the time of the 1972 meeting, he had
examined and refined that idea for his talk, "Predictability: Does the Flap of a
Butterfly's Wings in Brazil set off a Tornado in Texas?" The example of such
a small system as a butterfly being responsible for creating such a large and
distant system as a tornado in Texas illustrates the impossibility of making
predictions for complex systems; despite the fact that these are determined
by underlying conditions, precisely what those conditions are can never be
sufficiently articulated to allow long-range predictions.
1976: Robert May (USA) studied the logistic map

Xn+1 = a Xn (1 - Xn)

where a is the breeding rate and Xn is the population of the n-th generation.

For a = 3, the population converges toward the fixed point 1 - (1/a) = 0.67.
For a = 3.9, the iterates never settle down: this is the chaos area.

< Figure: Xn versus n for a = 3 (convergence) and for a = 3.9 (chaos area) >
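May's two regimes can be checked numerically. A minimal sketch, assuming the standard logistic map Xn+1 = a Xn (1 - Xn):

```python
def logistic_orbit(a, x0, n):
    """Iterate the logistic map x -> a*x*(1-x) for n steps."""
    x = x0
    for _ in range(n):
        x = a * x * (1.0 - x)
    return x

# a = 3: the orbit settles (slowly) toward the fixed point 1 - 1/a = 2/3.
x_conv = logistic_orbit(3.0, 0.2, 10_000)
print(abs(x_conv - (1 - 1/3)))  # small

# a = 3.9: two orbits starting only 1e-9 apart separate completely --
# sensitive dependence on initial conditions, the hallmark of chaos.
x, y = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(200):
    x = 3.9 * x * (1.0 - x)
    y = 3.9 * y * (1.0 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # order 1: the tiny initial difference has been amplified
```

This is the butterfly effect in miniature: the equation is fully deterministic, yet any error in the initial condition, however small, eventually dominates the prediction.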
< Figure: a row of 61 boxes, each 0 or 1, evolved over 43 generations; plot of the number of black boxes for each generation (vertical axis 10-60, horizontal axis generation 1-43) >
23. Fractal
A fractal is generally "a rough or fragmented geometric shape that can
be subdivided in parts, each of which is (at least approximately) a
reduced-size copy of the whole," a property called self-similarity. The
term was coined by Benoît Mandelbrot in 1975 and was derived from
the Latin fractus meaning "broken" or "fractured".
Blood vessels are a natural example: their fractal branching provides a wide
exchange surface area within a small volume.
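Self-similarity can be made concrete with the Koch curve, a standard example not mentioned in the text above: each refinement replaces every segment with four segments one-third as long, so the curve is built of reduced-size copies of itself.

```python
import math

def koch_length(n_iterations, base_length=1.0):
    """Length of the Koch curve after n refinements: each step replaces
    every segment with 4 segments 1/3 as long, multiplying length by 4/3."""
    return base_length * (4.0 / 3.0) ** n_iterations

# Similarity dimension: a shape made of N copies of itself, each scaled
# down by a factor s, has dimension log(N) / log(s).
koch_dim = math.log(4) / math.log(3)
print(koch_length(5))  # the length grows without bound as n increases
print(koch_dim)        # about 1.26: between a line (1) and a plane (2)
```

The non-integer dimension is what makes such shapes useful models for structures like blood vessels, which must pack a large surface into a small volume.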
24. Virtual reality
Virtual reality (VR) is a technology which allows a user to interact with a
computer-simulated environment, be it a real or imagined one. Most current
virtual reality environments are primarily visual experiences, displayed
either on a computer screen or through special stereoscopic displays, but
some simulations include additional sensory information, such as sound
through speakers or headphones. Some advanced, haptic systems now
include tactile information, generally known as force feedback, in medical
and gaming applications. Users can interact with a virtual environment or a
virtual artifact (VA) either through the use of standard input devices such as
a keyboard and mouse, or through multimodal devices such as a wired
glove, the Polhemus boom arm, or an omnidirectional treadmill. The
simulated environment can be similar to the real world, for example,
simulations for pilot or combat training, or it can differ significantly from
reality, as in VR games. In practice, it is currently very difficult to create a
high-fidelity virtual reality experience, due largely to technical limitations on
processing power, image resolution and communication bandwidth.
However, those limitations are expected to eventually be overcome as
processor, imaging and data communication technologies become more
powerful and cost-effective over time.
25. Bayes’ Theorem
Bayes' theorem (also known as Bayes' rule or Bayes' law) is a result in
probability theory, which relates the conditional and marginal probability
distributions of random variables. In some interpretations of probability,
Bayes' theorem tells how to update or revise beliefs in light of new evidence
a posteriori.
The probability of an event A conditional on another event B is generally
different from the probability of B conditional on A. However, there is a
definite relationship between the two, and Bayes' theorem is the statement
of that relationship.
Bayes' theorem relates the conditional and marginal probabilities of
stochastic events A and B:

P(A | B) = P(B | A) P(A) / P(B)
Each term in Bayes' theorem has a conventional name:
P(A) is the prior probability or marginal probability of A. It is "prior" in the
sense that it does not take into account any information about B.
P(A|B) is the conditional probability of A, given B. It is also called the
posterior probability because it is derived from or depends upon the
specified value of B.
P(B|A) is the conditional probability of B given A.
P(B) is the prior or marginal probability of B, and acts as a normalizing
constant.
L(A|B) is the likelihood of A given fixed B. Here the likelihood equals a
conditional probability, L(A | B) = P(B | A); in general, however, a likelihood
may be scaled by an arbitrary constant factor, so it is only proportional to,
not necessarily equal to, a probability.
< Figure: Venn diagram of overlapping events A and B >

P(A | B) P(B) = P(B | A) P(A)

If P(B) = P(A), then P(A | B) = P(B | A).
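The update rule is only a few lines of code. A minimal sketch (the function and argument names are my own), using the law of total probability to expand P(B) in the denominator:

```python
def bayes_posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from P(A), P(B|A) and P(B|not A).

    The normalizing constant P(B) is expanded by total probability:
    P(B) = P(B|A) P(A) + P(B|not A) P(not A).
    """
    prior_not_a = 1.0 - prior_a
    p_b = p_b_given_a * prior_a + p_b_given_not_a * prior_not_a
    return p_b_given_a * prior_a / p_b
```

The worked examples in the rest of this section all reduce to one call of this form: a prior, a likelihood under each hypothesis, and a normalization.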
Example 1
Bayes' theorem is useful in evaluating the result of drug tests. Suppose a certain
drug test is 99% sensitive and 99% specific, that is, the test will correctly identify
a drug user as testing positive 99% of the time, and will correctly identify a
non-user as testing negative 99% of the time. This would seem to be a relatively
accurate test, but Bayes' theorem will reveal a potential flaw. Let's assume a
corporation decides to test its employees for opium use, and 0.5% of the
employees use the drug. We want to know the probability that, given a positive
drug test, an employee is actually a drug user. Let "D" be the event of being a
drug user and "N" indicate being a non-user. Let "+" be the event of a positive
drug test. We need to know the following:
Pr(D), or the probability that the employee is a drug user, regardless of any
other information. This is 0.005, since 0.5% of the employees are drug users.
Pr(N), or the probability that the employee is not a drug user. This is 1-Pr(D), or
0.995.
Pr(+|D), or the probability that the test is positive, given that the employee is a
drug user. This is 0.99, since the test is 99% accurate.
Pr(+|N), or the probability that the test is positive, given that the employee is not
a drug user. This is 0.01, since the test will produce a false positive for 1% of
non-users.
Pr(+), or the probability of a positive test event, regardless of other information.
This is 0.0149 or 1.49%, which is found by adding the probability that the test
will produce a true positive result in the event of drug use (= 99% x 0.5% =
0.495%) plus the probability that the test will produce a false positive in the
event of non-drug use (= 1% x 99.5% = 0.995%).
Given this information, we can compute the probability that an employee who
tested positive is actually a drug user:
P(D | +) P(+) = P(+ | D) P(D)

P(D | +) = P(+ | D) P(D) / P(+)
         = P(+ | D) P(D) / [P(+ | D) P(D) + P(+ | N) P(N)]
         = (0.99 x 0.005) / (0.99 x 0.005 + 0.01 x 0.995)
         = 0.3322

Despite the high accuracy of the test, the probability that the employee is
actually a drug user is only about 33%. The rarer the condition for which we are
testing, the greater the percentage of positive tests that will be false positives.
This illustrates why it is important to do follow-up tests.

< Figure: tree diagram — 0.5% drug users and 99.5% non-users; the test is
positive for 99% of users and 1% of non-users >
Example 2
Suppose there are two bowls full of cookies. Bowl #1 has 10 chocolate chip
cookies and 30 plain cookies, while bowl #2 has 20 of each. Fred picks a
bowl at random, and then picks a cookie at random. We may assume there is
no reason to believe Fred treats one bowl differently from another, likewise
for the cookies. The cookie turns out to be a plain one. How probable is it that
Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since
there are more plain cookies in bowl #1. The precise answer is given by
Bayes' theorem. But first, we can clarify the situation by rephrasing the
question to "what’s the probability that Fred picked bowl #1, given that he has
a plain cookie?” Thus, to relate to our previous explanation, the event A is
that Fred picked bowl #1, and the event B is that Fred picked a plain cookie.
To compute Pr(A|B), we first need to know:
Pr(A), or the probability that Fred picked bowl #1 regardless of any other
information. Since Fred is treating both bowls equally, it is 0.5.
Pr(B), or the probability of getting a plain cookie regardless of any information
about the bowls. It is computed by total probability: sum, over the bowls, the
probability of getting a plain cookie from that bowl multiplied by the
probability of selecting that bowl.
We know from the problem statement that the probability of getting a plain
cookie from bowl #1 is 0.75, and the probability of getting one from bowl #2 is
0.5, and since Fred is treating both bowls equally the probability of selecting
any one of them is 0.5. Thus, the probability of getting a plain cookie overall
is 0.75×0.5 + 0.5×0.5 = 0.625. Or, to put it more simply, the proportion of all
cookies that are plain is 50 out of 80 = 0.625, though this shortcut works only
because both bowls contain the same number of cookies (40).
Pr(B|A), or the probability of getting a plain cookie given that Fred has
selected bowl #1. From the problem statement, we know this is 0.75, since
30 out of 40 cookies in bowl #1 are plain.
Given all this information, we can compute the probability of Fred having
selected bowl #1 given that he got a plain cookie, as such:
Pr(A | B) = Pr(B | A) Pr(A) / Pr(B) = (0.75 x 0.5) / 0.625 = 0.6

As we expected, it is more than half.

< Figure: each bowl is selected with probability P(B1) = P(B2) = 0.5;
Bowl #1 holds 30 plain and 10 chocolate cookies, Bowl #2 holds 20 of each >

In the slide's notation, with C the event of drawing a plain cookie:

P(B1) = P(B2) = 0.5
P(C | B1) = 30/40 = 0.75
P(C | B2) = 20/40 = 0.5

therefore,

P(B1 | C) P(C) = P(C | B1) P(B1)

P(B1 | C) = P(C | B1) P(B1) / P(C)
          = P(C | B1) P(B1) / [P(C | B1) P(B1) + P(C | B2) P(B2)]
          = (0.75 x 0.5) / (0.75 x 0.5 + 0.5 x 0.5)
          = 0.6
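A short simulation of Fred's procedure (a sketch; the trial count and seed are arbitrary) reproduces the 0.6:

```python
import random

random.seed(1)
bowls = {1: ["plain"] * 30 + ["chocolate"] * 10,
         2: ["plain"] * 20 + ["chocolate"] * 20}

from_bowl1 = plain_draws = 0
for _ in range(100_000):
    bowl = random.choice([1, 2])         # pick a bowl at random
    cookie = random.choice(bowls[bowl])  # then a cookie at random
    if cookie == "plain":
        plain_draws += 1
        from_bowl1 += (bowl == 1)

print(from_bowl1 / plain_draws)  # close to 0.6
```

Conditioning on the evidence corresponds exactly to the filtering step: we keep only the trials where a plain cookie was drawn, then ask what fraction of those came from bowl #1.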
Ex. 3
Suppose 10% of people are stock holders (H) and 90% are not (N). Among
stock holders, 70% have saved $1M (event $); among non-holders, 30% have.
What is the probability that a $1M saver is a stock holder?

P(H | $) P($) = P($ | H) P(H)

P(H | $) = P($ | H) P(H) / P($)
         = P($ | H) P(H) / [P($ | H) P(H) + P($ | N) P(N)]
         = (0.7 x 0.1) / (0.7 x 0.1 + 0.3 x 0.9)
         = 0.2059
Ex. 4
Suppose 30% of people are car holders (H) and 70% are not (N). Among car
holders, 80% purchase a new car (event B); among non-holders, 20% do.
What is the probability that a new-car purchaser is already a car holder?

P(H | B) P(B) = P(B | H) P(H)

P(H | B) = P(B | H) P(H) / P(B)
         = P(B | H) P(H) / [P(B | H) P(H) + P(B | N) P(N)]
         = (0.8 x 0.3) / (0.8 x 0.3 + 0.2 x 0.7)
         = 0.6316
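Both short examples follow the same pattern and can be checked in two lines of arithmetic:

```python
# Ex. 3: P(stock holder | $1M saved) = P($|H)P(H) / [P($|H)P(H) + P($|N)P(N)]
ex3 = (0.7 * 0.1) / (0.7 * 0.1 + 0.3 * 0.9)
# Ex. 4: P(car holder | new-car purchase), same structure.
ex4 = (0.8 * 0.3) / (0.8 * 0.3 + 0.2 * 0.7)
print(round(ex3, 4), round(ex4, 4))  # 0.2059 0.6316
```

Note the contrast: in Ex. 3 the strong likelihood (0.7 vs 0.3) is not enough to overcome the 10% prior, while in Ex. 4 the larger prior (30%) pushes the posterior well past one half.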