# Probability

15.1. Introduction
Just think of these ….
(i) A missing helicopter is reported to have crashed somewhere in the rectangular region shown in figure. What is the probability that it crashed inside the lake shown in the figure?
(ii) A carton consists of 100 shirts of which 88 are good, 8 have minor defects and 4 have major defects. Jimmy, a trader, will only accept the shirts which are good, but Sujatha, another trader, will only reject the shirts which have major defects. One shirt is drawn at random from the carton. What is the probability that
(a) It is acceptable to Jimmy?
(b) It is acceptable to Sujatha?
To answer these questions, we first need to understand this topic:
Probability
It is remarkable that a science which began with the consideration of games of chance should be elevated to the rank of the most important subject of human knowledge.
In everyday life, we come across statements such as
(1) It will probably rain today.
(2) I doubt that he will pass the test.
(3) Most probably, Kavita will stand first in the annual examination.
(4) Chances are high that the prices of diesel will go up.
(5) There is a 50-50 chance of India winning the toss in today’s match.
The words ‘probably’, ‘doubt’, ‘most probably’, ‘chances’, etc., used in the statements above involve an element of uncertainty.
Use of Probability
The uncertainty of ‘probably’, etc., can in many cases be measured numerically by means of ‘probability’.
Though probability started with gambling, it has been used extensively in the fields of Physical Sciences, Commerce, Biological Sciences, Medical Sciences, Weather Forecasting, etc.
History
The concept of probability developed in a very strange manner. In 1654, a gambler, Chevalier de Mere, approached the well-known ${17}^{th}$ century French philosopher and mathematician, Blaise Pascal, regarding certain problems related to dice.
Pascal became interested in these problems, studied them and discussed them with another French mathematician, Pierre de Fermat. Both Pascal and Fermat solved the problems independently. This work was the beginning of the Probability Theory.
The first book on the subject was written by the Italian mathematician, J. Cardan (1501 – 1576). The title of the book was ‘Book on Games of Chance’ (Liber de Ludo Aleae), published in 1663. Notable contributions were also made by mathematicians J. Bernoulli (1654-1705), P.Laplace (1749-1827), A. A. Markov (1856-1922) and A. N. Kolmogorov (born 1903).
15.2. Terms & Definitions
1. Experiment
An operation which results in some well-defined outcomes is called an experiment.
2. Random experiment
An experiment whose possible outcomes are all known in advance, but whose exact outcome cannot be predicted with certainty, is called a random experiment.
In other words, if an experiment is performed many times under similar conditions and the outcome is not the same each time, then such an experiment is called a random experiment.
Example
Tossing a fair coin is a random experiment: on any toss either heads or tails will come up, but if we toss the coin again and again, the outcome will not be the same each time.
3. Sample space
The set of all possible outcomes of a random experiment is called the sample space for the experiment. It is usually denoted by S.
Example
When a coin is tossed, either heads or tails will come up. If ‘H’ denotes the occurrence of heads and ‘T’ denotes the occurrence of tails, then
Sample space S = {H, T}
4. Sample point (or) Event point
Each element of the sample space is called a sample point or an event point.
Example
When a die is thrown, sample space S = {1, 2, 3, 4, 5, 6}
Here 1, 2, 3, 4, 5 and 6 are the sample points.
Types of Events
i) Simple event or Elementary event: An event is called ‘simple event’, if it is a singleton subset of the sample space ‘S’.
Example: When a coin is tossed, sample space S = {H, T}
Let A = {H} = the event of occurrence of heads and B = {T} = the event of occurrence of tails. Here ‘A’ and ‘B’ are simple events.
ii) Mixed event or compound event: A subset of the sample space ‘S’ which contains more than one element is called a compound event.
Example: When a die is thrown,
Sample space S = {1, 2, 3, 4, 5, 6}
Let A = {1, 3, 5} = the event of occurrence of odd number.
B = {5, 6} = the event of occurrence of a number greater than 4.
Here ‘A’ and ‘B’ are compound events.
iii) Sure event: If a random experiment ‘E’ has a discrete sample space ‘S’, then ‘S’ itself is an event (E = S) called the sure or certain event of ‘E’.
Example: Getting heads or tails in a single toss of a coin is a sure event.
iv) Impossible event : The empty subset ‘$\mathrm{\varphi }$’ of ‘S’ (E = { }) is called the impossible event or null event of ‘E’.
Example: Getting heads and tails, both in a single toss of a coin is an impossible event.
v) Complementary event: For an event ‘A’ of a random experiment ‘E’, the event complementary to ‘A’ is the event that “A does not occur”. It is denoted by ${A}^{\prime}$, ${A}^{c}$ or $\overline{A}$.
vi) Equally likely events : Events are said to be equally likely when there is no reason to expect any one of them rather than any one of the others.
Example: When an unbiased die is thrown, all the six faces 1, 2, 3, 4, 5 and 6 are equally likely to come up.
15.3. Probability
Consider the drawing of a card from a pack of cards which contains 52 cards. Then n(S) = 52.
Let ‘E’ be the event that an ace is drawn. Since there are four aces, n(E) = 4. i.e., ‘E’ can happen in any one of 4 ways, but a card can be drawn in 52 ways.
$\therefore$ ‘E’ has 4 chances out of 52 to happen. This is expressed by saying that the probability (chance) that ‘E’ happens is given by $P\left(E\right)=\frac{4}{52}=\frac{n\left(E\right)}{n\left(S\right)}$
Definition : If ‘S’ is the finite sample space of an experiment, every outcome of ‘S’ is equally likely and ‘E’ is an event (i.e., E $\subset$ S), then the probability that ‘E’ takes place is defined as $P\left(E\right)=\frac{n\left(E\right)}{n\left(S\right)}$

Classical Definition of Probability
The probability of an event E is the ratio of the number of cases in its favour to the total number of cases:
$P\left(E\right)=\frac{n\left(E\right)}{n\left(S\right)}$

Note : Since ‘S’ contains all possible outcomes of the experiment and ‘E’ contains those outcomes in which ‘E’ happens, i.e., outcomes favourable to ‘E’, we can say that
$P\left(E\right)=\frac{n\left(E\right)}{n\left(S\right)}$
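The classical formula can be tried out on the ace example above with a short Python sketch (an illustration, not part of the original text; `fractions.Fraction` keeps the answer exact):

```python
from fractions import Fraction

def classical_probability(favourable: int, total: int) -> Fraction:
    # P(E) = n(E) / n(S), valid only when all outcomes are equally likely
    return Fraction(favourable, total)

# Drawing an ace from a 52-card pack: n(E) = 4, n(S) = 52
p_ace = classical_probability(4, 52)
print(p_ace)  # 1/13
```

The fraction reduces automatically, so 4/52 is reported in lowest terms as 1/13.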

Note:
i) The number of ways of selecting ‘r’ different things out of ‘n’ different things is $C_{r}^{n}=\frac{n!}{\left(n–r\right)!\,r!}=\frac{n\left(n–1\right)\left(n–2\right)\dots \left(n–r+1\right)}{r!}$
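The falling-factorial form of the formula above can be computed directly; here is a small Python sketch (illustrative only), checked against the standard library's `math.comb`:

```python
import math

def n_choose_r(n: int, r: int) -> int:
    # numerator n(n-1)...(n-r+1), then divide by r!
    numerator = 1
    for k in range(n, n - r, -1):
        numerator *= k
    return numerator // math.factorial(r)

print(n_choose_r(5, 2))   # 10
print(math.comb(5, 2))    # 10, the standard-library equivalent
```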
Activity
(i) Take any coin, toss it ten times and note down the number of times heads and tails come up. Record your observations in the form of following table.

| Number of times the coin is tossed | Number of times heads comes up | Number of times tails comes up |
| --- | --- | --- |
| 10 | – | – |

Write down the values of the following fractions:
$\frac{\text{Number of times heads comes up}}{\text{Total number of times the coin is tossed}}$ ; $\frac{\text{Number of times tails comes up}}{\text{Total number of times the coin is tossed}}$
(ii) Toss the coin twenty times and in the same way record your observations as above. Again find the values of the fractions given above for this collection of observations.
(iii) Repeat the same experiment by increasing the number of tosses and record the number of heads and tails. Then find the values of the corresponding fractions.
You will find that as the number of tosses gets larger, the values of the fractions come closer to 0.5. To record what happens in more and more tosses, the following group activity can also be performed :
Activity
Divide the class into groups of 2 or 3 students each. Let a student in each group toss a coin 15 times. Another student in each group should record the observations regarding heads and tails. [Note that coins of the same denomination should be used in all the groups. It will be treated as if only one coin has been tossed by all the groups.]
Now, on the blackboard, make a table like Table. First, Group 1 can write down its observations and calculate the resulting fractions. Then Group 2 can write down its observations, but will calculate the fractions for the combined data of Groups 1 and 2, and so on. (We may call these fractions as cumulative fractions.) We have noted the first three rows based on the observations given by one class of students.

| Group (1) | Number of heads (2) | Number of tails (3) | Cumulative fraction of heads (4) | Cumulative fraction of tails (5) |
| --- | --- | --- | --- | --- |
| 1 | 3 | 12 | $\frac{3}{15}$ | $\frac{12}{15}$ |
| 2 | 7 | 8 | $\frac{7+3}{15+15}=\frac{10}{30}$ | $\frac{8+12}{15+15}=\frac{20}{30}$ |
| 3 | 7 | 8 | $\frac{7+10}{15+30}=\frac{17}{45}$ | $\frac{8+20}{15+30}=\frac{28}{45}$ |
| 4 | . | . | . | . |

What do you observe in the table? You will find that as the total number of tosses of the coin increases, the values of the fractions in Columns (4) and (5) come nearer and nearer to 0.5.
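The tossing activity can also be simulated on a computer. The Python sketch below (an illustration, with a fixed seed so the run is repeatable) shows the fraction of heads settling near 0.5 as the number of tosses grows:

```python
import random

def head_fraction(tosses: int, seed: int = 42) -> float:
    # simulate tossing a fair coin `tosses` times; return fraction of heads
    rng = random.Random(seed)
    heads = sum(rng.choice("HT") == "H" for _ in range(tosses))
    return heads / tosses

for n in (10, 100, 10_000):
    print(n, head_fraction(n))
```

For small n the fraction can be far from 0.5; only for large n does it reliably come close.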

i) Throw a die 20 times, and note down the number of times each of the scores 1, 2, 3, 4, 5 and 6 turns up. Record the observations in the following table:

| Score | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Number of times it turns up in 20 throws | – | – | – | – | – | – |

Find the values of the following fractions:

$\frac{\text{Number of times 1 turned up}}{\text{Total number of throws}}$ , $\frac{\text{Number of times 2 turned up}}{\text{Total number of throws}}$ , …, $\frac{\text{Number of times 6 turned up}}{\text{Total number of throws}}$
ii) Now throw the die 40 times; record the observations and calculate the fractions as done in (i). As the number of throws of the die increases, you will find that the value of each fraction calculated in (i) and (ii) comes closer and closer to $\frac{1}{6}$.
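A seeded Python simulation (illustrative, not part of the original activity) makes the same point for the die: with many throws, every face's fraction approaches 1/6:

```python
import random
from collections import Counter

def face_fractions(throws, seed=7):
    # simulate `throws` throws of a fair die; return fraction for each face
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(throws))
    return {face: counts[face] / throws for face in range(1, 7)}

print(face_fractions(60_000))  # each value is close to 1/6 ≈ 0.1667
```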
Analysis of activities
In Activity 1, each toss of a coin is called a trial. Similarly, in the die-throwing activity, each throw of the die is a trial. So, a trial is an action which results in one or several outcomes. The possible outcomes in Activity 1 were Head and Tail, whereas in the die-throwing activity the possible outcomes were 1, 2, 3, 4, 5 and 6.
In Activity 1, the getting of a head in a particular toss is an event with outcome ‘head’. Similarly, getting a tail is an event with outcome ‘tail’. In the die-throwing activity, the getting of a particular number, say 1, is an event with outcome 1.
With this background, let us now see what probability is. Based on what we directly observe as the outcomes of our trials, we find experimental or empirical probability.
Let n be the total number of trials. The empirical probability P(E) of an event E happening is given by

$P\left(E\right)=\frac{\text{Number of trials in which E happened}}{\text{Total number of trials}}$
In this chapter, we shall be finding the empirical probability, though we will write ‘probability’ for convenience.
Example 1
A coin is tossed 1000 times with the following frequencies :
Heads : 455, Tails : 545
Compute the probability for each event.
Solution
Since the coin is tossed 1000 times, the total number of trials is 1000. Let us call the events of getting a head and of getting a tail E and F, respectively. Then the number of times E happens, i.e., the number of times a head comes up, is 455.
So, the probability of E $=\frac{\text{Number of heads}}{\text{Total number of trials}}$

i.e., $P\left(E\right)=\frac{455}{1000}=0.455$
Similarly, the probability of the event of getting a tail $=\frac{\text{Number of tails}}{\text{Total number of trials}}$

i.e., $P\left(F\right)=\frac{545}{1000}=0.545$
Note that in the example above, P(E) + P(F) = 0.455 + 0.545 = 1, and E and F are the only two possible outcomes of each trial.
Example 2
Two coins are tossed simultaneously 500 times, and we get

| Outcome | Two heads | One head | No head |
| --- | --- | --- | --- |
| Frequency | 105 | 275 | 120 |

Find the probability of occurrence of each of these events.
Solution
Let us denote the events of getting two heads, one head and no head by ${E}_{1}$, ${E}_{2}$ and ${E}_{3}$, respectively. So,
$P\left({E}_{1}\right)=\frac{105}{500}=0.21$

$P\left({E}_{2}\right)=\frac{275}{500}=0.55$

$P\left({E}_{3}\right)=\frac{120}{500}=0.24$
Observe that P(${E}_{1}$) + P(${E}_{2}$) + P(${E}_{3}$) = 1. Also ${E}_{1}$, ${E}_{2}$ and ${E}_{3}$ cover all the outcomes of a trial.
Example 3
A die is thrown 1000 times, with the frequencies for the outcomes 1, 2, 3, 4, 5 and 6 as given in the following table:

| Outcome | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Frequency | 179 | 150 | 157 | 149 | 175 | 190 |

Find the probability of getting each outcome.
Solution
Let ${E}_{i}$ denote the event of getting the outcome i, where i = 1, 2, 3, 4, 5, 6. Then
Probability of the outcome 1 = $P\left({E}_{1}\right)=\frac{179}{1000}=0.179$
Similarly, $P\left({E}_{2}\right)=\frac{150}{1000}=0.15,P\left({E}_{3}\right)=\frac{157}{1000}=0.157$

$P\left({E}_{4}\right)=\frac{149}{1000}=0.149,P\left({E}_{5}\right)=\frac{175}{1000}=0.175$

and $P\left({E}_{6}\right)=\frac{190}{1000}=0.19$
Note that P(${E}_{1}$) + P(${E}_{2}$) + P(${E}_{3}$) + P(${E}_{4}$) + P(${E}_{5}$) + P(${E}_{6}$) = 1
Also note that :
(i) The probability of each event lies between 0 and 1.
(ii) The sum of all the probabilities is 1.
(iii) ${E}_{1}$, ${E}_{2}$, …, ${E}_{6}$ cover all the possible outcomes of a throw of a die.
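The calculation in Example 3 is mechanical enough to script. This Python sketch (illustrative; variable names are my own) divides each frequency by the total and confirms the three observations above:

```python
# frequency table from Example 3: outcome -> number of times it occurred
freqs = {1: 179, 2: 150, 3: 157, 4: 149, 5: 175, 6: 190}

total = sum(freqs.values())                        # 1000 trials in all
probs = {face: f / total for face, f in freqs.items()}

print(probs[1])                                    # 0.179
print(all(0 <= p <= 1 for p in probs.values()))    # each P(E_i) lies in [0, 1]
print(sum(probs.values()))                         # the probabilities sum to 1
```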
Results based on the definition of probability
The following results are the direct consequences of the definition of probability.
i) If ‘E’ is an event of sample space ‘S’, then 0 $\le$ P(E) $\le$ 1
Proof
Since E $\subset$ S, we have 0 $\le$ n(E) $\le$ n(S). Dividing throughout by n(S), we get
$\frac{0}{n\left(S\right)}\le \frac{n\left(E\right)}{n\left(S\right)}\le \frac{n\left(S\right)}{n\left(S\right)}$
$⇒$ 0 $\le$ P(E) $\le$ 1 as required.
Note that P(E) = 0 if and only if ‘E’ is an impossible event and P(E) = 1 if and only if ‘E’ is a certain event.
ii) If ‘E’ is an event of sample space ‘S’ and ${E}^{|}$ (or $\overline{)E}$) is the event that E does not happen, then P(E’) = 1 – P(E).
Proof
Since ‘${E}^{|}$’ is the event that ‘E’ does not happen, ‘${\mathrm{E}}^{|}$’ is the complement of ‘E’. Therefore, all the members of ‘S’ which are not in ‘E’ are in ‘${E}^{|}$’.
$\therefore$ n(E) + n(${E}^{|}$) = n(S).
Dividing by n(S), we get
$\frac{n\left(E\right)}{n\left(S\right)}+\frac{n\left({E}^{|}\right)}{n\left(S\right)}=\frac{n\left(S\right)}{n\left(S\right)}⇒P\left(E\right)+P\left({E}^{|}\right)=1$
$\therefore$ P(${E}^{|}$) = 1 – P(E), as required.
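Both results can be checked concretely by counting outcomes in a finite sample space. A Python sketch (illustrative only), using one throw of a die with E = “an odd number turns up”:

```python
from fractions import Fraction

S = set(range(1, 7))               # sample space: one throw of a die
E = {n for n in S if n % 2 == 1}   # event: an odd number turns up

p_E = Fraction(len(E), len(S))         # P(E)  = n(E) / n(S)
p_Ec = Fraction(len(S - E), len(S))    # P(E') counted directly from S - E

print(p_E, p_Ec)                       # 1/2 1/2
print(0 <= p_E <= 1)                   # result (i): True
print(p_E + p_Ec == 1)                 # result (ii): True
```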
15.4. Combination of two events
i) Union of events : If A and B are two events of the sample space S, then A $\cup$ B (or A + B) is the event that either A or B (or both) take place.
ii) Intersection of events : If A and B are two events of the sample space S, then A $\cap$ B (or AB) is the event that both A and B take place.
iii) Mutually exclusive events : Two events A and B of the sample space S are said to be mutually exclusive if they cannot occur together, i.e., if A $\cap$ B = $\mathrm{\varphi }$.
Example : When two coins are tossed, the number of elementary events is 4 and they are (H, H), (H, T), (T, H), (T, T). These are mutually exclusive.
iv) Exhaustive event : Two events A and B of the sample space S are said to be exhaustive if A $\cup$ B = S, i.e., A $\cup$ B contains all sample points.
Example : In tossing a coin, there are two exhaustive elementary events. They are head and tail.
Note
i) A and ${A}^{|}$ are mutually exclusive as well as exhaustive events, as
A $\cap$ ${A}^{|}$ = { } and A $\cup$ ${A}^{|}$ = S.
ii) A – B denotes the occurrence of event A but not of B. Thus,
A – B occurs $⇔$ A occurs and B does not occur.
Clearly,
A – B = A $\cap$ ${\mathrm{B}}^{|}$, B – A = B $\cap$ ${A}^{|}$
Statement
If A and B are any two events in a sample space S, then the probability of occurrence of at least one of the events A and B is given by P(A $\cup$ B) = P(A) + P(B) – P(A $\cap$ B).
Proof
From set theory, we know that n(A $\cup$ B)
= n(A) + n(B) – n(A $\cap$B).
Dividing both sides by n(S), we get $\frac{n\left(A\cup B\right)}{n\left(S\right)}=\frac{n\left(A\right)}{n\left(S\right)}+\frac{n\left(B\right)}{n\left(S\right)}–\frac{n\left(A\cap B\right)}{n\left(S\right)}$

$⇒$ P(A $\cup$B) = P(A) + P(B) – P(A $\cap$B).
Note
i) If A and B are mutually exclusive events, then A $\cap$B = $\mathrm{\varphi }$ and hence, P(A $\cap$ B) = 0.
ii) Two events A and B are mutually exclusive if and only if
P(A$\cup$B) = P(A) + P(B).
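The addition theorem can be verified by counting, using the compound events from the die example earlier (A = odd number, B = number greater than 4). A Python sketch, for illustration:

```python
from fractions import Fraction

S = set(range(1, 7))     # sample space: one throw of a die
A = {1, 3, 5}            # event: an odd number turns up
B = {5, 6}               # event: a number greater than 4 turns up

def P(event):
    # classical probability on the finite sample space S
    return Fraction(len(event), len(S))

lhs = P(A | B)                      # P(A ∪ B)
rhs = P(A) + P(B) - P(A & B)        # P(A) + P(B) - P(A ∩ B)
print(lhs, rhs)                     # 2/3 2/3
```

Here A and B are not mutually exclusive (both contain 5), so the subtracted term P(A ∩ B) = 1/6 is what makes the two sides agree.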
Statement: If A, B and C are three events in a sample space S, then
P(A $\cup$ B $\cup$ C) = P(A) + P(B) + P(C) – P(A $\cap$ B) – P(B $\cap$ C) – P(A $\cap$ C) + P(A $\cap$ B $\cap$ C)
Proof: From set theory, we know that
n(A $\cup$ B) = n(A) + n(B) – n(A $\cap$ B)
Now n(A $\cup$ B $\cup$ C) = n[A $\cup$ (B $\cup$ C)]
= n(A) + n(B $\cup$ C) – n[A $\cap$ (B $\cup$ C)]
= n(A) + n(B $\cup$ C) – n[(A $\cap$ B) $\cup$ (A $\cap$ C)]
= n(A) + n(B $\cup$ C) – n(D $\cup$ E), where D = A $\cap$ B and E = A $\cap$ C
= n(A) + n(B $\cup$ C) – [n(D) + n(E) – n(D $\cap$ E)]
= n(A) + n(B) + n(C) – n(B $\cap$ C) – n(D) – n(E) + n(D $\cap$ E)
= n(A) + n(B) + n(C) – n(B $\cap$ C) – n(A $\cap$ B) – n(A $\cap$ C) + n(A $\cap$ B $\cap$ C)
[$\because$ D $\cap$ E = (A $\cap$ B) $\cap$ (A $\cap$ C) = A $\cap$ B $\cap$ C]
Dividing both sides by n(S), we get

$\frac{n\left(A\cup B\cup C\right)}{n\left(S\right)}=\frac{n\left(A\right)}{n\left(S\right)}+\frac{n\left(B\right)}{n\left(S\right)}+\frac{n\left(C\right)}{n\left(S\right)}–\frac{n\left(A\cap B\right)}{n\left(S\right)}–\frac{n\left(B\cap C\right)}{n\left(S\right)}–\frac{n\left(A\cap C\right)}{n\left(S\right)}+\frac{n\left(A\cap B\cap C\right)}{n\left(S\right)}$

$\therefore$ P(A $\cup$ B $\cup$ C) = P(A) + P(B) + P(C) – P(A $\cap$ B) – P(B $\cap$ C) – P(A $\cap$ C) + P(A $\cap$ B $\cap$ C)
Note:
(1) If A, B, C are mutually exclusive events, then,
(A $\cap$ B) = $\mathrm{\varphi }$, (B $\cap$ C) = $\mathrm{\varphi }$, (A $\cap$ C) = $\mathrm{\varphi }$ and (A $\cap$ B $\cap$ C) = $\mathrm{\varphi }$
$\therefore$ P(A $\cup$ B $\cup$C) = P(A) + P(B) + P(C)
(2) If A and B are any two events, then
(A – B) $\cap$ (A $\cap$B) = $\mathrm{\varphi }$
and A = (A – B) $\cup$ (A $\cap$ B)
$\therefore$ P(A) = P(A – B) + P(A $\cap$ B)
= P(A $\cap$ ${B}^{‘}$) + P(A $\cap$ B) [$\because$ A – B = A $\cap$ ${B}^{‘}$]
or P(A) – P(A $\cap$ B) = P(A – B) = P(A $\cap$ ${B}^{‘}$)
Similarly, P(B) – P(A $\cap$ B) = P(B – A) = P(B $\cap$ ${A}^{‘}$)
GENERAL FORM OF ADDITION THEOREM OF PROBABILITY
P(${A}_{1}$ $\cup$ ${A}_{2}$ $\cup$ … $\cup$ ${A}_{n}$) = $\sum _{i=1}^{n}P\left({A}_{i}\right)–\sum _{i<j}P\left({A}_{i}\cap {A}_{j}\right)+\sum _{i<j<k}P\left({A}_{i}\cap {A}_{j}\cap {A}_{k}\right)–\dots +{\left(–1\right)}^{n–1}P\left({A}_{1}\cap {A}_{2}\cap \dots \cap {A}_{n}\right)$
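The general form (inclusion-exclusion) can be checked by brute force on small events: sum P over every non-empty sub-collection of the events with alternating signs and compare with P of the union. A Python sketch, with arbitrary example events chosen for illustration:

```python
from fractions import Fraction
from itertools import combinations

S = set(range(1, 7))     # sample space: one throw of a die

def P(event):
    # classical probability on the finite sample space S
    return Fraction(len(event), len(S))

def addition_theorem(events):
    # alternating sum over all non-empty sub-collections of the events
    total = Fraction(0)
    for r in range(1, len(events) + 1):
        for combo in combinations(events, r):
            total += (-1) ** (r - 1) * P(set.intersection(*combo))
    return total

events = [{1, 2}, {2, 3}, {3, 4}]
print(addition_theorem(events))          # 2/3
print(P(set().union(*events)))           # 2/3, i.e. P(A1 ∪ A2 ∪ A3)
```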