MCom I Semester Statistical Analysis Probability Study Material notes


MCom I Semester Statistical Analysis Probability Study Material notes: Historical Development of the Theory of Probability, Meaning and Definitions of Probability, Different Approaches to the Theory of Probability, Subjective or Personalistic Approach to Probability, Some Important Techniques for Counting, Fundamental Rule of Counting, Factorial Notations, Formulae Relating to Permutation and Combination, Numerical Illustrations Based on Permutations and Combinations.


Probability

The word ‘probability’ or ‘chance’ is very commonly used in day-to-day conversation, and generally people have a rough idea about its meaning. For example, we come across statements like “Probably it may rain tomorrow”, “It is likely that Mr. X may not come for taking his class today”, “The chances of teams A and B winning a certain match are equal”, “Probably you are right”, “It is possible that I may not be able to join you at the tea party”. All these terms — possible, probable, likely, etc. — convey the same sense, i.e., the event is not certain to take place or, in other words, there is uncertainty about the happening of the event in question.

A numerical measure of uncertainty is provided by a very important branch of Statistics called the “Theory of Probability”. In the words of Prof. Ya-lun Chou, “Statistics is the science of decision making with calculated risks in the face of uncertainty”.

Historical Development of the Theory of Probability

The theory of probability has its origin in the games of chance related to gambling, for instance, throwing of dice or coin, drawing cards from a pack of cards and so on. Historically, this theory originated in the 17th century.

The Italian mathematician Jerome Cardan (1501–1576) was the first to write a book on the subject of probability. The book was published after his death in 1663 with the title “Book on Games of Chance”. It contained rules by which the risks of gambling could be minimised. In the mid-seventeenth century the French mathematicians Blaise Pascal (1623–62) and Pierre de Fermat (1601–65) laid the systematic foundation of the mathematical theory of probability.

The major contribution to the theory of probability is by the Swiss mathematician James Bernoulli (1654–1705), who made an extensive study of the subject for twenty years. His book “Ars Conjectandi” (The Art of Conjecturing) was published posthumously in 1713.

The other contributors to the theory of probability are:

(1) Abraham de Moivre (1667–1754) published his work in his famous book “The Doctrine of Chances” in 1718.

(2) Thomas Bayes (1702-61) introduced the concept of Inverse Probability.

(3) The French mathematician Pierre-Simon de Laplace (1749–1827) published his monumental work “Théorie Analytique des Probabilités” (Analytical Theory of Probability) in 1812.

(4) R.A. Fisher and Richard von Mises introduced the empirical approach to the theory of probability.

Today, the theory of probability has been developed to a great extent and there is not even a single discipline in social, physical or natural sciences where probability theory is not used. It is extensively used in the quantitative analysis of business and economic problems.


Importance and Application of the Theory of Probability

In the beginning, the probability theory was successfully applied at the gambling tables. Gradually, it was applied in the solution of social, economic, political and business problems. In fact, it has become an indispensable tool for all types of formal studies that involve uncertainty.

Highlighting the importance of probability theory, Ya-lun Chou has beautifully pointed out that statistics, as a method of decision-making under uncertainty, is founded on probability theory, since probability is at once the language and the measure of uncertainty and the risks associated with it. Before learning statistical decision procedures, the reader must acquire an understanding of probability theory.

“Probability theory is of interest not only to card and dice players who were its god-fathers but also to all men of action, heads of industries or heads of armies whose success depends on decisions.”

-Emile Borel

“Probabilistic reasoning is used in such various fields as gambling, insurance, theoretical physics, biology, economics and many others.”

-Croxton and Cowden

Thus, the theory of probability has wide application in different fields of life; the major fields are :

(1) Basis of Statistical Laws : The fundamental laws of Statistics, viz., the ‘Law of Statistical Regularity’ and the ‘Law of Inertia of Large Numbers’, are based on the theory of probability.

(2) Importance in Games of Chance : The very inception of the theory of probability has been to unlock the intricacies of betting and games. It is now possible to decide whether it is worthwhile betting at a game. This is done by calculating the expected value of an action from the likely return on each outcome, given the chance of success attached to each. Thus, it is very helpful to foresee the uncertainties of betting and the chances of success.

(3) Use in Economic and Business Decisions : Knowledge of probabilistic methods has become increasingly essential in quantitative analysis of business and economic problems. In particular, probability theory is a basic component of the formal theory of decision-making under uncertainty. A thorough understanding of the fundamentals of probability theory will permit a businessman to deal with uncertainty in business situations in such a way that he can assess systematically the risks involved in each alternative, and consequently act to minimize risks.

(4) Basis of Statistical Decision Theory: The probability theory is a base for the ‘Statistical Decision Theory’. Decision theories are based on fundamental laws of probability and expected value. The empirical probability concept, based on experimental tests, provides scope for the application of probability in the real life situations.

(5) Basis of Test of Hypothesis and Tests of Significance : Various parametric and non-parametric tests like Z-test, t-test, F-test etc., are based on the theory of probability.

(6) Use in Sampling Method : The statistical inference about the size and the character of the population on the basis of samples is based on the likelihood that the samples drawn randomly shall bear the characteristics of the population from which they are drawn.

The estimation of the population parameters on the basis of sample statistics is also based on the same logic.

In view of these applications, it is evident that the theory of probability has both theoretical and practical significance in different aspects of life.


Meaning and Definitions of Probability

It is difficult to give a generally accepted definition of probability. In simple words, “probability” means the number of occasions that a particular event is likely to occur in a large population of events. The particular event may be expressed positively, where the event is likely to happen, or negatively, where the event is not likely to happen.

Such a chance of happening or not happening is considered with reference to a series of events or the whole of a population. But, in accordance with the Law of Statistical Regularity and the Law of Inertia of Large Numbers, we may have a large sample or several small samples that resemble the population quite closely. Therefore, it is not necessary that the chance of occurrence of an event is always with reference to the whole population.

The probability is based on certain laws of nature which undergo changes with the passage of time. There are four approaches to the theory of probability which have defined probability in their own ways. These approaches are :

(1) Classical Approach

(2) Relative Frequency Approach

(3) Subjective or Personalistic Approach

(4) Axiomatic Approach

Before discussing the various approaches of probability, let us have an understanding of the following terms:

(1) Simple Experiment or Trial : The term experiment refers to processes which result in different possible outcomes or observations.

(2) Outcome : The result of an experiment is called an outcome. The number of outcomes depends upon the nature of the experiment and may be finite or infinite. For example, consider the experiment of hitting a particular target by a marksman. There are only two outcomes: either hit or miss.

(3) Random Experiment : If in an experiment all the possible outcomes are known in advance and none of the outcomes can be predicted with certainty, then such an experiment is called a random experiment. For example, while tossing a coin, it can be specified that either head or tail will turn up, but we are not sure whether the outcome of a particular toss will be head or tail.

(4) Sample Space : A set of all possible outcomes of a random experiment is known as the sample space and is denoted by ‘S’. For example, (i) when a coin is tossed, the sample space is S = {H, T}; (ii) when two coins are tossed at a time, or one after the other, the sample space is S = {HH, HT, TH, TT}; (iii) S = {1, 2, 3, 4, 5, 6} is the sample space of rolling a die.
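The sample spaces listed above can be built mechanically. The following sketch (not part of the original notes) uses Python's itertools.product to enumerate every possible outcome:

```python
from itertools import product

def sample_space(symbols, repetitions):
    """Return every possible sequence of outcomes as a list of strings."""
    return ["".join(outcome) for outcome in product(symbols, repeat=repetitions)]

one_coin = sample_space("HT", 1)    # the space {H, T}
two_coins = sample_space("HT", 2)   # the space {HH, HT, TH, TT}
one_die = list(range(1, 7))         # the space {1, 2, 3, 4, 5, 6}

print(one_coin)
print(two_coins)
print(one_die)
```

Enumerating the space explicitly also gives the denominator needed later for classical probability calculations.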

(5) Events : A single outcome or a group of outcomes constitutes an event. Events are generally denoted by capital letters A, B, C, etc. Events can be of the following types :

(i) Simple and Compound Events : In the case of simple events we consider the probability of the happening or not happening of a single event. For example, we might be interested in finding out the probability of drawing a white ball from a bag containing 8 white and 7 red balls. On the other hand, in the case of compound events we consider the joint occurrence of two or more events. For example, if a bag contains 8 white and 7 red balls and two successive draws of 2 balls are made, we shall be finding out the probability of getting 2 white balls in the first draw and 2 red balls in the second draw; we are thus dealing with a compound event.

(ii) Independent Events : Events are said to be independent of each other if the happening of any one of them is not affected by the happening of any of the others. For example, if two cards are drawn from a well-shuffled pack of 52 cards one after the other with replacement, then getting an ace in the first draw is independent of getting a king in the second draw.

(iii) Dependent Events : Dependent events are those in which the occurrence or non-occurrence of one event in any one trial affects the probability of other events in other trials. For example, if a card is drawn from a pack of playing cards and is not replaced, this will alter the probability of the second draw. Similarly, the probability of drawing a queen from a pack of 52 cards is 4/52 or 1/13. But if the card drawn (a queen) is not replaced in the pack, the probability of drawing a queen again is 3/51 (since the pack now contains only 51 cards, of which 3 are queens).
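The arithmetic in the two card-drawing examples above can be checked with exact fractions. This is an illustrative sketch, not part of the original notes:

```python
from fractions import Fraction

# Independent events (with replacement): ace on the first draw, king on the
# second. The pack is restored, so the two draws do not affect each other.
p_ace_then_king = Fraction(4, 52) * Fraction(4, 52)

# Dependent events (without replacement): queen on the first draw, queen again.
p_first_queen = Fraction(4, 52)    # 4 queens in 52 cards -> 1/13
p_second_queen = Fraction(3, 51)   # only 3 queens left in 51 cards
p_two_queens = p_first_queen * p_second_queen

print(p_first_queen)    # 1/13
print(p_second_queen)   # 1/17
print(p_two_queens)     # 1/221
```

Using Fraction keeps the ratios exact, so 4/52 prints as the reduced form 1/13 just as in the text.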

(iv) Mutually Exclusive Events : Two or more events are called mutually exclusive if the happening of any one of them excludes the happening of all the others in the same experiment. Thus, in a single toss of a coin, we can get either only head or only tail; ‘Head’ and ‘Tail’ are therefore mutually exclusive events in this experiment. Similarly, while throwing a die with six faces, the events of getting any one face (i.e., 1, 2, 3, 4, 5 or 6) are mutually exclusive, as only one face can result from a single throw.

(v) Equally Likely Events : Events are equally likely if there is no reason for one event to occur in preference to any other event. For example, when an unbiased coin is tossed, the outcomes head and tail are equally likely.

(vi) Exhaustive Events : Exhaustive events are those events which include all possible outcomes of an experiment. Thus, in a toss of a single coin, we can get head (H) or tail (T); hence the exhaustive number of cases is 2, viz., {H, T}. If two coins are tossed, the various possibilities are HH, HT, TH, TT (where HT means head on the first coin and tail on the second, TH means tail on the first coin and head on the second, and so on). Thus, in the case of a toss of two coins, the exhaustive number of cases is 4, i.e., 2².

Similarly, in a toss of three coins the possible outcomes are HHH, HHT, HTH, THH, HTT, THT, TTH, TTT. Therefore, in the case of a toss of 3 coins the exhaustive number of cases is 8, i.e., 2³. In general, in a throw of n coins, the exhaustive number of cases is 2ⁿ. In the case of a throw of two dice the exhaustive number of cases is 6² = 36; for a throw of 3 dice it is 6³ = 216, and for n dice it is 6ⁿ.
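The counting rule above (2ⁿ exhaustive cases for n coins, 6ⁿ for n dice) can be verified by brute-force enumeration. A small sketch, not from the original notes:

```python
from itertools import product

def exhaustive_cases(faces, n):
    """Count every possible outcome of n identical trials with the given faces."""
    return len(list(product(faces, repeat=n)))

coins_3 = exhaustive_cases("HT", 3)         # should equal 2**3 = 8
dice_2 = exhaustive_cases(range(1, 7), 2)   # should equal 6**2 = 36
dice_3 = exhaustive_cases(range(1, 7), 3)   # should equal 6**3 = 216

print(coins_3, dice_2, dice_3)
```

The enumeration agrees with the formula because product generates one tuple per distinct sequence of faces.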


Different Approaches to the Theory of Probability

(I) Classical Approach (A Priori Probability)

The classical approach is the oldest method of measuring probabilities, and its origin lies in gambling games. The classical definition of probability does not require actual experimentation, i.e., no experimental data are needed for its computation, nor is it based on previous experience. It enables us to obtain probability by logical reasoning, even without conducting actual trials, and hence it is known as ‘a priori’ or mathematical probability. Under this approach, it is assumed that each outcome of an experiment is equally likely, so an equal probability is assigned to each outcome. Thus, if there are only two outcomes in a random experiment, then the probability of each outcome will be 0.5. For example, in tossing a coin there are only two outcomes, i.e., head up or tail up; the probability of getting a head up is the same as the probability of getting a tail up, and is equal to 0.5. In the case of a die rolled once, any one of the six possible outcomes, i.e., 1, 2, 3, 4, 5, 6, can occur.

The term “equally likely” conveys that each outcome of an experiment has the same chance of appearing as any other.

According to this approach, the probability is the ratio of favourable events to the total number of equally likely events.

According to Laplace, “the probability of the happening of any one of several equally likely events is the ratio of the number of cases favourable to it to the total number of possible cases”.

It is customary to describe the probability of one event as ‘p’ (success) and of the other event as ‘q’ (failure) as there is no third event.

Probability = Number of Favourable Cases / Total Number of Equally Likely Cases

Probability, therefore, may be written as a ratio. The numerator of the fraction represents the number of successful (or unsuccessful) outcomes, while the denominator represents the total number of possible outcomes.

If an event can occur in ‘m’ ways and fail to occur in ‘n’ ways, and all of these are equally likely to occur, then the probability of the event occurring is denoted by p. Such probabilities are also known as unitary, theoretical or mathematical probabilities. Here p is the probability of the event happening and q is the probability of its not happening.

p = m / (m + n)   and   q = n / (m + n)

Hence, p + q = m/(m + n) + n/(m + n) = (m + n)/(m + n) = 1

Therefore, p + q = 1, 1 − q = p, and 1 − p = q.

Probabilities can be expressed either as ratio, fraction or percentage, such as 1/2 or 0.5 or 50%.

Thus, for calculating probability, we have to find out two things : (1) Number of favourable cases. (2) Total number of equally likely cases.

For example, if a coin is tossed, there are two equally likely results, a head or a tail, hence the probability of a head is 1/2.

Similarly, if a die is thrown, the probability of obtaining an even number is 1/2 since three of the six equally possible results are even numbers.
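The classical ratio of favourable cases to equally likely cases, applied to the two examples above, can be computed directly. The helper below is an illustrative sketch, not part of the original notes:

```python
from fractions import Fraction

def classical_probability(favourable, total):
    """Classical (a priori) probability: favourable cases / equally likely cases."""
    return Fraction(favourable, total)

# A coin toss: one favourable case (head) out of two equally likely cases.
p_head = classical_probability(1, 2)

# A die throw: three of the six equally likely faces are even numbers.
even_faces = [face for face in range(1, 7) if face % 2 == 0]
p_even = classical_probability(len(even_faces), 6)

# The complement: q = 1 - p, so p + q = 1 as the text derives.
q_even = 1 - p_even

print(p_head)   # 1/2
print(p_even)   # 1/2
print(q_even)   # 1/2
```

A Fraction result can be displayed as a ratio, decimal or percentage, matching the note that 1/2, 0.5 and 50% are equivalent.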

Note : (i) For the application of the classical definition of probability, the possible outcomes of a random experiment must satisfy three criteria, viz., ‘mutually exclusive’, ‘exhaustive’ and ‘equally likely’. If any of them does not hold, this definition fails.

(ii) The numerator m does not embrace every case favourable to the event, but is restricted to only those favourable cases which are included in the list of possible outcomes of the random experiment.


Limitations of Classical Approach :

(1) This definition is confined to the problems of games of chance only and cannot explain problems other than games of chance. Using this definition, we cannot, for example, find the probability that an Indian aged 25 will die before reaching the age of 50 (such probabilities are required for fixing the premium rates in life insurance).

(2) We cannot apply this method when the total number of cases cannot be calculated.

(3) When the outcomes of a random experiment are not equally likely, this method cannot be applied. For example, if a person jumps from the top of the Qutab Minar, the probability of his survival will not be 50%, since in this case the two mutually exclusive and exhaustive outcomes, viz., survival and death, are not equally likely.

(4) In most cases it is difficult to subdivide the possible outcomes of an experiment into cases that are mutually exclusive, exhaustive and equally likely.

(II) Relative Frequency Approach or Empirical or Statistical Probability

In many situations the classical theory fails to explain the probability of an event where many alternatives are available, out of which one probability is to be determined. According to this approach, the probability of an event is determined on the basis of past experience, or on the basis of the relative frequency of success in the past. Since in the relative frequency approach the probability is obtained objectively by repetitive empirical observations, it is also known as Empirical, Statistical or Posteriori probability. For example, if a teacher is asked to indicate important questions for the coming examination, his probability is based on past experience of papers. If, out of 100 articles produced by a machine in the past, 2 were found to be defective, then the probability of a defective article is 2/100 or 2%.

According to empirical concept, the probability of an event ordinarily represents the proportion of times, under identical circumstances, the outcome can be expected to occur. The value refers to the event’s long-run frequency of occurrence. The main assumptions are :

(i) The experiments or observations are random. As there is no bias in favor of any outcome, all elements enjoy equal chance of selection.

(ii) There are a large number of observations. It is only when these two assumptions are satisfied that the relative frequency becomes stable since it is subject to the law of statistical regularity. This aspect is clearly brought out in the following definitions :

“If the experiment be repeated a large number of times under essentially identical conditions, the limiting value of the ratio of the number of times the event A happens to the total number of trials of the experiment, as the number of trials increases indefinitely, is called the probability of the occurrence of A.”

-Von Mises

“Empirical or Statistical Probability is the limit of the relative frequency of successes in an infinite sequence of trials where all trials have been performed under essentially the same conditions.”


“The empirical probability of an event is taken as the relative frequency of occurrence of the event when the number of observations is very large. The probability itself is the limit of the relative frequency as the number of observations increases indefinitely.”

-Murray R. Spiegel

“If an event has occurred m times in the way described as ‘success’ in a series of n independent trials, all made under the same essential conditions, the ratio m/n is called the relative frequency of success. The limit of m/n, as n tends to infinity, is the probability of success in a single trial.”

-Kenney and Keeping

Suppose that an event A occurs m times in N repetitions of a random experiment. Then the ratio m/N gives the relative frequency of the event A, and it will not vary appreciably from one trial to another. In the limiting case, when N becomes sufficiently large, it more or less settles to a number which is called the probability of A. Symbolically,

P(A) = lim (N → ∞) m/N

It may be noted that we can never actually obtain the probability of an event as given by the above limit. In practice, we can only try to have a close estimate of P(A) based on a large N. The Posteriori probability or Empirical probability of an event is expressed as:

p = m/N = Number of times the event occurred / Total number of items observed (i.e., the relative frequency)

Note : The empirical probability provides validity to the classical theory of probability. If an unbiased coin is tossed at random, then the classical probability gives the probability of a head as 1/2. Thus, if we toss an unbiased coin 20 times, the classical probability suggests we should have 10 heads. However, in practice, this will not generally be true. In fact, in 20 throws of a coin, we may get no heads at all, or 1 or 2 heads. However, the empirical probability suggests that if a coin is tossed a large number of times, say 500 times, we should on an average expect 50% heads and 50% tails. Thus, the empirical probability approaches the classical probability as the number of trials becomes indefinitely large.
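The note above can be illustrated by simulation. This sketch (not part of the notes) tosses a simulated fair coin and prints the relative frequency of heads as the number of tosses grows; the seed value 42 is an arbitrary choice made only so the run is reproducible:

```python
import random

random.seed(42)  # arbitrary fixed seed for a reproducible run

def relative_frequency_of_heads(tosses):
    """Empirical probability of heads: m successes out of N random tosses."""
    heads = sum(random.choice("HT") == "H" for _ in range(tosses))
    return heads / tosses

# In a short run the relative frequency can wander far from 1/2, but it
# settles near the classical value as the number of tosses grows.
for n in (20, 500, 100_000):
    print(n, relative_frequency_of_heads(n))
```

A 20-toss run may easily show 0.3 or 0.7, while the 100,000-toss run lands very close to 0.5, which is exactly the stabilising behaviour the law of statistical regularity describes.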


Limitations of Relative Frequency Theory of Probability :

(1) The experimental conditions may not remain essentially homogeneous and identical in a large number of repetitions of the experiment.

(2) The relative frequency m/N may not attain a unique value, however large N may be.

(3) The probability P(A) so defined can never be obtained in practice. We can only attempt a close estimate of P(A) by making N sufficiently large.


(III) Subjective or Personalistic Approach to Probability

The personalistic theory of probability is also known as the ‘Subjective Theory of Probability’. This theory is commonly used in business decision-making.

The subjective approach to assigning probabilities was introduced by Frank Ramsey in 1926. The concept was further developed by Bernard Koopman, I.J. Good and Leonard Savage.

“In the subjective or personal interpretation of probability, a probability is interpreted as a measure of degree of belief, or as the quantified judgement of a particular individual.”

-Winkler and Hays

Thus, the personal or subjective concept of probability measures the confidence that an individual has in the truth of a particular proposition. It is bound to vary from person to person and is, therefore, called a subjective measure of probability. This concept has a considerable role to play particularly when there are neither any a priori laws of nature to guide us, nor can experiments be repeatedly performed to establish the chance of occurrence. For example, if a lecturer wants to find out the probability of Mr. Y topping the B.Com. Examination, he may assign a value between zero and one according to his degree of belief in its possible occurrence. Based on past performance and other opinions, he may arrive at a probability figure. This concept of probability is also used during war, where the personal approach varies from individual to individual. This approach is the most flexible as compared to the other approaches, and requires careful analysis in its application.
