Random events. Classical, statistical and geometric definitions of probability.

Kendall's rank correlation coefficient; testing the corresponding hypothesis about the significance of the relationship.

2. Classical definition of probability. Properties of probability.
Probability is one of the basic concepts of probability theory. There are several definitions of this concept. Let us give a definition that is called classical. Next, we point out the weaknesses of this definition and give other definitions that make it possible to overcome the shortcomings of the classical definition.

Consider an example. Let an urn contain 6 identical, thoroughly mixed balls, 2 of them red, 3 blue and 1 white. Obviously, the possibility of drawing a colored (i.e., red or blue) ball at random from the urn is greater than the possibility of drawing a white ball. Can this possibility be characterized by a number? It turns out that it can. This number is called the probability of the event (the appearance of a colored ball). Thus, the probability is a number that characterizes the degree of possibility of the occurrence of an event.

Let us set ourselves the task of giving a quantitative estimate of the possibility that a ball taken at random is colored. The appearance of a colored ball will be considered as event A. Each possible result of the trial (the trial consists in drawing a ball from the urn) will be called an elementary outcome (elementary event). Denote the elementary outcomes by w1, w2, w3, etc. In our example, the following 6 elementary outcomes are possible: w1, a white ball has appeared; w2, w3, a red ball has appeared; w4, w5, w6, a blue ball has appeared. It is easy to see that these outcomes form a complete group of pairwise incompatible events (exactly one ball will necessarily appear), and that they are equally probable (the ball is drawn at random, the balls are identical and thoroughly mixed).

Those elementary outcomes in which the event of interest occurs will be called favorable to this event. In our example, the following 5 outcomes favor event A (the appearance of a colored ball): w2, w3, w4, w5, w6.

Thus, event A is observed if one of the elementary outcomes favoring A occurs in the trial, no matter which one; in our example, A is observed if w2, or w3, or w4, or w5, or w6 occurs. In this sense, event A is subdivided into several elementary events (w2, w3, w4, w5, w6); an elementary event is not subdivided into other events. This is the difference between event A and an elementary event (elementary outcome).

The ratio of the number of elementary outcomes favorable to event A to their total number is called the probability of the event A and is denoted by P(A). In the example under consideration there are 6 elementary outcomes, of which 5 favor event A. Therefore, the probability that the drawn ball is colored is P(A) = 5/6. This number gives the quantitative estimate of the degree of possibility of the appearance of a colored ball that we wanted to find. We now give the definition of probability.



The probability of event A is the ratio of the number of outcomes favorable to this event to the total number of all equally possible incompatible elementary outcomes that form a complete group. So the probability of event A is determined by the formula

P(A) = m / n,

where m is the number of elementary outcomes favoring A, and n is the number of all possible elementary outcomes of the trial.
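The urn example above can be checked by direct enumeration of the elementary outcomes; a minimal sketch in Python, using exact rational arithmetic:

```python
from fractions import Fraction

# The urn of the example: 6 identical balls, 2 red, 3 blue, 1 white.
outcomes = ["red", "red", "blue", "blue", "blue", "white"]

def classical_probability(event):
    """P(A) = m / n: favorable outcomes over all equally possible outcomes."""
    m = sum(1 for w in outcomes if event(w))
    return Fraction(m, len(outcomes))

# Event A: the drawn ball is colored (red or blue).
p_colored = classical_probability(lambda w: w != "white")
print(p_colored)  # 5/6
```

The same function works for any event given as a predicate on the outcomes, as long as the outcomes are equally possible.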

It is assumed here that the elementary outcomes are incompatible, equally possible, and form a complete group. The following properties follow from the definition of probability:

Property 1. The probability of a certain event is equal to one.

Indeed, if the event is certain, then every elementary outcome of the trial favors the event. In this case m = n, and therefore

P(A)=m/n=n/n=1.

Property 2. The probability of an impossible event is zero.

Indeed, if the event is impossible, then none of the elementary outcomes of the trial favors the event. In this case, m = 0, therefore,

P(A) = m/n = 0/n = 0.

Property 3. The probability of a random event is a positive number between zero and one.

Indeed, a random event is favored by only a part of the total number of elementary outcomes of the trial. In this case 0 < m < n, so 0 < m/n < 1, and therefore

0 < P(A) < 1.

So, the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1.

Remark. Modern rigorous courses in probability theory are built on a set-theoretic basis. We confine ourselves to the presentation in the language of set theory of those concepts that were considered above.

Let one and only one of the events wi (i = 1, 2, ..., n) occur as a result of a trial. The events wi are called elementary events (elementary outcomes); it already follows from this that elementary events are pairwise incompatible. The set of all elementary events that can appear in a trial is called the space of elementary events W, and the elementary events themselves are called the points of the space W.

Event A is identified with a subset (of the space W) whose elements are the elementary outcomes favoring A; event B is a subset of W whose elements are the outcomes favoring B, and so on. Thus, the set of all events that can occur in a trial is the set of all subsets of W. W itself occurs for any outcome of the trial, so W is a certain event; the empty subset of the space W is an impossible event (it does not occur for any outcome of the trial).

Note that elementary events are distinguished from all other events by the fact that each of them contains exactly one element of W.

Each elementary outcome wi is assigned a positive number pi, the probability of this outcome, with p1 + p2 + ... + pn = 1.

By definition, the probability P(A) of an event A is equal to the sum of the probabilities of the elementary outcomes favoring A. From this it is easy to obtain that the probability of a certain event is one, of an impossible event zero, and of an arbitrary event between zero and one.

Consider an important special case when all outcomes are equally likely. The number of outcomes is n, the sum of the probabilities of all outcomes is equal to one; hence the probability of each outcome is 1/n. Let event A be favored by m outcomes. The probability of event A is equal to the sum of the probabilities of outcomes favoring A:

P(A) = 1/n + 1/n + ... + 1/n.

Considering that the number of terms is equal to m, we have

P(A) = m/n.

The classical definition of probability is obtained.

The construction of a logically complete probability theory is based on the axiomatic definition of a random event and its probability. In the system of axioms proposed by A. N. Kolmogorov, the elementary event and probability are indefinable concepts. Here are the axioms that define the probability:

1. Each event A is assigned a non-negative real number P(A). This number is called the probability of event A.

2. The probability of a certain event is equal to one: P(W) = 1.

3. The probability of occurrence of at least one of the pairwise incompatible events is equal to the sum of the probabilities of these events.

Based on these axioms, the properties of probabilities and the relationships between them are derived as theorems.

3. Statistical definition of probability. Relative frequency.

The classical definition does not require an experiment. Real applied problems, however, may have an infinite number of outcomes, or outcomes that are not equally possible, and the classical definition then cannot give an answer. In such problems we use the statistical definition of probability, which is computed after an experiment has actually been performed.

The statistical probability w(A), or relative frequency, is the ratio of the number of trials in which the given event occurred to the total number of actually conducted trials:

w(A) = m / n.

The relative frequency of an event has the property of stability:

lim(n→∞) P(|m/n − p| < ε) = 1 (the stability property of the relative frequency).
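The stability property can be seen in a simulation; a sketch reusing the urn of the earlier example (2 red, 3 blue, 1 white, so the true probability of drawing a colored ball is 5/6):

```python
import random

random.seed(42)

# Relative frequency w(A) = m/n of the event A = "a colored ball is drawn"
# when the draw is repeated `trials` times.
def relative_frequency(trials):
    m = 0
    for _ in range(trials):
        ball = random.choice("rrbbbw")  # 2 red, 3 blue, 1 white
        if ball != "w":
            m += 1
    return m / trials

# As n grows, w(A) fluctuates less and less around p = 5/6 ≈ 0.8333.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

The printed frequencies approach 5/6 as the number of trials increases, which is exactly the stability property stated above.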

4. Geometric probabilities.

In the geometric approach to defining probability, an arbitrary set of finite Lebesgue measure on the line, in the plane, or in space is taken as the space of elementary events. Events are all possible measurable subsets of this set.

The probability of event A is determined by the formula

P(A) = λ(A) / λ(Ω),

where λ denotes the Lebesgue measure and Ω is the space of elementary events. With this definition of events and probabilities, all of A. N. Kolmogorov's axioms are fulfilled.

In specific problems reduced to this probabilistic scheme, the trial is interpreted as a random selection of a point in some region, and the event A as the hit of the chosen point in some subregion A of that region. All points of the region must then have the same chance of being selected; this requirement is usually expressed by the words "at random", "randomly", etc.
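A standard illustration of this scheme (a sketch; the quarter-disk event is an assumed example, not from the text): a point is chosen at random in the unit square, and the probability of hitting a subregion is the ratio of areas.

```python
import random

random.seed(0)

# A point is chosen "at random" in the unit square Ω = [0, 1] x [0, 1].
# Event A: the point falls in the quarter-disk x^2 + y^2 <= 1.
# Geometric definition: P(A) = area(A) / area(Ω) = (pi/4) / 1 ≈ 0.7854.
n = 1_000_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1)
p_estimate = hits / n
print(p_estimate, "vs", 3.141592653589793 / 4)
```

The empirical frequency of the hit converges to the area ratio, which is the content of the geometric definition.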

The randomness of the occurrence of events is associated with the impossibility of predicting in advance the outcome of a particular trial. However, if we consider, for example, the trial of repeatedly tossing a coin, with outcomes ω1, ω2, ..., ωn, it turns out that heads occurs in approximately half of the outcomes (n/2): a regularity appears that corresponds to the concept of probability.

By the probability of an event A is understood some numerical characteristic of the possibility of the occurrence of the event A. We denote this numerical characteristic by P(A). There are several approaches to defining probability; the main ones are the statistical, the classical and the geometric.

Let n trials be performed, and let some event A occur nA times. The number nA is called the absolute frequency (or simply the frequency) of the event A, and the ratio w(A) = nA / n is called the relative frequency of occurrence of the event A. The relative frequency of any event satisfies 0 ≤ w(A) ≤ 1; the relative frequency of a certain event is 1, and that of an impossible event is 0.

The basis for applying the methods of probability theory to the study of real processes is the objective existence of random events that have the property of frequency stability. Numerous trials of the event under study A show that for large n the relative frequency w(A) remains approximately constant.

The statistical definition of probability consists in taking as the probability of an event A the constant value p(A) around which the values of the relative frequencies w(A) fluctuate as the number of trials n increases without bound.

Remark 1. Note that the limits of variation of the probability of a random event, from zero to one, were chosen by B. Pascal for the convenience of its calculation and application. In correspondence with P. Fermat, Pascal pointed out that any interval could have been chosen instead, for example from zero to one hundred. In the problems below in this tutorial, probabilities are sometimes given as percentages, i.e., from zero to one hundred. In this case the percentages given in the problems must be converted into fractions, i.e., divided by 100.

Example 1. Ten series of coin tosses were conducted, with 1000 tosses in each. The value of w(A) in each of the series is 0.501; 0.485; 0.509; 0.536; 0.485; 0.488; 0.500; 0.497; 0.494; 0.484. These frequencies cluster around P(A) = 0.5.

This example confirms that the relative frequency w(A) is approximately equal to P(A), i.e., w(A) ≈ P(A).

Classical and statistical definitions of probability. Geometric probability.

The basic concept of probability theory is the concept of a random event. A random event is an event that, under certain conditions, may or may not occur. For example, hitting or missing an object when firing at this object with a given weapon is a random event.

An event is called certain if, as a result of the test, it necessarily occurs. An impossible event is an event that, as a result of a test, cannot occur.

Random events are said to be incompatible in a given trial if no two of them can occur together.

Random events form a complete group if in each trial at least one of them necessarily occurs, and no event incompatible with all of them can occur.

Consider the complete group of equally possible incompatible random events. Such events will be called outcomes. An outcome is called favorable to the occurrence of event A if the occurrence of this outcome entails the occurrence of event A.

The probability of an event A is the ratio of the number m of outcomes favorable to this event to the total number n of all equally possible incompatible elementary outcomes that form a complete group: P(A) = m / n.

Geometric probability is one way of specifying probabilities. Let Ω be a bounded set of Euclidean space with volume λ(Ω) (length or area in the one- or two-dimensional case, respectively), let ω be a point taken at random from Ω, and let the probability that the point falls in a subset A ⊂ Ω be proportional to its volume λ(A). Then the geometric probability of the subset is defined as the ratio of volumes: P(A) = λ(A) / λ(Ω). The geometric definition of probability is often used in Monte Carlo methods, for example, to approximate the values of multiple definite integrals.
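As a sketch of the Monte Carlo use mentioned above (the integrand here is an assumed example): a double integral over the unit square equals λ(Ω) times the expected value of the integrand at a uniformly random point, so the sample mean over random points estimates the integral.

```python
import math
import random

random.seed(1)

# Monte Carlo estimate of the double integral of f over the unit square:
# since λ(Ω) = 1, the integral equals E f(ω) for a uniform point ω,
# which is estimated by the sample mean of f at random points.
def mc_integral(f, n):
    return sum(f(random.random(), random.random()) for _ in range(n)) / n

f = lambda x, y: math.exp(-(x * x + y * y))
estimate = mc_integral(f, 200_000)
print(estimate)  # close to (integral of exp(-x^2) on [0,1])^2 ≈ 0.5577
```

The accuracy improves like 1/sqrt(n), independently of the dimension, which is why the geometric definition is useful for multiple integrals.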

Theorems of addition and multiplication of probabilities


The sum of two events A and B is the event C, which consists in the occurrence of at least one of the events A or B.

Addition theorem

The probability of the sum of two incompatible events is equal to the sum of the probabilities of these events:

P (A + B) = P (A) + P (B).

In the case when events A and B are joint, the probability of their sum is expressed by the formula

P(A + B) = P(A) + P(B) − P(AB),

where AB is the product of events A and B.
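Both forms of the addition theorem can be verified by enumeration; a minimal sketch using one roll of a fair die (the events A and B here are assumed for illustration):

```python
from fractions import Fraction

# One roll of a fair die: Ω = {1, ..., 6}, each outcome equally possible.
omega = list(range(1, 7))

def P(event):
    """Classical probability of an event given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w % 2 == 0   # "an even number appears"
B = lambda w: w > 3        # "a number greater than 3 appears"

p_sum = P(lambda w: A(w) or B(w))    # P(A + B)
p_prod = P(lambda w: A(w) and B(w))  # P(AB)

# Addition theorem for joint events: P(A + B) = P(A) + P(B) - P(AB).
assert p_sum == P(A) + P(B) - p_prod
print(p_sum)  # 2/3
```

Here A and B are joint (both occur for 4 and 6), so the correction term P(AB) = 1/3 is essential.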

Two events are said to be dependent if the probability of one of them depends on the occurrence or non-occurrence of the other. In the case of dependent events, the concept of the conditional probability of an event is introduced.

The conditional probability P(A/B) of event A is the probability of event A calculated assuming that event B has occurred. Similarly, P(B/A) denotes the conditional probability of an event B, provided that the event A has occurred.

The product of two events A and B is the event C, which consists in the joint occurrence of the event A and the event B.

Probability multiplication theorem

The probability of the product of two events is equal to the probability of one of them multiplied by the conditional probability of the other given the first:

P(AB) = P(A) P(B/A), or P(AB) = P(B) P(A/B).

Consequence. The probability of the joint occurrence of two independent events A and B is equal to the product of the probabilities of these events:

P(AB) = P(A) P(B).

Consequence. In the case of n identical independent trials, in each of which event A appears with probability p, the probability that event A occurs at least once is equal to 1 − (1 − p)^n.

The probability that at least one event will occur. Example. Bayes formula.

The probability of making at least one mistake on a notebook page is p=0.1. There are 7 written pages in the notebook. What is the probability P that there is at least one error in the notebook?

The probability of the occurrence of an event A consisting of events A1, A2, ..., An, independent in the aggregate, is equal to the difference between unity and the product of the probabilities of the opposite events Ā1, Ā2, ..., Ān:

P(A) = 1 - q1q2…qn

The probability of the opposite event is q = 1 − p.

In particular, if all events have the same probability equal to p, then the probability of the occurrence of at least one of these events is equal to:

P(A) = 1 − q^n = 1 − (1 − p)^n = 1 − (1 − 0.1)^7 ≈ 0.522.

Answer: 0.522
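The answer can be checked both by the formula and by simulation (the notebook model in the simulation is, of course, only a sketch):

```python
import random

# Verify the answer: p = 0.1 per page, n = 7 pages,
# P(at least one error) = 1 - (1 - p)^n.
p, n = 0.1, 7
prob = 1 - (1 - p) ** n
print(round(prob, 3))  # 0.522

# Cross-check by simulation: each page independently contains an error
# with probability p; count notebooks with at least one erroneous page.
random.seed(2)
trials = 100_000
hits = sum(any(random.random() < p for _ in range(n)) for _ in range(trials))
print(hits / trials)  # close to the theoretical 0.522
```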

Bayes formula.

Suppose an experiment is carried out, and about the conditions of its conduct n mutually exclusive and exhaustive hypotheses H1, H2, ..., Hn can be stated, with probabilities P(H1), ..., P(Hn). Suppose the event A may or may not occur as a result of the experiment, and that the conditional probabilities P(A | Hi) of the event under each hypothesis are known. What are the probabilities of the hypotheses if it became known that the event A has occurred? In other words, we are interested in the conditional probabilities P(Hi | A). By the definition of conditional probability, P(Hi | A) = P(Hi) P(A | Hi) / P(A), and by the total probability formula P(A) = P(H1) P(A | H1) + ... + P(Hn) P(A | Hn); hence

P(Hi | A) = P(Hi) P(A | Hi) / (P(H1) P(A | H1) + ... + P(Hn) P(A | Hn)).

This formula is called the Bayes formula.
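A minimal sketch of the Bayes formula on hypothetical data (the machines and defect rates below are assumed, not from the text): three machines produce 50%, 30% and 20% of all parts with defect rates 1%, 2% and 3%; a part turns out to be defective (event A), and we update the probabilities of the hypotheses "the part came from machine i".

```python
# P(H_i): prior probabilities of the hypotheses (shares of production).
priors = [0.5, 0.3, 0.2]
# P(A | H_i): probability of a defect for each machine.
likelihoods = [0.01, 0.02, 0.03]

# Total probability formula: P(A) = sum of P(H_i) * P(A | H_i).
total = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes formula: P(H_i | A) = P(H_i) * P(A | H_i) / P(A).
posteriors = [p * l / total for p, l in zip(priors, likelihoods)]
print(posteriors)  # posteriors sum to 1; machine 1 is no longer the most likely
```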

6. Bernoulli formula. Examples.

The Bernoulli formula is a formula in probability theory that gives the probability that an event A occurs a specified number of times in a series of independent trials. It allows one to avoid a large number of calculations (additions and multiplications of probabilities) when the number of trials is sufficiently large. It is named after the outstanding Swiss mathematician Jacob Bernoulli, who derived it.

Wording

Theorem. If the probability p of the occurrence of the event A in each trial is constant, then the probability Pn(k) that the event A occurs exactly k times in n independent trials is equal to

Pn(k) = C(n, k) p^k q^(n−k), where q = 1 − p and C(n, k) = n! / (k! (n − k)!).

Proof

Since the independent trials are carried out under the same conditions, the event A occurs in each trial with the same probability p, and therefore the opposite event occurs with probability q = 1 − p. Let the event occur k times in n trials; then the remaining n − k times it does not occur. The event can appear k times in n trials in various combinations, the number of which equals the number of combinations of n elements taken k at a time, C(n, k) = n! / (k! (n − k)!). The probability of each such combination, by the multiplication theorem for independent events, is p^k q^(n−k). Applying the addition theorem for the probabilities of incompatible events, we obtain the final Bernoulli formula: Pn(k) = C(n, k) p^k q^(n−k).
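The proof's counting argument can be checked mechanically: enumerate all 0/1 outcome sequences of n trials and compare the summed probabilities with the formula. A sketch with n = 5 and p = 1/3 (assumed values), in exact arithmetic:

```python
from fractions import Fraction
from itertools import product
from math import comb

# Bernoulli formula P_n(k) = C(n, k) * p^k * q^(n-k).
def bernoulli(n, k, p):
    q = 1 - p
    return comb(n, k) * p ** k * q ** (n - k)

n, p = 5, Fraction(1, 3)
for k in range(n + 1):
    # Direct enumeration: sum the probabilities of all outcome sequences
    # (1 = event occurred, 0 = did not) containing exactly k ones.
    direct = sum(p ** seq.count(1) * (1 - p) ** seq.count(0)
                 for seq in product([0, 1], repeat=n) if seq.count(1) == k)
    assert bernoulli(n, k, p) == direct

print(bernoulli(5, 2, Fraction(1, 3)))  # 80/243
```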

Local and integral theorems of Laplace. Examples.

Local and integral Laplace theorems

Local Laplace theorem. The probability that in n independent trials, in each of which the probability of occurrence of an event is p (0 < p < 1), the event occurs exactly k times (in whatever order) is approximately (the larger n, the more accurate)

Pn(k) ≈ φ(x) / √(npq), where x = (k − np) / √(npq), q = 1 − p, φ(x) = e^(−x²/2) / √(2π).

To determine the values of φ(x), a special table can be used.

Integral Laplace theorem. The probability that in n independent trials, in each of which the probability of occurrence of an event is p (0 < p < 1), the event occurs at least k1 times and at most k2 times is approximately

P(k1; k2) ≈ Φ(x″) − Φ(x′),

where Φ(x) is the Laplace function, x′ = (k1 − np) / √(npq) and x″ = (k2 − np) / √(npq). The values of the Laplace function are found from a special table.

Example. Find the probability that event A occurs exactly 70 times in 243 trials if the probability of this event occurring in each trial is 0.25.

Solution. By the condition, n = 243, k = 70, p = 0.25, q = 0.75. Since n = 243 is a fairly large number, we use the local Laplace theorem, where x = (k − np) / √(npq).

We find x = (70 − 243·0.25) / √(243·0.25·0.75) = 9.25 / 6.75 ≈ 1.37. From the table we find φ(1.37) = 0.1561. The desired probability is

P243(70) ≈ (1/6.75)·0.1561 ≈ 0.0231.
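The computation can be reproduced and compared with the exact binomial probability (a sketch; φ here is the standard normal density, as in the theorem):

```python
import math

# Local Laplace approximation for the example: n = 243, k = 70, p = 0.25.
n, k, p = 243, 70, 0.25
q = 1 - p
x = (k - n * p) / math.sqrt(n * p * q)                 # ≈ 1.37
phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)    # φ(x) ≈ 0.1561
approx = phi / math.sqrt(n * p * q)                    # ≈ 0.0231

# Exact probability by the Bernoulli formula, for comparison.
exact = math.comb(n, k) * p ** k * q ** (n - k)
print(round(x, 2), round(approx, 4), exact)
```

The approximation agrees with the exact binomial value to within a fraction of a percent of probability, as the theorem's "the larger n, the more accurate" clause suggests.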

Numerical characteristics of discrete random variables. Examples.

Numerical characteristics of discrete random variables

The distribution law fully characterizes the random variable. However, when it is impossible to find the distribution law, or when this is not required, one can limit oneself to finding the values called the numerical characteristics of a random variable. These quantities determine some average value around which the values of the random variable are grouped, and the degree of their dispersion around this average value.

Definition. The mathematical expectation of a discrete random variable X is the sum of the products of all its possible values and their probabilities: M(X) = x1 p1 + x2 p2 + ... = Σ xi pi.

The mathematical expectation exists if the series on the right side of the equality converges absolutely.

From the point of view of probability, we can say that the mathematical expectation is approximately equal to the arithmetic mean of the observed values of the random variable.
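A minimal sketch of the definition (the fair die here is an assumed example), in exact arithmetic:

```python
from fractions import Fraction

# M(X) = x1*p1 + x2*p2 + ... for a discrete random variable;
# here X is the number of points shown by a fair die.
values = range(1, 7)
probs = [Fraction(1, 6)] * 6
m_x = sum(x * p for x, p in zip(values, probs))
print(m_x)  # 7/2
```

The value 7/2 = 3.5 is also what the arithmetic mean of many observed rolls approaches, in line with the remark above.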

Theoretical moments. Examples.

The idea of ​​this method is to equate theoretical and empirical moments. Therefore, we begin by discussing these concepts.

Let X1, ..., Xn be an independent sample from a distribution depending on an unknown parameter θ. The theoretical moment of the k-th order is the function mk(θ) = E X^k, where X is a random variable with the given distribution function. We especially note that the theoretical moment is a function of the unknown parameters as long as the distribution depends on these parameters. We will assume that the corresponding mathematical expectations exist. The empirical moment of the k-th order is defined as m̂k = (1/n)(X1^k + X2^k + ... + Xn^k). Note that, by definition, the empirical moments are functions of the sample; in particular, m̂1 is the well-known sample mean.

In order to find estimates of unknown parameters by the method of moments, one should:

1) explicitly compute the theoretical moments and construct the system of equations mk(θ1, ..., θs) = m̂k, k = 1, ..., s, for the unknown parameters; in this system the empirical moments m̂k are treated as fixed numbers;

2) solve this system with respect to the parameters. Since the right-hand side of the system depends on the sample, the result will be functions of X1, ..., Xn. These are the desired method-of-moments estimates of the parameters.
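A one-parameter sketch of the recipe (the exponential distribution here is an assumed example): the theoretical first moment is E X = 1/λ, the empirical first moment is the sample mean, and equating them gives the estimate λ̂ = 1 / mean.

```python
import random

random.seed(3)

# Method of moments for the exponential distribution:
# m1(lam) = E X = 1/lam, so lam_hat solves 1/lam_hat = sample mean.
true_lam = 2.0
sample = [random.expovariate(true_lam) for _ in range(100_000)]
mean = sum(sample) / len(sample)   # empirical first moment
lam_hat = 1 / mean                 # method-of-moments estimate
print(lam_hat)  # close to the true value 2.0
```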

12. Chebyshev's inequality. The law of large numbers.

The Chebyshev inequality, also known as the Bienaymé-Chebyshev inequality, is a common inequality in measure theory and probability theory. It was first obtained by Bienaymé in 1853, and later also by Chebyshev. The inequality used in measure theory is more general; in probability theory, its corollary is used.

Chebyshev's inequality in measure theory

The Chebyshev inequality in measure theory describes the relationship between the Lebesgue integral and the measure. An analogue of this inequality in probability theory is the Markov inequality. The Chebyshev inequality is also used to prove the embedding of the space L^p into the weak L^p space.

Wording

Let (X, Σ, μ) be a measure space, and let f be a function summable (integrable) on X. Then for every t > 0 the inequality holds:

μ({x ∈ X : |f(x)| ≥ t}) ≤ (1/t) ∫ |f| dμ.

More generally: if g is a non-negative real measurable function that is non-decreasing on the range of f, then

μ({x ∈ X : f(x) ≥ t}) ≤ (1/g(t)) ∫ (g ∘ f) dμ.

In terms of the space L^p: if f ∈ L^p, then μ({x : |f(x)| ≥ t}) ≤ (‖f‖_p / t)^p.

Chebyshev's inequality in probability theory

Chebyshev's inequality in probability theory states that a random variable basically takes values ​​close to its mean. More precisely, it gives an estimate of the probability that a random variable will take on a value that is far from its mean. Chebyshev's inequality is a consequence of Markov's inequality.

Wording

Let a random variable X be defined on a probability space, and let its mathematical expectation M(X) and variance D(X) be finite. Then for any ε > 0

P(|X − M(X)| ≥ ε) ≤ D(X) / ε².

If ε = kσ, where σ = √D(X) is the standard deviation and k > 0, we get

P(|X − M(X)| ≥ kσ) ≤ 1 / k².

In particular, a random variable with finite variance deviates from its mean by more than k standard deviations with probability at most 1/k²; for example, it deviates by 3 or more standard deviations with probability at most 1/9.
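The bound can be checked empirically; a sketch with X = the sum of 10 fair dice (an assumed example, with M(X) = 35 and D(X) = 10·35/12):

```python
import random

random.seed(4)

# Empirical check of Chebyshev's inequality P(|X - M(X)| >= k*sigma) <= 1/k^2
# for X = sum of 10 fair dice: M(X) = 35, D(X) = 10 * 35/12.
ex = 35.0
sigma = (10 * 35 / 12) ** 0.5
k = 2.0
n = 100_000
count = 0
for _ in range(n):
    x = sum(random.randint(1, 6) for _ in range(10))
    if abs(x - ex) >= k * sigma:
        count += 1
print(count / n, "<=", 1 / k ** 2)
```

The observed frequency is well below the Chebyshev bound 1/k² = 0.25, which illustrates that the inequality is universal but usually far from tight.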

Law of Large Numbers

The basic concepts of probability theory are the concepts of a random event and a random variable. At the same time, it is impossible to predict in advance the result of the test, in which one or another event or some specific value of a random variable may or may not appear, since the outcome of the test depends on many random causes that cannot be taken into account.

However, with repeated repetition of tests, regularities are observed that are characteristic of mass random phenomena. These patterns have the property of stability. The essence of this property is that the specific features of each individual random phenomenon have almost no effect on the average result of a large number of similar phenomena, and the characteristics of random events and random variables observed in tests, with an unlimited increase in the number of tests, become practically non-random.

Let a large series of experiments of the same type be carried out. The outcome of each individual experiment is random and indeterminate. However, despite this, the average result of the entire series of experiments loses its random character and becomes regular.

For practice, it is very important to know the conditions under which the cumulative action of very many random causes leads to a result that is almost independent of chance, since it makes it possible to foresee the course of phenomena. These conditions are indicated in the theorems bearing the general name of the law of large numbers.

The law of large numbers should not be understood as any one general law associated with large numbers. The law of large numbers is a generalized name for several theorems, from which it follows that with an unlimited increase in the number of trials, the average values tend to some constants.

These include the Chebyshev and Bernoulli theorems. Chebyshev's theorem is the most general law of large numbers, Bernoulli's theorem is the simplest.

The basis of the proof of the theorems united by the term "law of large numbers" is Chebyshev's inequality, which bounds the probability that a random variable deviates from its mathematical expectation: P(|X − M(X)| ≥ ε) ≤ D(X) / ε².

Mathematical formulation

It is necessary to determine the maximum of the linear objective function (linear form)

f = c1x1 + c2x2 + ... + cnxn

under the conditions aj1x1 + aj2x2 + ... + ajnxn ≤ bj (j = 1, ..., m), xi ≥ 0. Sometimes a set of restrictions in the form of equalities is also imposed, but one can get rid of them by sequentially expressing one variable through the others and substituting it into all the remaining equalities and inequalities (as well as into the function f). Such a problem is called "basic" or "standard" in linear programming.

Geometric method for solving linear programming problems for two variables. Example.

The solution region of a linear inequality in two variables is a half-plane. To determine which of the two half-planes corresponds to a given inequality, bring it to the form x2 ≥ (−a0 − a1x1)/a2 or x2 ≤ (−a0 − a1x1)/a2; the desired half-plane in the first case lies above the line a0 + a1x1 + a2x2 = 0, and in the second case below it. If a2 = 0, the inequality takes the form a0 + a1x1 ≥ 0 (or ≤ 0); in this case we get either the right half-plane or the left half-plane.

The solution domain of the system of inequalities is the intersection of a finite number of half-planes described by each individual inequality. This intersection is a polygonal region G. It can be both bounded and unbounded, and even empty (if the system of inequalities is inconsistent).
Fig. 2

The solution domain G has the important property of convexity. A region is called convex if any two of its points can be connected by a segment that belongs entirely to this region. Fig. 2 shows a convex region G1 and a non-convex region G2. In the region G1, two arbitrary points A1 and B1 can be connected by a segment all of whose points belong to G1. In the region G2, one can choose two points A2 and B2 such that not all points of the segment A2B2 belong to G2.

A reference line is a line that has at least one common point with the region, while the entire region lies on one side of this line. Fig. 2 shows two reference lines l1 and l2; in this case the lines pass through a vertex of the polygon and through one of its sides, respectively.

Similarly, one can give a geometric interpretation of a system of inequalities with three variables. In this case, each inequality describes a half-space, and the whole system describes the intersection of half-spaces, i.e., a polyhedron, which also has the convexity property. Here, the reference plane passes through a vertex, edge, or face of the polyhedral region.

Based on the concepts introduced, we consider a geometric method for solving a linear programming problem. Let a linear objective function f = c0 + c1x1 + c2x2 of two independent variables be given, as well as some joint system of linear inequalities describing the domain of solutions G. Among the feasible solutions, it is required to find one for which the linear objective function f takes the smallest value.

Let the function f be equal to some constant value C: f = c0 + c1x1 + c2x2 = C. This value is attained at the points of the line satisfying this equation. If this line is translated parallel to itself in the positive direction of the normal vector n(c1, c2), the linear function f will increase, and when it is translated in the opposite direction, it will decrease.

Suppose that the straight line f = C, under parallel translation in the positive direction of the vector n, first meets the region of feasible solutions G at some vertex; at this moment the value of the objective function equals C1 and the line becomes a reference line. Then C1 is the minimal value, since further movement of the line in the same direction would only increase f.

Thus, the optimization of the linear objective function on the polygon of feasible solutions occurs at the points of intersection of this polygon with the support lines corresponding to the given objective function. In this case, the intersection can be at one point (at the vertex of the polygon) or at an infinite number of points (on the edge of the polygon).
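Since the optimum is attained at a vertex of the polygon G, a small two-variable problem can be solved by enumerating the vertices directly. A sketch on hypothetical data (the constraints and objective below are assumed, not from the text): each boundary line comes from a constraint written as a·x1 + b·x2 ≤ c, the vertices are the feasible pairwise intersections of boundary lines, and f is evaluated at each vertex.

```python
from itertools import combinations

# Hypothetical problem: minimize f = 2*x1 + 3*x2 subject to
#   x1 + x2 >= 2,  x1 <= 3,  x2 <= 3,  x1 >= 0,  x2 >= 0,
# each written in the form a*x1 + b*x2 <= c.
constraints = [
    (-1.0, -1.0, -2.0),  # x1 + x2 >= 2
    (1.0, 0.0, 3.0),     # x1 <= 3
    (0.0, 1.0, 3.0),     # x2 <= 3
    (-1.0, 0.0, 0.0),    # x1 >= 0
    (0.0, -1.0, 0.0),    # x2 >= 0
]

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in constraints)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines, no intersection point
    x1 = (c1 * b2 - c2 * b1) / det
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

f = lambda x1, x2: 2 * x1 + 3 * x2
best = min(vertices, key=lambda v: f(*v))
print(best, f(*best))  # optimum at the vertex (2, 0), f = 4
```

This brute-force enumeration is exactly the geometric picture above: the reference line corresponding to f touches the polygon at the vertex (2, 0).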

Simplex method algorithm for solving a general linear programming problem. Table.

Solution algorithms

The most famous method, widely used in practice for solving the general linear programming (LP) problem, is the simplex method. Although the simplex method is a fairly efficient algorithm that has shown good results in solving applied LP problems, it has exponential worst-case complexity. The reason for this is the combinatorial nature of the simplex method, which successively enumerates the vertices of the polyhedron of feasible solutions while searching for the optimal solution.

The first polynomial algorithm, the ellipsoid method, was proposed in 1979 by the Soviet mathematician L. Khachiyan, thereby settling a problem that had long remained open. The ellipsoid method has a completely different, non-combinatorial nature than the simplex method. In computational terms, however, it turned out to be unpromising. Nevertheless, the very fact that the problem has polynomial complexity led to the creation of a whole class of efficient LP algorithms, the interior point methods, the first of which was N. Karmarkar's algorithm, proposed in 1984. Algorithms of this type use a continuous interpretation of the LP problem: instead of enumerating the vertices of the polytope of feasible solutions, they follow trajectories in the space of the problem's variables that do not pass through the vertices of the polytope. The interior point method, which, unlike the simplex method, moves through points in the interior of the feasible region, uses the logarithmic barrier function methods of non-linear programming developed in the 1960s by Fiacco and McCormick.

24. Special cases in the simplex method: degenerate solution, infinite set of solutions, no solution. Examples.

Using the artificial basis method to solve a general linear programming problem. Example.

Artificial basis method.

The artificial basis method is used to find an acceptable basic solution to a linear programming problem when there are equality-type constraints in the condition. Consider the problem:

max { F(x) = Σ ci xi | Σ aji xi = bj, j = 1, ..., m; xi ≥ 0 }.

The so-called "artificial variables" Rj are introduced into the constraints and into the goal function as follows:

Σ aji xi + Rj = bj, j = 1, ..., m;   F(x) = Σ ci xi − M Σ Rj.

When artificial variables are introduced in the artificial basis method, a sufficiently large coefficient M is attached to them in the objective function; it has the meaning of a penalty for the use of artificial variables. In a maximization problem the artificial variables enter the goal function with the coefficient −M, and in a minimization problem with the coefficient +M. The introduction of artificial variables is admissible only if they all vanish in the course of solving the problem.

The simplex table compiled during the solution process using the artificial basis method is called extended. It differs from the usual one in that it contains two rows for the goal function: one for the component F = Σ ci xi and the other for the component M Σ Rj. Let us consider the solution procedure on a specific example.

Example 1. Find the maximum of the function F(x) = -x1 + 2x2 - x3 under the constraints

2x1 + 3x2 + x3 = 3;

x1 + 3x3 = 2;

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

We apply the artificial basis method: introduce artificial variables into the constraints of the problem:

2x1 + 3x2 + x3 + R1 = 3;

x1 + 3x3 + R2 = 2.

The objective function becomes F(x) - M∑Rj = -x1 + 2x2 - x3 - M(R1 + R2).

Let's express the sum R1 + R2 from the constraint system: R1 + R2 = 5 - 3x1 - 3x2 - 4x3, then F(x) = -x1 + 2x2 - x3 - M(5 - 3x1 - 3x2 - 4x3) .

When compiling the first simplex table (Table 1), we take the initial variables x1, x2, x3 as non-basic and the introduced artificial variables as basic. In maximization problems the sign of the coefficients of the non-basic variables in the F- and M-rows is reversed; the sign of the constant term in the M-row does not change. Optimization is carried out first along the M-row. The choice of the leading column and row and all simplex transformations in the artificial basis method are performed exactly as in the usual simplex method. The negative coefficient of largest absolute value (-4) determines the leading column and the variable x3, which will enter the basis. The minimum simplex ratio (2/3) corresponds to the second row of the table, so the variable R2 must be removed from the basis. The leading element is outlined.
In the artificial basis method, artificial variables removed from the basis are never returned to it, so the columns of such variables are dropped; Table 2 is therefore one column narrower. Recalculating this table, we pass to Table 3, in which the M-row has become zero and can be removed. After all artificial variables have been excluded from the basis, we obtain a feasible basic solution of the original problem, which in the example under consideration is optimal:

x1 = 0; x2 = 7/9; x3 = 2/3; Fmax = 8/9.
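The computation above can be cross-checked in code. The following is a minimal, illustrative Python sketch of the Big-M tableau simplex (not the extended two-row table used in the text: here M is simply a large numeric penalty, and degenerate or unbounded cases are handled only crudely):

```python
# A minimal sketch of the artificial basis (Big-M) simplex method for
#   max c.x  subject to  A x = b (b >= 0), x >= 0.
# Didactic only: M is a large numeric penalty, not a symbolic one.

def big_m_simplex(c, A, b, M=1e6, eps=1e-9):
    m, n = len(A), len(c)
    cost = list(c) + [-M] * m           # artificials penalized by -M
    # Extended tableau [A | I | b]; the artificials form the first basis.
    T = [list(map(float, A[i])) + [1.0 if k == i else 0.0 for k in range(m)]
         + [float(b[i])] for i in range(m)]
    basis = [n + i for i in range(m)]
    while True:
        cb = [cost[j] for j in basis]
        # Reduced costs z_j - c_j for every column.
        red = [sum(cb[i] * T[i][j] for i in range(m)) - cost[j]
               for j in range(n + m)]
        j = min(range(n + m), key=lambda k: red[k])
        if red[j] >= -eps:              # all z_j - c_j >= 0: optimum found
            break
        rows = [i for i in range(m) if T[i][j] > eps]
        if not rows:
            raise ValueError("objective is unbounded")
        r = min(rows, key=lambda i: T[i][-1] / T[i][j])   # ratio test
        p = T[r][j]
        T[r] = [v / p for v in T[r]]    # pivot on element (r, j)
        for i in range(m):
            if i != r and abs(T[i][j]) > eps:
                f = T[i][j]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[r])]
        basis[r] = j
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, sum(ci * xi for ci, xi in zip(c, x))

# Example 1: max F = -x1 + 2x2 - x3,  2x1 + 3x2 + x3 = 3,  x1 + 3x3 = 2.
x, F = big_m_simplex([-1, 2, -1], [[2, 3, 1], [1, 0, 3]], [3, 2])
print(x, F)   # x ~ (0, 7/9, 2/3), F ~ 8/9
```

On this example the sketch reproduces the optimum found in the extended simplex table.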

If the solution is not yet optimal when the M-row is eliminated, the optimization procedure continues by the usual simplex method. Consider an example containing constraints of all types: ≤, =, ≥.

Dual symmetric problems of linear programming. Example.

Dual problem definition

Each linear programming problem can be associated in a certain way with another linear programming problem, called the dual (or conjugate) problem with respect to the original, or primal, problem. Let us define the dual problem with respect to the general linear programming problem, which, as we already know, consists in finding the maximum value of the function

F = c1x1 + c2x2 + … + cnxn (32)

under the conditions

ai1x1 + ai2x2 + … + ainxn ≤ bi (i = 1, …, k),
ai1x1 + ai2x2 + … + ainxn = bi (i = k + 1, …, m), (33)

xj ≥ 0 (j = 1, …, l; l ≤ n). (34)

The problem consisting in finding the minimum value of the function

G = b1y1 + b2y2 + … + bmym (35)

under the conditions

a1jy1 + a2jy2 + … + amjym ≥ cj (j = 1, …, l),
a1jy1 + a2jy2 + … + amjym = cj (j = l + 1, …, n), (36)

yi ≥ 0 (i = 1, …, k) (37)

is called dual with respect to problem (32)–(34). Problems (32)–(34) and (35)–(37) form a pair of problems, called in linear programming a dual pair. Comparing the two formulated problems, we see that the dual problem is composed according to the following rules:

1. The objective function of the original problem (32) - (34) is set to the maximum, and the objective function of the dual (35) - (37) is set to the minimum.

2. Matrix composed of coefficients for unknown constraints in the system (33) of the original problem (32) – (34), and a similar matrix in the dual problem (35) - (37) are obtained from each other by transposition (i.e., replacing rows with columns and columns with rows).

3. The number of variables in the dual problem (35) - (37) is equal to the number of constraints in the system (33) of the original problem (32) - (34), and the number of constraints in the system (36) of the dual problem is equal to the number of variables in the original problem.

4. The coefficients of the unknowns in the objective function (35) of the dual problem (35) - (37) are free terms in the system (33) of the original problem (32) - (34), and the right parts in the relations of the system (36) of the dual problem are coefficients for unknowns in the objective function (32) of the original problem.

5. If the variable xj of the original problem (32)–(34) can take only non-negative values, then the j-th condition in the system (36) of the dual problem (35)–(37) is an inequality of the form "≥". If the variable xj can take both positive and negative values, then the j-th relation in system (36) is an equation. Similar connections hold between the constraints (33) of the original problem (32)–(34) and the variables of the dual problem (35)–(37): if the i-th relation in system (33) of the original problem is an inequality, then the i-th variable of the dual problem satisfies yi ≥ 0; otherwise the variable yi can take both positive and negative values.

Dual pairs of problems are usually subdivided into symmetric and asymmetric. In a symmetric pair of dual problems, the constraints (33) of the primal problem and the relations (36) of the dual problem are inequalities of the form "≤" and "≥", respectively. Thus, the variables of both problems can take only non-negative values.
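Rules 1-4 amount to a mechanical transformation of the problem data. For a symmetric pair (max c·x, Ax ≤ b, x ≥ 0 versus min b·y, Aᵀy ≥ c, y ≥ 0) this can be sketched as follows; the numbers are made up purely for illustration:

```python
# Building the dual of a symmetric primal  max c.x, A x <= b, x >= 0:
# the dual is  min b.y, A^T y >= c, y >= 0.

def dual_of(c, A, b):
    """Return (objective coeffs, constraint matrix, rhs) of the dual."""
    # Rule 2: the constraint matrix is transposed.
    AT = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    # Rule 4: b becomes the dual objective, c the dual right-hand sides;
    # rules 1 and 3 are reflected in the min/max swap and the shapes.
    return b, AT, c

c = [3, 2]                     # hypothetical primal data
A = [[1, 2], [3, 1]]
b = [10, 15]
bd, AT, cd = dual_of(c, A, b)
print(bd, AT, cd)  # -> [10, 15] [[1, 3], [2, 1]] [3, 2]
```

Rule 3 can be read off the shapes: the dual has as many variables as the primal has constraints, and vice versa.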

Relation between variables of direct and dual problem. Example.

30.Economic interpretation of dual tasks. The value of zero estimates in solving an economic problem. Examples.

The original problem I had a specific economic meaning: the main variables xi denoted the amounts of products of the i-th type, the additional variables denoted the surpluses of the corresponding types of resources, and each inequality compared the consumption of a certain type of raw material with the stock of that raw material. The objective function gave the profit from selling all the products. Suppose now that the enterprise has the opportunity to sell raw materials on the side. What minimum price should be set for a unit of each type of raw material, provided that the income from selling all of its stock is no less than the income from selling the products that could be produced from this raw material?

Let the variables y1, y2, y3 denote the conditional estimated prices of resources 1, 2, 3, respectively. Then the income from selling the raw materials used to produce one unit of product I is 5y1 + 1·y3. Since the price of a unit of product I is 3, we require 5y1 + y3 ≥ 3, because the interests of the enterprise demand that the income from selling the raw materials be no less than that from selling the product. It is precisely this economic interpretation that gives the system of constraints of the dual problem its form, while the objective function G = 400y1 + 300y2 + 100y3 computes the conditional total cost of all available raw materials. By the first duality theorem, the equality F(x*) = G(y*) means that the maximum profit from selling all finished products coincides with the minimum conditional cost of the resources. The conditional optimal prices yi show the lowest cost of the resources at which it is still profitable to turn these resources into products.

Let us note once again that the yi are only conditional, assumed prices, not real prices of raw materials. Otherwise it might seem strange to the reader that, for example, y1* = 0. This does not at all mean that the real price of the first resource is zero; nothing in this world is free. A zero conditional price only means that this resource has not been used up completely: it is in surplus, not in short supply. Indeed, look at the first inequality in the constraint system of problem I, which accounts for the consumption of the first resource: 5x1* + 0.4x2* + 2x3* + 0.5x4* = 66 < 400. Its surplus is x5 = 334 units under this optimal production plan. This resource is in surplus, so for the producer it is not scarce; its conditional price is 0, and it need not be purchased. On the contrary, resources 2 and 3 are used completely, with y3 = 4 and y2 = 1, i.e. raw material of the third type is scarcer than that of the second, so its conditional price is higher. If the producer could purchase raw materials in addition to those already on hand in order to maximize the income from production, then increasing the raw material of the second type by one unit would bring an additional income of y2 monetary units, while a unit increase of the raw material of the third type would increase the value of the objective function by a further y3 units.

If the manufacturer faces the question "is it profitable to produce a new product, given that a unit of this product requires 3, 1 and 4 units of raw materials of the 1st, 2nd and 3rd types, respectively, and the profit from its sale is 23 units", then by virtue of the economic interpretation of the problem it is easy to answer, since the consumption rates and the conditional prices of the resources are known. The consumption rates are 3, 1, 4, and the prices are y1* = 0, y2* = 1, y3* = 4. Hence the total conditional cost of the resources needed to produce the new product is 3·0 + 1·1 + 4·4 = 17 < 23. Therefore it is profitable to produce the product, since the profit from its sale exceeds the cost of the resources; otherwise the answer to this question would be negative.
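The profitability check above reduces to a single dot product; here it is in code, with the dual prices y* and consumption rates taken from the text:

```python
# Profitability check for a new product: compare its notional resource
# cost (consumption rates times optimal dual prices) with its profit.
y_star = [0, 1, 4]          # optimal dual prices y1*, y2*, y3* from the text
consumption = [3, 1, 4]     # units of raw material 1, 2, 3 per unit of product
profit = 23

cost = sum(a * y for a, y in zip(consumption, y_star))
print(cost, cost < profit)  # -> 17 True: the new product is worth producing
```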

31. Using the optimal plan and simplex table to determine the sensitivity intervals of the initial data.

32.Use of the optimal design and simplex table for sensitivity analysis of the objective function. Example.

Transport problem and its properties. Example.

In the economy, as well as in other areas of human activity or in nature, we constantly have to deal with events that cannot be accurately predicted. Thus, the volume of sales of goods depends on demand, which can vary significantly, and on a number of other factors that are almost impossible to take into account. Therefore, when organizing production and sales, one has to predict the outcome of such activities on the basis of either one's own previous experience, or similar experience of other people, or intuition, which is also largely based on experimental data.

In order to somehow evaluate the event under consideration, it is necessary to take into account or specially organize the conditions in which this event is recorded.

The realization of a certain set of conditions or actions under which the event in question is observed is called a trial, or experiment.

An event is called random if, as a result of the trial, it may or may not occur.

An event is called certain if it necessarily occurs as a result of the given trial, and impossible if it cannot occur in this trial.

For example, snowfall in Moscow on November 30th is a random event. The daily sunrise can be considered a certain event. Snowfall at the equator can be seen as an impossible event.

One of the main problems in probability theory is the problem of determining a quantitative measure of the possibility of an event occurring.

Algebra of events

Events are called incompatible if they cannot occur together in the same trial. Thus, the presence of exactly two and of exactly three cars for sale in one store at the same time are two incompatible events.

The sum of events is an event consisting in the occurrence of at least one of these events.

An example of a sum of events is the presence of at least one of two products in a store.

The product of events is an event consisting in the simultaneous occurrence of all these events.

An event consisting in the appearance of two goods in the store at the same time is the product of two events: the appearance of the first product and the appearance of the second product.

Events form a complete group of events if at least one of them necessarily occurs in the trial.

Example. The port has two berths for ships. Three events can be considered: - the absence of vessels at the berths, - the presence of one vessel at one of the berths, - the presence of two vessels at two berths. These three events form a complete group of events.

Two uniquely possible events that form a complete group are called opposite.

If one of two opposite events is denoted by A, then the opposite event is usually denoted by Ā.

Classical and statistical definitions of the probability of an event

Each of the equally possible results of a trial (experiment) is called an elementary outcome. They are usually denoted by the letters w1, w2, … . For example, a die is thrown; there can be six elementary outcomes according to the number of points on the faces.

From elementary outcomes a more complex event can be composed. Thus, the event "an even number of points" is determined by three outcomes: 2, 4, 6.

A quantitative measure of the possibility of occurrence of the event under consideration is the probability.

Two definitions of the probability of an event are most widely used: classical and statistical.

The classical definition of probability is related to the notion of a favorable outcome.

An outcome is called favorable to a given event if its occurrence entails the occurrence of that event.

In the example given, the event under consideration, an even number of points on the upturned face, has three favorable outcomes; the total number of possible outcomes is six. So here the classical definition of the probability of an event can be used.

Classical definition. The probability of an event A equals the ratio of the number of outcomes favorable to the event to the total number of possible outcomes:

P(A) = m / n, (1.1)

where P(A) is the probability of the event A, m is the number of outcomes favorable to the event A, and n is the total number of possible outcomes.

In the example considered, P(A) = 3/6 = 1/2.
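The die example can be verified by direct enumeration of the elementary outcomes:

```python
# Classical probability P(A) = m / n for the die example above.
outcomes = [1, 2, 3, 4, 5, 6]                    # elementary outcomes of one roll
favorable = [w for w in outcomes if w % 2 == 0]  # event A: an even number of points
p = len(favorable) / len(outcomes)               # P(A) = m / n
print(p)   # -> 0.5
```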

The statistical definition of probability is associated with the concept of the relative frequency of occurrence of an event in experiments.

The relative frequency of occurrence of an event is calculated by the formula

W(A) = m / n,

where m is the number of occurrences of the event in a series of n experiments (trials).

Statistical definition. The probability of an event is the number around which the relative frequency stabilizes (settles) as the number of experiments increases without bound.

In practical problems, the relative frequency for a sufficiently large number of trials is taken as the probability of an event.
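This stabilization is easy to observe in a simulation; the sketch below tosses a simulated fair coin (the fixed seed is there only for reproducibility):

```python
# Statistical definition in action: the relative frequency of "heads"
# settles near 0.5 as the number of simulated tosses grows.
import random

random.seed(1)   # fixed seed, for reproducibility only
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    freq = heads / n
    print(n, freq)   # the relative frequency approaches 0.5
```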

From these definitions of the probability of an event it can be seen that the inequality 0 ≤ P(A) ≤ 1 always holds.

To determine the probability of an event based on formula (1.1), combinatorics formulas are often used to find the number of favorable outcomes and the total number of possible outcomes.

Probability theory is a mathematical science that studies the regularities of random phenomena. Random phenomena are understood as phenomena with an uncertain outcome that occur when a certain set of conditions is repeatedly reproduced.

For example, when you toss a coin you cannot predict which side up it will land. The result of tossing a coin is random. But with a sufficiently large number of coin tosses a definite pattern emerges: heads and tails occur approximately the same number of times.

Basic concepts of probability theory

A trial (experiment) is the realization of a certain set of conditions under which a particular phenomenon is observed and a particular result is recorded.

For example: throwing a die and recording the points rolled; a change in air temperature; a method of treating a disease; some period of a person's life.

A random event (or simply an event) is an outcome of a trial.

Examples of random events:

    rolling one point when throwing a die;

    exacerbation of coronary heart disease with a sharp increase in air temperature in summer;

    the development of complications of the disease with the wrong choice of treatment method;

    admission to a university with successful study at school.

Events are denoted by capital letters of the Latin alphabet: A, B, C, …

An event is called certain if, as a result of the trial, it must necessarily occur.

An event is called impossible if, as a result of the trial, it cannot occur at all.

For example, if all the products in a batch are standard, then drawing a standard product from it is a certain event, while drawing a defective product under the same conditions is an impossible event.

CLASSICAL DEFINITION OF PROBABILITY

Probability is one of the basic concepts of probability theory.

The classical probability of an event A is the ratio of the number of cases favorable to the event A to the total number of cases, i.e.

P(A) = m / n, (5.1)

where

P(A) is the probability of the event A,

m is the number of cases favorable to the event A,

n is the total number of cases.

Event Probability Properties

    The probability of any event lies between zero and one, i.e. 0 ≤ P(A) ≤ 1.

    The probability of a certain event is equal to one, i.e. P(A) = 1.

    The probability of an impossible event is zero, i.e. P(A) = 0.

(Offer to solve a few simple problems orally).

STATISTICAL DEFINITION OF PROBABILITY

In practice, often when evaluating the probabilities of events, they are based on how often a given event will occur in the tests performed. In this case, the statistical definition of probability is used.

The statistical probability of an event A is the limit of the relative frequency (the ratio of the number of cases m favorable to the occurrence of the event A to the total number n of trials performed) as the number of trials tends to infinity, i.e.

P*(A) = lim(n→∞) m / n,

where

P*(A) is the statistical probability of the event A,

m is the number of trials in which the event A appeared, and n is the total number of trials.

Unlike classical probability, statistical probability is an experimental characteristic. Classical probability serves for the theoretical calculation of the probability of an event under given conditions and does not require that trials actually be carried out. The statistical probability formula serves for the experimental determination of the probability of an event, i.e. it assumes that trials have actually been performed.

The statistical probability is approximately equal to the relative frequency of the random event; therefore, in practice, the relative frequency is taken as the statistical probability, since the limit itself is practically impossible to find.

The statistical definition of probability applies to random events that have the following properties: the trials can, at least in principle, be repeated an unlimited number of times under the same set of conditions, and the relative frequency of the event is statistically stable across different series of trials.

Theorems of addition and multiplication of probabilities

Basic concepts

a) Events are called the only possible ones if, as a result of each trial, at least one of them is certain to occur.

Such events form a complete group of events.

For example, when a die is rolled, the only possible events are the rolls of the faces with one, two, three, four, five and six points. They form a complete group of events.

b) Events are called incompatible if the occurrence of one of them excludes the occurrence of other events in the same trial. Otherwise, they are called joint.

c) Two uniquely possible events that form a complete group are called opposite; they are denoted A and Ā.

d) Events are called independent if the probability of occurrence of one of them does not depend on whether or not the others occur.

Actions on events

The sum of several events is an event consisting in the occurrence of at least one of these events.

If A and B are joint events, then their sum A + B (or A ∪ B) denotes the occurrence of either event A, or event B, or both events together.

If A and B are incompatible events, then their sum A + B means the occurrence of either event A or event B.

The sum of events is denoted A + B or A ∪ B.

The product (intersection) of several events is an event consisting in the joint occurrence of all these events.

The product of two events A and B is denoted AB or A ∩ B.

The addition theorem for the probabilities of incompatible events

The probability of the sum of two or more incompatible events is equal to the sum of the probabilities of these events:

P(A + B) = P(A) + P(B) (for two events);

P(A1 + A2 + … + An) = P(A1) + P(A2) + … + P(An) (for n events).

Consequences:

a) The sum of the probabilities of opposite events A and Ā is equal to one:

P(A) + P(Ā) = 1.

The probability of the opposite event is denoted q: q = P(Ā) = 1 - P(A).

b) The sum of the probabilities of events A1, A2, …, An that form a complete group of events is equal to one:

P(A1) + P(A2) + … + P(An) = 1.
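The addition theorem for incompatible events can be checked on the die example:

```python
# Addition theorem for incompatible events on one die roll:
# A = "one or two points" and B = "six points" cannot occur together,
# so P(A + B) = P(A) + P(B).
n = 6                      # total number of equally possible outcomes
A = {1, 2}                 # event A: one or two points
B = {6}                    # event B: six points

def p(event):              # classical probability m / n
    return len(event) / n

assert not (A & B)         # A and B are incompatible (empty intersection)
print(p(A | B), p(A) + p(B))   # -> 0.5 0.5
```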

Addition theorem for joint event probabilities

The probability of the sum of two joint events is equal to the sum of the probabilities of these events minus the probability of their product, i.e. P(A + B) = P(A) + P(B) - P(AB).

Probability multiplication theorem

a) For two independent events:

P(AB) = P(A) · P(B).

b) For two dependent events:

P(AB) = P(A) · P(B|A),

where P(B|A) is the conditional probability of the event B, i.e. the probability of the event B calculated under the condition that the event A has occurred.

c) For n independent events:

P(A1A2…An) = P(A1) · P(A2) · … · P(An).

d) The probability of the occurrence of at least one of the events A1, A2, …, An forming a complete group of independent events:

P = 1 - q1q2…qn,

where qi = P(Āi) is the probability of the event opposite to Ai.
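Corollary d) can be verified by brute force: enumerate all joint outcomes of several independent events (the probabilities below are made up purely for illustration):

```python
# Probability of at least one occurrence among independent events:
# the formula 1 - q1*q2*q3 versus direct enumeration of joint outcomes.
from itertools import product

p = [0.5, 0.4, 0.3]            # hypothetical independent events A1, A2, A3
q = [1 - pi for pi in p]       # probabilities of the opposite events

formula = 1 - q[0] * q[1] * q[2]   # P = 1 - q1*q2*q3

# Brute-force check: sum the probabilities of all joint outcomes in
# which at least one of the events occurs.
total = 0.0
for bits in product([0, 1], repeat=3):
    if any(bits):
        w = 1.0
        for b, pi, qi in zip(bits, p, q):
            w *= pi if b else qi
        total += w
print(formula, total)   # both equal 0.79
```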

Conditional Probability

The probability of an event B, calculated under the assumption that an event A has occurred, is called the conditional probability of the event B and is denoted P(B|A) or PA(B).

When calculating the conditional probability by the classical probability formula, the numbers of outcomes m and n are counted taking into account the fact that the event A has already occurred.
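The last remark can be illustrated with the classical formula: to compute a conditional probability, restrict the set of outcomes to those in which A has occurred. For the die, take A = "an even number of points" and B = "more than three points":

```python
# Conditional probability via the classical formula: count m and n
# inside the restricted outcome space where A has occurred.
outcomes = set(range(1, 7))    # elementary outcomes of one die roll
A = {2, 4, 6}                  # A: an even number of points
B = {4, 5, 6}                  # B: more than three points

p_B_given_A = len(A & B) / len(A)
print(p_B_given_A)   # -> 2/3 ~ 0.6667
```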