The polynomial of best root-mean-square approximation. Coursework on numerical methods for solving typical mathematical problems



Often the values y1, y2, ..., yn of the function being interpolated are determined from an experiment with some errors, so it is unreasonable to demand exact agreement at the interpolation nodes. In this case it is more natural to approximate the function not pointwise but in the mean, i.e., in one of the L_p norms.

The space L_p[a, b] is the set of functions f(x), defined on the segment [a, b], whose modulus is integrable to the p-th power, with the norm defined by

||f||_p = ( ∫_a^b |f(x)|^p dx )^(1/p).

Convergence in such a norm is called convergence in the mean. The space L_2 is called a Hilbert space, and convergence in it is called root-mean-square convergence.

Let a function f(x) and a set of functions φ(x) from some linear normed space be given. In the context of the problems of interpolation and approximation, the following two problems can be formulated.

The first problem is approximation with a given accuracy: for a given ε > 0, find a φ(x) such that the inequality ||f(x) − φ(x)|| ≤ ε holds.

The second problem is the search for the best approximation, i.e., for a function φ*(x) that satisfies the relation

||f(x) − φ*(x)|| = inf over φ of ||f(x) − φ(x)||.

Let us state without proof a sufficient condition for the existence of the best approximation. To do this, in the linear space of functions we choose a set parametrized by the expression

φ(x) = c1 φ1(x) + ... + cn φn(x), (2.16)

where the set of functions φ1(x), ..., φn(x) is assumed to be linearly independent.

It can be shown that in any normed space, under the linear approximation (2.16), the best approximation exists, although it may not be unique.

Let us consider the Hilbert space L2(ρ) of real functions square-integrable with weight ρ(x) > 0 on [a, b], where the scalar product (g, h) is defined by the formula:

(g, h) = ∫_a^b ρ(x) g(x) h(x) dx.

Substituting the linear combination (2.16) into the best-approximation condition, we obtain the norm of the error as a function of the coefficients c_k.

Equating to zero the derivatives with respect to the coefficients c_k, k = 1, ..., n, we obtain the system of linear equations

Σ_{j=1..n} (φ_j, φ_k) c_j = (f, φ_k), k = 1, ..., n. (2.17)

The determinant of the system of equations (2.17) is called the Gram determinant. The Gram determinant is nonzero, since the system of functions φ1(x), ..., φn(x) is assumed to be linearly independent.

Thus, the best approximation exists and is unique. To obtain it, one must solve the system of equations (2.17). If the system of functions φ1(x), ..., φn(x) is orthonormalized, i.e., (φ_i, φ_j) = δ_ij, i, j = 1, ..., n, then the system of equations has the explicit solution:

c_k = (f, φ_k), k = 1, ..., n. (2.18)

The coefficients c1, ..., cn found according to (2.18) are called the coefficients of the generalized Fourier series.

If the set of functions φ1(x), ..., φn(x), ... forms a complete system, then, by virtue of Parseval's equality, the norm of the error decreases indefinitely as n → ∞. This means that the best approximation converges to f(x) in the root-mean-square sense with any given accuracy.
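As an illustration of the generalized Fourier coefficients c_k = (f, φ_k), the following sketch builds the truncated series in an orthonormalized Legendre basis on [-1, 1] and shows the mean-square error decreasing as terms are added. Python is used for illustration (the text itself works in MATLAB and Turbo Pascal), and the function f(x) = exp(x) and the Legendre basis are our own choices, not taken from the text.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Illustrative sketch: best mean-square approximation of f(x) = exp(x)
# on [-1, 1] in an orthonormalized Legendre basis, c_k = (f, phi_k).
f = np.exp
x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

def inner(u, v):
    """Trapezoidal approximation of the scalar product (u, v) on [-1, 1]."""
    w = u * v
    return (0.5 * w[0] + 0.5 * w[-1] + np.sum(w[1:-1])) * dx

def phi(k):
    """Orthonormalized Legendre polynomial: (phi_i, phi_j) = delta_ij."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return L.legval(x, c) * np.sqrt((2 * k + 1) / 2.0)

def best_approx(n):
    """Generalized Fourier series of f truncated after n + 1 terms."""
    s = np.zeros_like(x)
    for k in range(n + 1):
        s += inner(f(x), phi(k)) * phi(k)   # c_k = (f, phi_k)
    return s

# By Parseval's equality the mean-square error shrinks as terms are added.
errors = [np.sqrt(inner(f(x) - best_approx(n), f(x) - best_approx(n)))
          for n in range(6)]
print(errors)
```

Because the basis is orthonormal, each coefficient is a single scalar product and no linear system has to be solved.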

We note that finding the coefficients of the best approximation by solving the system of equations (2.17) is practically unrealizable, since as the order of the Gram matrix increases, its determinant rapidly tends to zero and the matrix becomes ill-conditioned. Solving a system of linear equations with such a matrix leads to a significant loss of accuracy. Let us verify this.

Suppose the powers are chosen as the system of functions φ_i, i = 1, ..., n, i.e., φ_i = x^(i−1), i = 1, ..., n; then, taking [0, 1] as the approximation segment, we find the Gram matrix

G_ij = ∫_0^1 x^(i+j−2) dx = 1/(i + j − 1), i, j = 1, ..., n. (2.19)

The Gram matrix of the form (2.19) is also called the Hilbert matrix. This is a classic example of a so-called ill-conditioned matrix.

Using MATLAB, let us calculate the determinant of the Hilbert matrix of the form (2.19) for the first few values of n. Listing 2.5 shows the code of the corresponding program.

Listing 2.5

% Calculate the determinants of Hilbert matrices
% clear the workspace
clear all;

% choose the maximum order of the Hilbert matrix
nmax = 6;

% loop to generate Hilbert matrices
% and calculate their determinants
for n = 1:nmax
    d(n) = det(hilb(n));
end

% display the values of the determinants
% of the Hilbert matrices
format short
d

After running the code of Listing 2.5, the determinant values for the first six Hilbert matrices appear in the MATLAB command window. The table below shows the corresponding matrix orders (n) and determinants (d). It clearly shows how quickly the determinant of the Hilbert matrix tends to zero as the order increases; at orders 5 and 6 it already becomes unacceptably small.

Table of values of the determinant of Hilbert matrices

n   1        2        3           4           5           6
d   1.0000   0.0833   4.6296e-04  1.6534e-07  3.7493e-12  5.3673e-18
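The determinants from Listing 2.5 can be cross-checked in Python; this sketch (NumPy assumed available) builds the Hilbert matrix directly from formula (2.19):

```python
import numpy as np

# Cross-check of Listing 2.5: build the n-th order Hilbert matrix
# H_ij = 1/(i + j - 1) (1-based indices) and take its determinant.
def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)   # zero-based i, j, hence +1 instead of -1

for n in range(1, 7):
    print(n, np.linalg.det(hilbert(n)))
```

The printed values fall off super-exponentially, in agreement with the MATLAB run: det is 1 for n = 1 and about 1/12 for n = 2, and by n = 6 it is below 1e-17.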

Numerical orthogonalization of the system of functions φ_i, i = 1, ..., n, also leads to a noticeable loss of accuracy; therefore, in order to take into account a large number of terms in expansion (2.16), one must either carry out the orthogonalization analytically, i.e., exactly, or use a ready-made system of orthogonal functions.

Whereas in interpolation the powers of x are usually used as the system of basis functions, in mean-square approximation the basis functions are chosen to be polynomials orthogonal with a given weight. The most common of these are the Jacobi polynomials, special cases of which are the Legendre and Chebyshev polynomials. Laguerre and Hermite polynomials are also used. More details about these polynomials can be found, for example, in appendices on orthogonal polynomials in textbooks on the subject.

3. RMS approximation of the function

3.1 Statement of the problem

Develop an algorithm scheme and write a program in Turbo Pascal 7.0 that performs the root-mean-square approximation of a function given at nodes.

3.2 Mathematical formulation of the problem

Let there be a set of functions φ(x) belonging to a linear space of functions. By closeness in the mean of the interpolated and interpolating functions we mean the result of estimating the integral

I = ∫_a^b ρ(x) [f(x) − φ(x)]^2 dx, (3.1)

where ρ(x) is the weight function.

This approximation is called root mean square.

3.3 Review of existing numerical methods for solving the problem

The root-mean-square approximation problem arises in many areas of applied research, for example, in statistical processing of experimental data using regression analysis, in estimating model parameters, in filtering problems, etc.

When the level of uncertainty in specifying the approximated function f(x_i), i = 1, ..., m, is large enough, which is typical for the processing of experimental data, it makes no sense to demand that the interpolation conditions be met; moreover, the number of points at which the function f(x_i) is specified is often quite large. All this makes the use of interpolation unpromising because of the ill-conditioning of the problem, its high dimension, and the problems of convergence of the interpolation process.

One of the simplest, and therefore most widely used, approximating functions is the algebraic polynomial

P_n(x) = a_0 + a_1 x + ... + a_n x^n. (3.2)

The root-mean-square approximation method constructs the polynomial P_n(x) by minimizing the value

D(a) = Σ_{i=1..m} [P_n(x_i) − f(x_i)]^2 (3.3)

in the space of parameters a_0, a_1, ..., a_n. There are various approaches to solving the problem of minimizing the function D(a). The simplest of them leads to the solution of the normal system of linear algebraic equations (3.4).

The approximation method considered here minimizes the root-mean-square deviation of the approximating polynomial from the function being approximated, but it does not guard against significant local errors. To exclude this possibility, polynomials of best uniform approximation are used.

However, already for n > 5 the matrix of such a system turns out to be so ill-conditioned that the values a_j obtained from (3.4) are of little use for calculating P_n(x). Therefore, if polynomials of best mean-square approximation of higher degrees need to be constructed, other algorithms are used, for example, the singular value decomposition method.
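A minimal sketch of the SVD alternative mentioned above, in Python rather than the text's MATLAB/Pascal: numpy.linalg.lstsq solves the least-squares problem through an SVD of the design matrix, so the ill-conditioned matrix of the normal system is never formed explicitly. The test function and noise level are our own illustrative choices.

```python
import numpy as np

# Fit a degree-8 polynomial to noisy samples of cos(3x) on [0, 1]
# (illustrative data, not from the text).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.cos(3 * x) + 0.01 * rng.standard_normal(x.size)

n = 8                                      # polynomial degree
V = np.vander(x, n + 1, increasing=True)   # Vandermonde design matrix

# Normal equations would use V^T V, whose condition number is roughly
# the square of cond(V) -- this is the ill-conditioning the text warns about.
print(np.linalg.cond(V.T @ V), np.linalg.cond(V))

# lstsq works with V directly via SVD, avoiding the squaring.
a, *_ = np.linalg.lstsq(V, y, rcond=None)
residual = np.linalg.norm(V @ a - y)
print(residual)   # close to the noise level
```

The same coefficients come out of `numpy.polynomial.polynomial.polyfit`, which also goes through a least-squares solver rather than the normal equations.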

3.4 Numerical method for solving the problem

Two problems can be considered:

1) choose a function φ(x) so that the inequality ||f(x) − φ(x)|| ≤ ε is fulfilled;

2) find the best approximation, i.e., a function φ*(x) such that the relation

||f(x) − φ*(x)|| = inf over φ of ||f(x) − φ(x)|| (3.6)

holds.

We expand the function in terms of a system of linearly independent functions φ_k(x):

φ(x) = Σ_{k=0..n} a_k φ_k(x). (3.7)

In what follows, to shorten the notation, we will use the definition of the scalar product in the space of functions:

(g, h) = ∫_a^b ρ(x) g(x) h(x) dx.

Substituting (3.7) into condition (3.6), differentiating the result with respect to the coefficients a_k, and equating the derivatives to zero, we obtain

Σ_{j=0..n} (φ_j, φ_k) a_j = (f, φ_k), k = 0, 1, ..., n. (3.8)

The determinant of this system is the Gram determinant of the functions φ_k. By virtue of their linear independence, this determinant is not equal to zero. Therefore, from system (3.8) one can find the coefficients a_k that define the function φ*(x) according to (3.6) and minimize the integral of the error. Thus, the best root-mean-square approximation exists and is unique.

When an orthonormal system of functions is used, system (3.8) simplifies to

a_k = (f, φ_k), k = 0, 1, ..., n,

i.e., the a_k are the Fourier coefficients, and the best approximation is the Fourier series terminated at some term.

It is proved that in any linear normed space, under a linear approximation of the form (3.7), the best approximation exists, although it may not be unique.

In cases where the functions φ_k are not orthogonal, the Gram determinant decreases as n grows, approaching zero. The system then becomes ill-conditioned, and its solution gives a large error. In this situation, usually no more than five or six terms are taken in the sum (3.7).

The polynomials most commonly used are the Legendre, Chebyshev, Laguerre, and Hermite polynomials, which are orthogonal with a given weight.

Let us consider the special case when it is necessary to find the best approximation of a function given by a table. For real functions defined on a finite set of points, the scalar product is defined by the formula

(g, h) = Σ_{i=1..m} g(x_i) h(x_i), (3.9)

where m is the number of specified nodes.

The condition for the best root-mean-square approximation is written as follows:

Σ_{i=1..m} [f(x_i) − φ(x_i)]^2 = min. (3.10)

Setting φ(x) = P_n(x), where P_n(x) = Σ_{k=0..n} a_k φ_k(x), and substituting this polynomial into (3.10), we arrive at system (3.8), in which the scalar products are calculated according to (3.9). The described approximation procedure is called the method of least squares.

The most common variant of the least squares method corresponds to the case of power-type basis functions, i.e., φ_k(x) = x^k, k = 0, 1, ..., n.

The system of equations (3.8) then takes the form

Σ_{j=0..n} a_j Σ_{i=1..m} x_i^(k+j) = Σ_{i=1..m} y_i x_i^k, k = 0, 1, ..., n. (3.11)
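System (3.11) can be assembled and solved directly for small n; the following Python sketch (the data are our own illustrative choice) implements exactly the sums that appear in (3.11):

```python
import numpy as np

# Direct implementation of system (3.11) for a low-degree polynomial
# (safe only for small n, as the text warns).
def ls_poly(x, y, n):
    """Coefficients a_0..a_n of the least-squares polynomial via (3.11)."""
    # c_kj = sum_i x_i^(k+j),  d_k = sum_i y_i * x_i^k
    C = np.array([[np.sum(x ** (k + j)) for j in range(n + 1)]
                  for k in range(n + 1)])
    d = np.array([np.sum(y * x ** k) for k in range(n + 1)])
    return np.linalg.solve(C, d)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                 # exactly linear data, so the fit is exact
a = ls_poly(x, y, 1)
print(a)                          # approximately [1.0, 2.0]
```

With exactly linear data the degree-1 fit recovers the coefficients a_0 = 1 and a_1 = 2, which is a convenient sanity check of the assembled system.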


LABORATORY WORK

RMS APPROXIMATION OF TABULATED FUNCTIONS BY THE LEAST SQUARES METHOD

Target: familiarization of students with the basic methods of interpolation and approximation of tabulated functions; consolidation in practice of the acquired knowledge in the field of approximating such functions.

Task: to teach students the practical application of the acquired theoretical knowledge in solving problems of smoothing experimental results with polynomials, both in the algorithmization of such problems and in their programming.

THEORETICAL PROVISIONS

Interpolation and Approximation

In practice a situation often occurs in which a certain function f(x) is given by a table of its values at individual points x = x_0, x_1, ..., x_n of [a, b] (for example, discrete readings of an instrument in time), and the function f(x) must be calculated at some intermediate points. This problem can be solved approximately by replacing the function f(x) with a simpler continuous function F(x). There are two main ways to do this: interpolation and approximation.

The essence of interpolation is the construction of an easily calculated function F(x) that coincides with the function f(x) at the points x = x_0, x_1, ..., x_n. In other words, the graph of the function F(x) in the Oxy plane must pass through the points x = x_0, x_1, ..., x_n at which the function f(x) is specified. The points x = x_0, x_1, ..., x_n are called interpolation nodes, and the function F(x) the interpolating function. In most cases polynomials are chosen as the interpolating function. Thus, linear interpolation consists simply in connecting the successive points (x_0, f(x_0)), (x_1, f(x_1)), ..., (x_n, f(x_n)) by straight-line segments, i.e., in constructing n polynomials of the first degree. The value of the function f(x) at a point x*, where x* lies in (x_i, x_{i+1}), i = 0, 1, ..., n − 1, is then calculated quite simply:

f(x*) = f(x_i) + [(f(x_{i+1}) − f(x_i)) / (x_{i+1} − x_i)] · (x* − x_i).
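The piecewise-linear formula above can be sketched as follows (Python used for illustration; the nodes and values are hypothetical):

```python
# Piecewise-linear interpolation: f(x*) = f(x_i) + slope * (x* - x_i),
# where slope = (f(x_{i+1}) - f(x_i)) / (x_{i+1} - x_i).
def lin_interp(xs, ys, x_star):
    """Evaluate the linear interpolant of the table (xs, ys) at x_star."""
    for i in range(len(xs) - 1):
        if xs[i] <= x_star <= xs[i + 1]:
            slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
            return ys[i] + slope * (x_star - xs[i])
    raise ValueError("x* lies outside the interpolation nodes")

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]            # samples of f(x) = x^2, for illustration
print(lin_interp(xs, ys, 1.5))  # 2.5: midpoint of the segment from 1 to 4
```

Note that the interpolant reproduces the table values exactly at the nodes and is only approximate in between (here 2.5 instead of the true 1.5^2 = 2.25).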

Quadratic interpolation consists in connecting successive triples of interpolation nodes by parabolas; cubic interpolation connects quadruples by cubic parabolas, and so on. An interpolation polynomial of degree n is a smooth function passing through all n + 1 interpolation nodes. When additional smoothness conditions are imposed on the junctions of the function F(x) at the points (x_1, f(x_1)), (x_2, f(x_2)), ..., (x_{n−1}, f(x_{n−1})), we obtain so-called spline interpolation. Many methods have been developed for constructing interpolation polynomials: those of Newton, Stirling, Lagrange, and others.

In many cases, having the values of a function at n + 1 nodes, it is convenient to find, instead of the interpolation polynomial, a polynomial of degree m < n that approximates the considered function well. The requirement that the functions f(x) and F(x) coincide at the points (x_0, f(x_0)), (x_1, f(x_1)), ..., (x_n, f(x_n)) is then replaced by the requirement of minimizing the total deviation between the values of f(x) and F(x) at the points x = x_0, x_1, ..., x_n.

One of the main methods of constructing an approximating polynomial is the method of least squares, which requires that the sum of the squared deviations between the values of the function and the values of the approximating function at the nodes be minimal. Why squares? Because the deviations between the values of the functions themselves can be both positive and negative, and their sum does not give a true idea of the difference between the functions, since positive and negative values compensate each other. One could take the moduli of the deviations, but the squares of the deviations are more convenient to work with.

Root-mean-square approximation of tabularly defined functions

(the method of least squares)

Let the values y_0, y_1, ..., y_n of a function f(x) be given at the nodes x_0, x_1, ..., x_n. Among the polynomials of m-th degree (m < n)

P_m(x) = a_0 + a_1 x + a_2 x^2 + ... + a_m x^m (1)

find the one that minimizes the expression

S = Σ_{i=0..n} [P_m(x_i) − y_i]^2. (2)

The unknowns here are the coefficients of the polynomial (1). The sum (2) is a quadratic form in these coefficients. Moreover, formula (2) shows that the function S = S(a_0, a_1, ..., a_m) cannot take negative values; therefore the minimum of the function S exists.

Applying the necessary conditions for the extremum of the function S = S(a 0 , a 1 , … , a m), we obtain a system of linear algebraic equations for determining the coefficients a 0 , a 1 , … , a m:

Σ_{i=0..n} [P_m(x_i) − y_i] x_i^k = 0, (k = 0, 1, 2, ..., m) (3)

Setting c_{kp} = Σ_{i=0..n} x_i^(k+p), d_k = Σ_{i=0..n} y_i x_i^k, we write system (3) in matrix form

Ca = d, (4)

where C = (c_{kp}) is the matrix of the system, a = (a_0, a_1, ..., a_m)^T is the vector of unknowns, and d = (d_0, d_1, ..., d_m)^T is the vector of the right-hand sides of the system.

If among the nodes x_0, x_1, ..., x_n there are no coincident ones and m ≤ n, then system (4) has a unique solution a_0 = a_0*, a_1 = a_1*, ..., a_m = a_m*. The polynomial

P_m*(x) = a_0* + a_1* x + a_2* x^2 + ... + a_m* x^m

is then the only polynomial of degree m that has the minimum squared deviation S* = S_min.

The error of the root-mean-square approximation of the function is characterized by the value δ.

The simplest and most commonly used type of root-mean-square approximation of a function is the linear one. Here the data (x_i, y_i) are approximated by a linear function y(x) = ax + b, which, as is known, is represented on the coordinate plane (x, y) by a straight line.

Example. Smooth the following system of points by the straight line y = ax + b.

x: −1  0  1  2    3  4
y:  0  2  3  3.5  3  4.5

Building a worksheet:

no.  x_i   y_i   x_i^2  x_i*y_i  ax_i + b  ax_i + b − y_i  (ax_i + b − y_i)^2
1    −1    0     1      0        0.81       0.81            0.6561
2     0    2     0      0        1.55      −0.45            0.2025
3     1    3     1      3        2.29      −0.71            0.5041
4     2    3.5   4      7        3.03      −0.47            0.2209
5     3    3     9      9        3.77       0.77            0.5929
6     4    4.5   16     18       4.51       0.01            0.0001
Σ     9    16    31     37

The system for determining a and b is

31a + 9b = 37,
9a + 6b = 16.

Let us solve it by Cramer's formulas:

Δ = 31·6 − 9·9 = 105, Δ1 = 37·6 − 9·16 = 78, Δ2 = 31·16 − 37·9 = 163,

a = Δ1/Δ = 78/105 ≈ 0.74, b = Δ2/Δ = 163/105 ≈ 1.55.

The desired equation is y = 0.74x + 1.55.
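The arithmetic of this example is easy to cross-check programmatically; the following Python sketch rebuilds the normal system from the same six points and solves it by Cramer's rule:

```python
# Cross-check of the worked example: fit y = a*x + b to the six points
# by solving the normal equations with Cramer's rule.
xs = [-1, 0, 1, 2, 3, 4]
ys = [0, 2, 3, 3.5, 3, 4.5]
n = len(xs)
Sx  = sum(xs)                             # sum of x_i
Sy  = sum(ys)                             # sum of y_i
Sxx = sum(x * x for x in xs)              # sum of x_i^2
Sxy = sum(x * y for x, y in zip(xs, ys))  # sum of x_i*y_i

# Normal equations: Sxx*a + Sx*b = Sxy,  Sx*a + n*b = Sy
delta  = Sxx * n - Sx * Sx
delta1 = Sxy * n - Sx * Sy
delta2 = Sxx * Sy - Sxy * Sx
a, b = delta1 / delta, delta2 / delta
print(round(a, 2), round(b, 2))   # 0.74 1.55
```

The determinants come out as Δ = 105, Δ1 = 78, Δ2 = 163, matching the hand computation in the text.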

In order to smooth the discrete Altman functions, and thereby introduce the idea of continuity into the theory, root-mean-square integral approximation by polynomials of various degrees was used.

It is known that a sequence of interpolation polynomials over equidistant nodes does not necessarily converge to the function, even if the function is infinitely differentiable. For the approximated function, a suitable arrangement of nodes makes it possible to reduce the degree of the polynomial. The structure of the Altman functions is such that it is more convenient to approximate the function not by interpolation but by constructing the best root-mean-square approximation in a normed linear space. Let us consider the basic concepts and facts involved in constructing the best approximation. Approximation and optimization problems are posed in linear normed spaces.

Metric and linear normed spaces

The broadest concepts of mathematics include "set" and "mapping". The terms "set", "collection", "family", "system", and "class" are considered synonyms in non-strict set theory.

The term "operator" is identical to the term "mapping". The terms "operation", "function", "functional", "measure" are special cases of the concept "mapping".

The terms "structure" and "space" have also now acquired fundamental significance in the axiomatic construction of mathematical theories. Mathematical structures include set-theoretic structures (ordered and partially ordered sets); abstract algebraic structures (semigroups, groups, rings, division rings, fields, algebras, lattices); and differential structures (exterior differential forms, fiber spaces).

A structure is understood as a finite collection consisting of a carrier (the main set), a numerical field (an auxiliary set), and a mapping defined on the elements of the carrier and the numbers of the field. If the set of complex numbers is taken as the carrier, it plays the role of both the main and the auxiliary set. The term "structure" is identical to the concept of "space".

To define a space it is first of all necessary to define a carrier set with its elements (points), denoted by Latin and Greek letters.

Sets of real (or complex) elements can act as the carrier: numbers; vectors; matrices; sequences; functions.

The carrier elements can also be sets: the real axis, the plane, three-dimensional (and multidimensional) space, permutations, motions, abstract sets.

Definition. A metric space is a structure forming a triple (M, ρ, R), where the mapping ρ(x, y) is a non-negative real function of two arguments, defined for any x and y from M and satisfying three axioms:

  • 1. ρ(x, y) ≥ 0, and ρ(x, y) = 0 if and only if x = y (non-negativity);
  • 2. ρ(x, y) = ρ(y, x) (symmetry);
  • 3. ρ(x, y) ≤ ρ(x, z) + ρ(z, y) (the triangle inequality),

where ρ(x, y) is the distance between the elements x and y.

In a metric space a metric is specified, and the concept of proximity of two elements of the carrier set is thereby formed.

Definition. A real linear (vector) space is a structure (B, +, ·, R), where the mapping + is the additive operation of adding elements belonging to B, and the mapping · is the operation of multiplying a number by an element from B.

The operation + means that for any two elements x and y a third element is uniquely defined, called their sum and denoted by x + y, and the following axioms hold.

  • 1. x + y = y + x (the commutative property).
  • 2. (x + y) + z = x + (y + z) (the associative property).
  • 3. There is a special element 0 in B such that x + 0 = x holds for any x.
  • 4. For any x there exists an element −x such that x + (−x) = 0. The element −x is called the opposite of x.

The operation · means that for any element x and any number α an element αx is defined, and the following axioms are satisfied:

  • 5. α(βx) = (αβ)x.
  • 6. 1 · x = x.
  • 7. (α + β)x = αx + βx.
  • 8. α(x + y) = αx + αy.

An element (point) of a linear space is also called a vector. Axioms 1-4 define an additive group, called a module and representing a structure.

If an operation in a structure does not obey any axioms, such a structure is called a groupoid. This structure is extremely poor. If it satisfies only the axiom of associativity, the structure is called a monoid (semigroup).

In the structure (B, +, ·, R), the property of linearity is specified by these mappings and axioms 1-8.

Thus, a linear space is a group (module) to whose structure one more operation is added: multiplication of carrier elements by a number, with four axioms. If instead, along with the operation +, one more group operation of multiplication of elements (with four axioms) is introduced and the axiom of distributivity is postulated, a structure called a field arises.

Definition. A linear normed space is a structure in which the mapping ||·|| satisfies the following axioms:

  • 1. ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0.
  • 2. ||αx|| = |α| · ||x||.
  • 3. ||x + y|| ≤ ||x|| + ||y||.

Thus, there are 11 axioms in total.

For example, if we add to the structure of the field of real numbers the modulus |x|, which has all three properties of a norm, then the field of real numbers becomes a normed space.

There are two common ways to introduce a norm: either by explicitly specifying the form of a homogeneous convex functional, or by specifying a scalar product.

Let x = (x_1, ..., x_n); then the form of the functional can be specified in an infinite number of ways by changing the value of p:

  • 1. ||x||_1 = |x_1| + ... + |x_n|, p = 1;
  • 2. ||x||_2 = (x_1^2 + ... + x_n^2)^(1/2), p = 2;

and so on for other values of p.

The second common way is to introduce into the structure of the space another mapping: a function of two arguments, usually denoted by (x, y) and called the scalar product.

Definition. A Euclidean space is a structure in which the norm is generated by a scalar product satisfying the axioms:

  • 1. (x, y) = (y, x);
  • 2. (x + y, z) = (x, z) + (y, z);
  • 3. (αx, y) = α(x, y);
  • 4. (x, x) ≥ 0, and (x, x) = 0 if and only if x = 0.

In a Euclidean space the norm is generated by the formula ||x|| = √(x, x). It follows from properties 1-4 of the scalar product that all the axioms of the norm are then satisfied. If the scalar product is taken in the form (x, y) = x_1 y_1 + ... + x_n y_n, the norm is calculated by the formula ||x|| = (x_1^2 + ... + x_n^2)^(1/2).

Not every norm can be specified by means of a scalar product.

In spaces with a scalar product, qualities appear that are absent in general linear normed spaces (orthogonality of elements, the parallelogram equality, the Pythagorean theorem, Apollonius's identity, Ptolemy's inequality). The introduction of a scalar product provides ways to solve approximation problems more efficiently.
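One of the listed properties, the parallelogram equality ||x + y||^2 + ||x − y||^2 = 2||x||^2 + 2||y||^2, is easy to check numerically. The Python sketch below (the vectors are chosen arbitrarily) also shows that the 1-norm, which is not generated by a scalar product, violates it:

```python
# Numerical check of the parallelogram equality for the Euclidean norm,
# and its failure for the 1-norm (which no scalar product generates).
def norm2_sq(v):
    """Squared Euclidean norm, i.e. the scalar product (v, v)."""
    return sum(c * c for c in v)

def norm1(v):
    """1-norm: sum of absolute values of the components."""
    return sum(abs(c) for c in v)

x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]
xp = [a + b for a, b in zip(x, y)]   # x + y
xm = [a - b for a, b in zip(x, y)]   # x - y

lhs = norm2_sq(xp) + norm2_sq(xm)
rhs = 2 * norm2_sq(x) + 2 * norm2_sq(y)
print(lhs == rhs)    # True: the equality holds for the Euclidean norm

lhs1 = norm1(xp) ** 2 + norm1(xm) ** 2
rhs1 = 2 * norm1(x) ** 2 + 2 * norm1(y) ** 2
print(lhs1 == rhs1)  # False: the 1-norm violates the equality
```

This is exactly the dividing line mentioned in the text: a norm comes from a scalar product precisely when the parallelogram equality holds for all pairs of vectors.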

Definition. An infinite sequence of elements {x_n} in a linear normed space is said to converge in norm (simply to converge, or to have a limit) if there exists an element x such that for any ε > 0 there is a number N(ε) such that ||x_n − x|| < ε for n > N(ε).

Definition. A sequence of elements {x_n} is called fundamental if for any ε > 0 there is a number N(ε) such that ||x_n − x_m|| < ε for any n, m > N(ε) (Trenogin; Kolmogorov, Kantorovich, p. 48).

Definition. A Banach space is a structure in which any fundamental sequence converges in norm.

Definition. A Hilbert space is a structure in which any fundamental sequence converges in the norm generated by the scalar product.