Local maximum. Determining the points of local extrema of a function of several variables

LOCAL MAXIMUM

(local maximum) A value of a function that is greater than any neighboring value of its argument or set of arguments. For y = f(x), dy/dx = 0 is a necessary condition for a local maximum; given this condition, a sufficient condition for a local maximum is d²y/dx² < 0. A local maximum may also be an absolute maximum if there is no value of x at which y is greater; however, this need not be the case. Consider the function y = x³ - 3x. Here dy/dx = 0 when x² = 1, and d²y/dx² = 6x, so y has a maximum at x = -1, but this is only a local, not an absolute, maximum, since y can become arbitrarily large when x is given a large enough positive value. See also the figure for the maximum article.
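The local-versus-absolute distinction in this example can be checked numerically; a minimal sketch for y = x³ - 3x (the step size is an assumption):

```python
def y(x):
    return x**3 - 3*x

# y has a stationary point at x = -1 (dy/dx = 3x^2 - 3 vanishes at x = ±1).
# Check that x = -1 is a local maximum: nearby values are smaller.
eps = 1e-3
assert y(-1) > y(-1 - eps) and y(-1) > y(-1 + eps)

# But it is not an absolute maximum: a large enough positive x gives a larger value.
assert y(10) > y(-1)
print(y(-1), y(10))  # prints: 2 970
```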


Economics: A Dictionary. Moscow: "INFRA-M", "Ves Mir" Publishing House. J. Black. General editorship: Doctor of Economics I. M. Osadchaya. 2000.



See what "LOCAL MAXIMUM" is in other dictionaries:

    local maximum- - [A.S. Goldberg. English Russian Energy Dictionary. 2006] Topics energy in general EN local maximum ... Technical Translator's Handbook

    local maximum- lokalusis maksimumas statusas T sritis automatika atitikmenys: engl. local maximum vok. Lokalmaximum, n rus. local maximum, m pranc. maximum local, m … Automatikos terminų žodynas

    local maximum- vietinė smailė statusas T sritis fizika atitikmenys: angl. local maximum; local peak vok. locales Maximum, n rus. local maximum, m pranc. maximum local, m; pic local, m … Fizikos terminų žodynas

    Local maximum, local minimum- (local maximum, local minimum) see Function extremum... Economic and Mathematical Dictionary

    - (maximum) The highest value of the function that it takes for any value of its arguments. The maximum can be local or absolute. For example, the function y=1–x2 has an absolute maximum y=1 at x=0; there is no other value of x that ... ... Economic dictionary

    - (local minimum) The value of the function, which is less than any neighboring value of its argument or set of arguments, dy/dx = 0, is a necessary condition for achieving a local minimum y=f(x); subject to this condition, sufficient ... ... Economic dictionary

    Extremum (Latin extremum extreme) in mathematics is the maximum or minimum value of a function on a given set. The point at which the extremum is reached is called the extremum point. Accordingly, if the minimum extremum point is reached ... ... Wikipedia

    Local search algorithms are a group of algorithms in which the search is carried out only on the basis of the current state, and previously passed states are not taken into account and are not remembered. The main goal of the search is not to find the optimal path to ... ... Wikipedia

    - (global maximum) The value of the function, equal to or higher than its values ​​taken for any other argument values. A sufficient condition for the maximum of a function of one argument, which consists in the fact that its first derivative in ... ... Economic dictionary

    - (eng. trend direction, trend) direction, trend of development of the political process, phenomenon. Has a mathematical expression. The most popular definition of trend (trend) is the definition from Dow theory. Uptrend... ... Political science. Dictionary.

The derivative is the limit of the ratio of the increment of the function to the increment of the argument as the latter tends to zero. To find it, use the table of derivatives. For example, the derivative of the function y = x³ is y′ = 3x².

Equate this derivative to zero (in this case, 3x² = 0).

Find the values of the variable at which this derivative equals 0. To do this, determine the numbers that, substituted for x, turn the whole expression into zero. For example:

2 - 2x² = 0
2(1 - x)(1 + x) = 0
x1 = 1, x2 = -1

Mark the obtained values on the coordinate line and determine the sign of the derivative on each of the resulting intervals. The marked points divide the line into intervals; to find the sign on an interval, substitute into the derivative any value from that interval. For example, for the previous derivative you can choose the value -2 for the interval up to -1, the value 0 for the interval from -1 to 1, and the value 2 for values greater than 1. Substitute these numbers into the derivative and find out its sign. In this case, at x = -2 the derivative equals 2 - 2·(-2)² = -6, i.e. it is negative, and a minus sign goes on this interval; at x = 0 the value equals 2, and a plus sign goes on this interval; at x = 2 the derivative again equals -6 and a minus goes on the last interval.

If, when passing through a point on the coordinate line, the derivative changes its sign from minus to plus, then this is a minimum point, and if from plus to minus, then this is a maximum point.
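The interval sign test described above can be sketched as follows (the derivative 2 - 2x² from the example is used; the choice of test points is an assumption):

```python
# Derivative from the example above: y' = 2 - 2x^2, zero at x = -1 and x = 1.
def dydx(x):
    return 2 - 2 * x**2

critical = [-1, 1]
# One test point inside each interval: (-inf, -1), (-1, 1), (1, +inf).
test_points = [-2, 0, 2]
signs = ['+' if dydx(t) > 0 else '-' for t in test_points]
print(signs)  # prints: ['-', '+', '-']

# A '-' to '+' change marks a minimum, a '+' to '-' change marks a maximum.
for point, left, right in zip(critical, signs, signs[1:]):
    kind = 'minimum' if (left, right) == ('-', '+') else 'maximum'
    print(point, kind)
```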


The maximum points of a function, together with its minimum points, are called extremum points. At these points the function changes its behavior. Extrema are determined on bounded numerical intervals and are always local.

Instruction

The process of finding local extrema is called the investigation of a function and is performed by analyzing the first and second derivatives of the function. Before starting the investigation, make sure that the specified range of argument values belongs to the admissible values. For example, for the function F = 1/x the argument value x = 0 is invalid; for the function Y = tan(x), the argument cannot take the value x = 90°.

Make sure the function Y is differentiable over the entire given interval and find the first derivative Y′. It is obvious that before reaching a local maximum the function increases, and when passing through the maximum it becomes decreasing. In its physical meaning, the first derivative characterizes the rate of change of the function: while the function is increasing, this rate is positive; when the function passes through a local maximum and begins to decrease, the rate of change becomes negative. Thus the rate of change of the function passes through zero at the point of the local maximum.

For example, the function Y = -x² + x + 1 has on the interval from -1 to 1 a continuous derivative Y′ = -2x + 1. At x = 1/2 the derivative is zero, and when passing through this point the derivative changes sign from "+" to "-". The second derivative of the function is Y″ = -2. Build a point-by-point graph of the function Y = -x² + x + 1 and check whether the point with abscissa x = 1/2 is a local maximum on the given segment of the numerical axis.
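A quick numerical check of this example (the probe offsets are assumptions):

```python
def Y(x):
    return -x**2 + x + 1

def dY(x):
    return -2 * x + 1

# The derivative changes sign from '+' to '-' when passing through x = 1/2 ...
assert dY(0.4) > 0 and dY(0.6) < 0
# ... so x = 1/2 is a local maximum: neighboring values on [-1, 1] are smaller.
assert Y(0.5) > Y(0.5 - 1e-3) and Y(0.5) > Y(0.5 + 1e-3)
print(Y(0.5))  # prints: 1.25
```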

A function $z = f(x, y)$ is said to have a local maximum (minimum) at an interior point $M_0$ of a domain $D$ if there exists a neighborhood of the point $M_0$ such that for each point $M$ of this neighborhood the inequality $f(M) \leqslant f(M_0)$ (respectively, $f(M) \geqslant f(M_0)$) is satisfied.

If the function has a local maximum or a local minimum at the point $M_0$, then we say that it has a local extremum (or simply an extremum) at this point.

Theorem (a necessary condition for the existence of an extremum). If the differentiable function $z = f(x, y)$ reaches an extremum at the point $M_0$, then each first-order partial derivative of the function vanishes at this point.

The points at which all first-order partial derivatives vanish are called stationary points of the function $z = f(x, y)$. The coordinates of these points can be found by solving the system of equations
$$\frac{\partial f}{\partial x} = 0, \qquad \frac{\partial f}{\partial y} = 0.$$

The necessary condition for the existence of an extremum in the case of a differentiable function can be briefly formulated as follows: $df(M_0) = 0$.

There are cases when at certain points some partial derivatives have infinite values or do not exist (while the rest are equal to zero). Such points are called critical points of the function. These points should also be considered "suspicious" for an extremum, along with the stationary ones.

In the case of a function of two variables, the necessary condition for an extremum, namely the vanishing at the extremum point of the partial derivatives (of the differential), has a geometric interpretation: the tangent plane to the surface $z = f(x, y)$ at the extremum point must be parallel to the plane $Oxy$.

20. Sufficient conditions for the existence of an extremum

The fulfillment of the necessary condition for the existence of an extremum at some point does not at all guarantee the existence of an extremum there. As an example, we can take the everywhere differentiable function $z = xy$. Both its partial derivatives and the function itself vanish at the point $(0, 0)$. However, in any neighborhood of this point the function takes both positive values (greater than $z(0, 0) = 0$) and negative values (smaller than $z(0, 0) = 0$). Therefore, at this point, by definition, there is no extremum. Hence it is necessary to know sufficient conditions under which a point suspected of being an extremum is an extremum point of the function under study.

Consider the case of a function of two variables. Assume that the function $z = f(x, y)$ is defined, continuous, and has continuous partial derivatives up to and including the second order in a neighborhood of some point $M_0(x_0, y_0)$, which is a stationary point of the function, that is, satisfies the conditions
$$\frac{\partial f}{\partial x}(M_0) = 0, \qquad \frac{\partial f}{\partial y}(M_0) = 0.$$

Let us introduce the notation:
$$A = \frac{\partial^2 f}{\partial x^2}(M_0), \qquad B = \frac{\partial^2 f}{\partial x \partial y}(M_0), \qquad C = \frac{\partial^2 f}{\partial y^2}(M_0), \qquad \Delta = AC - B^2.$$

Theorem (sufficient conditions for the existence of an extremum). Let the function $z = f(x, y)$ satisfy the above conditions, namely: it is differentiable in some neighborhood of the stationary point $M_0$ and twice differentiable at the point itself. Then, if $\Delta > 0$, the function reaches at the point $M_0$

a local maximum if $A < 0$,

and a local minimum if $A > 0$.

If $\Delta < 0$, there is no extremum at $M_0$; if $\Delta = 0$, further investigation is required.

In general, for a function of $n$ variables, a sufficient condition for the existence at a stationary point of a local minimum (maximum) is the positive (negative) definiteness of the second differential at that point.

In other words, the following statement is true.

Theorem. If at the point $M_0$ for the function $z = f(x, y)$
$$d^2 f(M_0) > 0$$
for any $dx$, $dy$ not equal to zero at the same time, then at this point the function has a minimum (similarly, a maximum if $d^2 f(M_0) < 0$).
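The two-variable criterion above can be sketched as a small helper (the function name `classify_2d` and the sample values are illustrative assumptions):

```python
def classify_2d(A, B, C):
    """Classify a stationary point of f(x, y) from A = f_xx, B = f_xy, C = f_yy."""
    delta = A * C - B**2
    if delta > 0:
        return 'local minimum' if A > 0 else 'local maximum'
    if delta < 0:
        return 'no extremum'
    return 'further investigation required'

# For z = x^2 + y^2 at (0, 0): A = 2, B = 0, C = 2.
print(classify_2d(2, 0, 2))   # prints: local minimum
# For z = x*y at (0, 0): A = 0, B = 1, C = 0.
print(classify_2d(0, 1, 0))   # prints: no extremum
```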

Example 18. Find the local extremum points of a function $z = f(x, y)$.

Solution. Find the partial derivatives of the function and equate them to zero. Solving this system, we find two possible extremum points.

Find the second-order partial derivatives of the function and compute $A$, $B$, $C$ at each of the stationary points.

At the first stationary point $\Delta = AC - B^2 = 0$; therefore, further investigation is required for this point. The value of the function at this point is zero. Further, in any neighborhood of this point the function takes values both greater and smaller than this value, and hence, at this point, by definition, the function has no local extremum.

At the second stationary point $\Delta > 0$; therefore, since $A < 0$, the function has a local maximum at this point.

For a function f(x) of many variables, the point x is a vector, f'(x) is the vector of first derivatives (the gradient) of f(x), and f''(x) is the symmetric matrix of second partial derivatives (the Hessian matrix) of f(x).
For a function of several variables, the optimality conditions are formulated as follows.
A necessary condition for local optimality. Let f(x) be differentiable at the point x* ∈ R^n. If x* is a local extremum point, then f'(x*) = 0.
As before, the points that solve the system of equations f'(x) = 0 are called stationary. The nature of a stationary point x* is related to the sign-definiteness of the Hessian matrix f''(x*).
The sign-definiteness of a matrix A is determined by the signs of the quadratic form Q(α) = ⟨Aα, α⟩ for nonzero α ∈ R^n. Here and below, ⟨x, y⟩ denotes the scalar product of the vectors x and y; by definition, ⟨x, y⟩ = x_1·y_1 + x_2·y_2 + ... + x_n·y_n.

A matrix A is positively (non-negatively) definite if Q(α) > 0 (Q(α) ≥ 0) for all nonzero α ∈ R^n; negatively (non-positively) definite if Q(α) < 0 (Q(α) ≤ 0) for all nonzero α ∈ R^n; and indefinite if Q(α) > 0 for some nonzero α ∈ R^n and Q(α) < 0 for other nonzero α ∈ R^n.
A sufficient condition for local optimality. Let f(x) be twice differentiable at the point x* ∈ R^n, and let f'(x*) = 0, i.e. x* is a stationary point. Then if the matrix f''(x*) is positive (negative) definite, x* is a local minimum (maximum) point; if the matrix f''(x*) is indefinite, then x* is a saddle point.
If the matrix f''(x*) is non-negatively (non-positively) definite, then determining the nature of the stationary point x* requires the study of higher-order derivatives.
To check the sign-definiteness of a matrix, as a rule the Sylvester criterion is used. According to this criterion, a symmetric matrix A is positive definite if and only if all of its angular (leading principal) minors are positive. Here the angular minor of order k of the matrix A is the determinant of the matrix built from the elements of A standing at the intersection of its first k rows and first k columns. To check a symmetric matrix A for negative definiteness, one checks the matrix (-A) for positive definiteness.
So, the algorithm for determining the points of local extrema of a function of many variables is as follows.
1. Find f'(x).
2. Solve the system f'(x) = 0. As a result, the stationary points x_i are calculated (say, N of them).
3. Find f''(x); set i = 1.
4. Compute f''(x_i).
5. Calculate the angular minors of the matrix f''(x_i). If not all of the minors are nonzero, then determining the nature of the stationary point x_i requires the study of higher-order derivatives; in this case go to step 8. Otherwise, go to step 6.
6. Analyze the signs of the angular minors of f''(x_i). If f''(x_i) is positive definite, then x_i is a local minimum point; in this case go to step 8. Otherwise, go to step 7.
7. Calculate the angular minors of the matrix -f''(x_i) and analyze their signs.
If -f''(x_i) is positive definite, then f''(x_i) is negative definite and x_i is a local maximum point.
Otherwise, f''(x_i) is indefinite and x_i is a saddle point.
8. Check the condition that the nature of all stationary points has been determined: i = N.
If it is satisfied, the calculations are completed.
If it is not satisfied, set i = i + 1 and go to step 4.
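Steps 5-7 of the algorithm (the Sylvester check on an already computed Hessian matrix) can be sketched as follows; the sample matrices are illustrative assumptions:

```python
import numpy as np

def leading_minors(H):
    """Angular (leading principal) minors of a square matrix (step 5)."""
    n = H.shape[0]
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

def classify(H, tol=1e-12):
    minors = leading_minors(H)
    if any(abs(m) < tol for m in minors):            # degenerate case of step 5
        return 'higher-order derivatives required'
    if all(m > 0 for m in minors):                   # step 6: H positive definite
        return 'local minimum'
    if all(m > 0 for m in leading_minors(-H)):       # step 7: -H positive definite
        return 'local maximum'
    return 'saddle point'

print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))    # prints: local minimum
print(classify(np.array([[-2.0, 0.0], [0.0, -3.0]])))  # prints: local maximum
print(classify(np.array([[2.0, 0.0], [0.0, -3.0]])))   # prints: saddle point
```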

Example #1. Determine the points of local extrema of the function f(x) = x_1^3 - 2x_1x_2 + x_2^2 - 3x_1 - 2x_2.

1. f'(x) = (3x_1^2 - 2x_2 - 3; -2x_1 + 2x_2 - 2).
2. Solve the system 3x_1^2 - 2x_2 - 3 = 0, -2x_1 + 2x_2 - 2 = 0. The second equation gives x_2 = x_1 + 1; substituting into the first gives 3x_1^2 - 2x_1 - 5 = 0, whence x_1 = -1 or x_1 = 5/3. The stationary points are x^1 = (-1; 0) and x^2 = (5/3; 8/3).
3. f''(x) is the matrix with rows (6x_1, -2) and (-2, 2).
4. f''(x^1) has rows (-6, -2) and (-2, 2); its angular minors are -6 < 0 and (-6)·2 - (-2)·(-2) = -16 < 0, so f''(x^1) is not positive definite. For -f''(x^1) the angular minors are 6 and -16, so it is not positive definite either; hence f''(x^1) is indefinite and x^1 is a saddle point.
5. f''(x^2) has rows (10, -2) and (-2, 2); its angular minors are 10 > 0 and 10·2 - 4 = 16 > 0.
Since all angular minors are non-zero, the character of x^2 is determined by f''(x^2).
Since the matrix f''(x^2) is positive definite, x^2 is a local minimum point.
Answer: the function f(x) = x_1^3 - 2x_1x_2 + x_2^2 - 3x_1 - 2x_2 has a local minimum at the point x = (5/3; 8/3).
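Example #1 can be verified end to end with symbolic differentiation (using sympy here is my assumption; the source computes everything by hand):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**3 - 2*x1*x2 + x2**2 - 3*x1 - 2*x2

grad = [sp.diff(f, v) for v in (x1, x2)]          # step 1: f'(x)
points = sp.solve(grad, [x1, x2], dict=True)      # step 2: stationary points
H = sp.hessian(f, (x1, x2))                       # step 3: f''(x)

# The degenerate zero-minor case of step 5 is omitted in this sketch.
for p in points:
    Hp = H.subs(p)                                # step 4: f''(x_i)
    minors = [Hp[:k, :k].det() for k in (1, 2)]   # step 5: angular minors
    if all(m > 0 for m in minors):                # step 6: positive definite
        verdict = 'local minimum'
    elif all((-Hp)[:k, :k].det() > 0 for k in (1, 2)):  # step 7
        verdict = 'local maximum'
    else:
        verdict = 'saddle point'
    print(p, '->', verdict)
# (-1; 0) comes out as a saddle point and (5/3; 8/3) as a local minimum.
```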

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local maximum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \leqslant f\left(x_{0}\right)$ holds.

The local maximum is called strict if the neighborhood $U$ can be chosen in such a way that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) < f\left(x_{0}\right)$.

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local minimum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \geqslant f\left(x_{0}\right)$ holds.

A local minimum is said to be strict if the neighborhood $U$ can be chosen so that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) > f\left(x_{0}\right)$.

The term local extremum covers both concepts, local minimum and local maximum.

Theorem (necessary condition for an extremum of a differentiable function)
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. If the function $f$ is differentiable at the point $x_{0} \in E$ and has a local extremum at this point, then $$\text{d}f\left(x_{0}\right)=0.$$ The vanishing of the differential is equivalent to all partial derivatives being equal to zero, i.e. $$\frac{\partial f}{\partial x_{i}}\left(x_{0}\right)=0.$$

In the one-dimensional case this is Fermat's theorem. Denote $\phi\left(t\right) = f\left(x_{0}+th\right)$, where $h$ is an arbitrary vector. The function $\phi$ is defined for sufficiently small in modulus values of $t$. Moreover, with respect to $t$ it is differentiable, and $\phi'\left(t\right) = \text{d}f\left(x_{0}+th\right)h$.
Let $f$ have a local maximum at $x_{0}$. Then the function $\phi$ has a local maximum at $t = 0$ and, by Fermat's theorem, $\phi'\left(0\right)=0$.
So we have obtained that $\text{d}f\left(x_{0}\right) = 0$, i.e. the differential of the function $f$ at the point $x_{0}$ vanishes on every vector $h$.

Definition
The points at which the differential vanishes, i.e. those at which all partial derivatives are equal to zero, are called stationary. The critical points of a function $f$ are those points at which $f$ is either not differentiable or its differential is equal to zero. If a point is stationary, it does not yet follow that the function has an extremum at this point.

Example 1
Let $f\left(x,y\right)=x^{3}+y^{3}$. Then $\frac{\partial f}{\partial x} = 3x^{2}$, $\frac{\partial f}{\partial y} = 3y^{2}$, so $\left(0,0\right)$ is a stationary point, but the function has no extremum at this point. Indeed, $f\left(0,0\right) = 0$, but it is easy to see that in any neighborhood of the point $\left(0,0\right)$ the function takes both positive and negative values.

Example 2
The function $f\left(x,y\right) = x^{2} - y^{2}$ has the origin of coordinates as a stationary point, but it is clear that there is no extremum at this point.

Theorem (sufficient condition for an extremum).
Let a function $f$ be twice continuously differentiable on an open set $E \subset \mathbb{R}^{n}$. Let $x_{0} \in E$ be a stationary point and $$Q_{x_{0}}\left(h\right) \equiv \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right)h^{i}h^{j}.$$ Then

  1. if the quadratic form $Q_{x_{0}}$ is sign-definite, then the function $f$ has a local extremum at the point $x_{0}$, namely a minimum if the form is positive definite and a maximum if the form is negative definite;
  2. if the quadratic form $Q_{x_{0}}$ is indefinite, then the function $f$ has no extremum at the point $x_{0}$.

Let us use the expansion by the Taylor formula (12.7, p. 292). Taking into account that the first-order partial derivatives at the point $x_{0}$ are equal to zero, we get $$f\left(x_{0}+h\right)-f\left(x_{0}\right) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}+\theta h\right)h^{i}h^{j},$$ where $0<\theta<1$. Denote $a_{ij}=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right)$. By Schwarz's theorem (12.6, pp. 289-290), $a_{ij}=a_{ji}$. Denote $$\alpha_{ij}\left(h\right)=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}+\theta h\right)-\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right).$$ By assumption, all second-order partial derivatives are continuous, and therefore $$\lim_{h \rightarrow 0} \alpha_{ij}\left(h\right)=0. \quad \left(1\right)$$ We obtain $$f\left(x_{0}+h\right)-f\left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}}\left(h\right) + \sum_{i=1}^n \sum_{j=1}^n \alpha_{ij}\left(h\right)h_{i}h_{j}\right].$$ Denote $$\epsilon\left(h\right)=\frac{1}{|h|^{2}}\sum_{i=1}^n \sum_{j=1}^n \alpha_{ij}\left(h\right)h_{i}h_{j}.$$ Then $$|\epsilon\left(h\right)| \leq \sum_{i=1}^n \sum_{j=1}^n |\alpha_{ij}\left(h\right)|$$ and, by relation $\left(1\right)$, $\epsilon\left(h\right) \rightarrow 0$ as $h \rightarrow 0$. Finally we get $$f\left(x_{0}+h\right)-f\left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}}\left(h\right) + |h|^{2}\epsilon\left(h\right)\right]. \quad \left(2\right)$$ Suppose that $Q_{x_{0}}$ is a positive-definite form. By the lemma on positive-definite quadratic forms (12.8.1, p. 295, Lemma 1), there exists a positive number $\lambda$ such that $Q_{x_{0}}\left(h\right) \geqslant \lambda|h|^{2}$ for every $h$.
Therefore $$f\left(x_{0}+h\right)-f\left(x_{0}\right) \geq \frac{1}{2}|h|^{2}\left(\lambda+\epsilon\left(h\right)\right).$$ Since $\lambda>0$ and $\epsilon\left(h\right) \rightarrow 0$ as $h \rightarrow 0$, the right-hand side is positive for any vector $h$ of sufficiently small length.
Thus, we have come to the conclusion that in some neighborhood of the point $x_{0}$ the inequality $f\left(x\right) > f\left(x_{0}\right)$ holds whenever $x \neq x_{0}$ (here we put $x=x_{0}+h$). This means that at the point $x_{0}$ the function has a strict local minimum, and thus the first part of the theorem is proved.
Suppose now that $Q_{x_{0}}$ is an indefinite form. Then there are vectors $h_{1}$, $h_{2}$ such that $Q_{x_{0}}\left(h_{1}\right)=\lambda_{1}>0$ and $Q_{x_{0}}\left(h_{2}\right)=\lambda_{2}<0$. In relation $\left(2\right)$ put $h=th_{1}$, $t>0$. Then we get $$f\left(x_{0}+th_{1}\right)-f\left(x_{0}\right) = \frac{1}{2}\left[t^{2}\lambda_{1} + t^{2}|h_{1}|^{2}\epsilon\left(th_{1}\right)\right] = \frac{1}{2}t^{2}\left[\lambda_{1} + |h_{1}|^{2}\epsilon\left(th_{1}\right)\right].$$ For sufficiently small $t>0$ the right-hand side is positive. This means that in any neighborhood of the point $x_{0}$ the function $f$ takes values $f\left(x\right)$ greater than $f\left(x_{0}\right)$.
Similarly, taking $h=th_{2}$, we obtain that in any neighborhood of the point $x_{0}$ the function $f$ also takes values less than $f\left(x_{0}\right)$. This, together with the previous step, means that the function $f$ does not have an extremum at the point $x_{0}$.

Let us consider a particular case of this theorem for a function $f\left(x,y\right)$ of two variables defined in some neighborhood of the point $\left(x_{0},y_{0}\right)$ and having continuous partial derivatives of the first and second orders there. Let $\left(x_{0},y_{0}\right)$ be a stationary point and let $$a_{11}= \frac{\partial^{2} f}{\partial x^{2}}\left(x_{0},y_{0}\right), \quad a_{12}=\frac{\partial^{2} f}{\partial x \partial y}\left(x_{0},y_{0}\right), \quad a_{22}=\frac{\partial^{2} f}{\partial y^{2}}\left(x_{0},y_{0}\right).$$ Then the previous theorem takes the following form.

Theorem
Let $\Delta=a_{11} \cdot a_{22} - a_{12}^2$. Then:

  1. if $\Delta>0$, then the function $f$ has a local extremum at the point $\left(x_{0},y_{0}\right)$, namely a minimum if $a_{11}>0$ and a maximum if $a_{11}<0$;
  2. if $\Delta<0$, then there is no extremum at the point $\left(x_{0},y_{0}\right)$. As in the one-dimensional case, for $\Delta=0$ an extremum may or may not exist.

Examples of problem solving

Algorithm for finding the extrema of a function of many variables:

  1. Find the stationary points;
  2. Find the second-order differential at all stationary points;
  3. Using the sufficient condition for an extremum of a function of several variables, examine the second-order differential at each stationary point.
  1. Investigate the function for an extremum: $f\left(x,y\right) = x^{3} + 8y^{3} - 6xy$.
    Solution

    Find the first-order partial derivatives: $$\frac{\partial f}{\partial x}=3x^{2} - 6y;$$ $$\frac{\partial f}{\partial y}=24y^{2} - 6x.$$ Compose and solve the system: $$\begin{cases}\frac{\partial f}{\partial x} = 0\\\frac{\partial f}{\partial y} = 0\end{cases} \Rightarrow \begin{cases}3x^{2} - 6y = 0\\24y^{2} - 6x = 0\end{cases} \Rightarrow \begin{cases}x^{2} - 2y = 0\\4y^{2} - x = 0\end{cases}$$ From the 2nd equation we express $x=4y^{2}$ and substitute into the 1st equation: $$\left(4y^{2}\right)^{2}-2y=0$$ $$16y^{4} - 2y = 0$$ $$8y^{4} - y = 0$$ $$y\left(8y^{3}-1\right)=0$$ As a result, 2 stationary points are obtained:
    1) $y=0 \Rightarrow x = 0$, $M_{1} = \left(0, 0\right)$;
    2) $8y^{3}-1=0 \Rightarrow y^{3}=\frac{1}{8} \Rightarrow y = \frac{1}{2} \Rightarrow x=1$, $M_{2} = \left(1, \frac{1}{2}\right)$.
    Let us check the fulfillment of the sufficient extremum condition:
    $$\frac{\partial^{2} f}{\partial x^{2}}=6x; \quad \frac{\partial^{2} f}{\partial x \partial y}=-6; \quad \frac{\partial^{2} f}{\partial y^{2}}=48y$$
    1) For the point $M_{1}= \left(0,0\right)$:
    $$A_{1}=\frac{\partial^{2} f}{\partial x^{2}}\left(0,0\right)=0; \quad B_{1}=\frac{\partial^{2} f}{\partial x \partial y}\left(0,0\right)=-6; \quad C_{1}=\frac{\partial^{2} f}{\partial y^{2}}\left(0,0\right)=0;$$
    $A_{1} \cdot C_{1} - B_{1}^{2} = -36<0$, hence there is no extremum at the point $M_{1}$.
    2) For the point $M_{2} = \left(1, \frac{1}{2}\right)$:
    $$A_{2}=\frac{\partial^{2} f}{\partial x^{2}}\left(1,\frac{1}{2}\right)=6; \quad B_{2}=\frac{\partial^{2} f}{\partial x \partial y}\left(1,\frac{1}{2}\right)=-6; \quad C_{2}=\frac{\partial^{2} f}{\partial y^{2}}\left(1,\frac{1}{2}\right)=24;$$
    $A_{2} \cdot C_{2} - B_{2}^{2} = 108>0$, so there is an extremum at the point $M_{2}$, and since $A_{2}>0$ it is a minimum.
    Answer: the point $M_{2}\left(1,\frac{1}{2}\right)$ is the minimum point of the function $f$.

  2. Investigate the function for an extremum: $f=y^{2} + 2xy - 4x - 2y - 3$.
    Solution

    Find the stationary points: $$\frac{\partial f}{\partial x}=2y - 4;$$ $$\frac{\partial f}{\partial y}=2y + 2x - 2.$$
    Compose and solve the system: $$\begin{cases}\frac{\partial f}{\partial x}= 0\\\frac{\partial f}{\partial y}= 0\end{cases} \Rightarrow \begin{cases}2y - 4= 0\\2y + 2x - 2 = 0\end{cases} \Rightarrow \begin{cases} y = 2\\y + x = 1\end{cases} \Rightarrow x = -1$$
    $M_{0}\left(-1, 2\right)$ is a stationary point.
    Let us check the fulfillment of the sufficient extremum condition: $$A=\frac{\partial^{2} f}{\partial x^{2}}\left(-1,2\right)=0; \quad B=\frac{\partial^{2} f}{\partial x \partial y}\left(-1,2\right)=2; \quad C=\frac{\partial^{2} f}{\partial y^{2}}\left(-1,2\right)=2;$$
    $A \cdot C - B^{2} = -4<0$, hence there is no extremum at the point $M_{0}$.
    Answer: there are no extrema.
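Both worked examples can be double-checked with the $\Delta = AC - B^2$ test, using the second-order partial derivatives computed above (the helper name `delta` is an assumption):

```python
def delta(A, B, C):
    """Delta = A*C - B^2 for the second partials A = f_xx, B = f_xy, C = f_yy."""
    return A * C - B**2

# Example 1, point M1 = (0, 0): A = 0, B = -6, C = 0.
print(delta(0, -6, 0))    # prints: -36  (negative: no extremum at M1)
# Example 1, point M2 = (1, 1/2): A = 6*1 = 6, B = -6, C = 48*(1/2) = 24.
print(delta(6, -6, 24))   # prints: 108  (positive with A > 0: minimum at M2)
# Example 2, point M0 = (-1, 2): A = 0, B = 2, C = 2.
print(delta(0, 2, 2))     # prints: -4   (negative: no extremum at M0)
```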
