Optimising under arbitrarily many constraint equations

Published on September 12, 2024 2:59 PM GMT

Say we have a multivariate function to optimise, like f = x² + y² + z², under some constraints, like g₁ = x² + y² - z and g₂ = y + z - 1, both to equal zero.

The common method is that of Lagrange multipliers.

1. Add a variable λᵢ for each constraint function — here, we'll use λ₁ and λ₂.
2. Declare the set of equations ∇f = λ₁∇g₁ + λ₂∇g₂.
3. Bring in the equations g₁ = 0 and g₂ = 0 (etc, if there are more constraints).
4. Solve for λ₁ and λ₂ and, more importantly, the inputs x, y, z.
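
Here is a minimal sketch of that recipe in Python with sympy, assuming nothing beyond the definitions above (the code is illustrative, not part of the original post):

    import sympy as sp

    x, y, z, l1, l2 = sp.symbols('x y z lambda_1 lambda_2', real=True)

    f  = x**2 + y**2 + z**2
    g1 = x**2 + y**2 - z
    g2 = y + z - 1

    def grad(h):
        # gradient as a column vector over (x, y, z)
        return sp.Matrix([sp.diff(h, v) for v in (x, y, z)])

    # Steps 2 and 3: grad f = lambda_1*grad g1 + lambda_2*grad g2, componentwise,
    # together with the constraint equations themselves.
    system = list(grad(f) - l1 * grad(g1) - l2 * grad(g2)) + [g1, g2]
    for lhs in system:
        print(sp.Eq(lhs, 0))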

Lagrange multipliers annoy me, insofar as they introduce extra variables. There is another way — arguably more direct, if perhaps more tedious in calculation and less often taught. I found it alone, tho surely someone else did first — probably Euler.

Lagrange, anyway

For the sake of a standard answer to check against, let's use Lagrange multipliers.

The gradient of f is ∇f = [2x, 2y, 2z]. Likewise, ∇g₁ = [2x, 2y, -1], and ∇g₂ = [0, 1, 1]. So step 2 gives these equations:

    2x = 2λ₁x
    2y = 2λ₁y + λ₂
    2z = -λ₁ + λ₂

It readily follows that λ₁ = 1 or x = 0.

If λ₁ = 1, then λ₂ = 0, and z = -1/2. By the second constraint, y + z - 1 = 0, find that y = 3/2. By the first constraint, x² + y² - z = 0, find that x² = -11/4, which is a contradiction for real inputs.

If x = 0, then, by the first constraint, z = y², and, by the second constraint, y² + y - 1 = 0, so y = (-1 ± √5)/2 and z = (3 ∓ √5)/2.
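
As a quick numeric sanity check of those two points (a minimal sketch, standard library only):

    from math import sqrt

    # The two real critical points found above: x = 0, y = (-1 ± sqrt(5))/2, z = 1 - y.
    for y in ((-1 + sqrt(5)) / 2, (-1 - sqrt(5)) / 2):
        x, z = 0.0, 1 - y
        # Both constraints should vanish, up to floating-point error.
        print(x**2 + y**2 - z, y + z - 1)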

Determinants

With one constraint, the method of Lagrange multipliers reduces to ∇f = λ∇g. But ∇f and ∇g are vectors, which differ by a scalar factor iff they point in the same (or directly opposite) directions, iff (for three dimensions) the cross product ∇f × ∇g = 0, iff (for two dimensions) the two-by-two determinant det[∇f ∇g] = 0.

With two constraints, the method asks when ∇f = λ₁∇g₁ + λ₂∇g₂. That would mean ∇f is a linear combination of ∇g₁ and ∇g₂, which it is iff ∇f, ∇g₁, and ∇g₂ are all coplanar, iff (for three dimensions) the three-by-three determinant det[∇f ∇g₁ ∇g₂] = 0.

As it happens, the cross product is a wolf that can wear determinant's clothing. Just fill one column with basis vectors:

    ∇f × ∇g = det | e₁  ∂f/∂x  ∂g/∂x |
                  | e₂  ∂f/∂y  ∂g/∂y |
                  | e₃  ∂f/∂z  ∂g/∂z |
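
A minimal sympy sketch of that identity, with plain symbols e1, e2, e3 standing in for the basis vectors:

    import sympy as sp

    e1, e2, e3 = sp.symbols('e1 e2 e3')
    u1, u2, u3, v1, v2, v3 = sp.symbols('u1 u2 u3 v1 v2 v3')

    M = sp.Matrix([
        [e1, u1, v1],
        [e2, u2, v2],
        [e3, u3, v3],
    ])

    # Expanding along the first column reproduces the cross product of
    # u = [u1, u2, u3] and v = [v1, v2, v3]:
    # e1*(u2*v3 - u3*v2) + e2*(u3*v1 - u1*v3) + e3*(u1*v2 - u2*v1)
    print(sp.expand(M.det()))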

Likewise, with zero constraints, the "method of Lagrange multipliers" — really, the first-derivative test — asks when ∇f = 0. Fill a three-by-three matrix with two columns of basis vectors:

    det | e₁  e₁  ∂f/∂x |
        | e₂  e₂  ∂f/∂y |
        | e₃  e₃  ∂f/∂z |

Suppose the basis vectors multiply like the cross product, as in geometric algebra. Then the determinant, rather than the usual 0 for a matrix with two equal columns, turns out to equal that ordinary column vector ∇f (up to a scalar constant).
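
That claim checks out by brute force. Below is a minimal sketch (the helper names are mine) that expands the determinant by the Leibniz formula while multiplying the basis-vector entries like the cross product; the scalar constant comes out to 2:

    from itertools import permutations
    import sympy as sp

    fx, fy, fz = sp.symbols('f_x f_y f_z')
    e = sp.symbols('e1 e2 e3')  # stand-ins for the basis vectors

    def basis_product(i, j):
        # e_i e_j under cross-product rules: e_i e_i = 0, e1 e2 = e3 (cyclically).
        if i == j:
            return sp.Integer(0)
        k = 3 - i - j                           # the remaining index
        sign = 1 if (j - i) % 3 == 1 else -1    # cyclic order is positive
        return sign * e[k]

    def parity(p):
        inversions = sum(1 for a in range(3) for b in range(a + 1, 3) if p[a] > p[b])
        return -1 if inversions % 2 else 1

    grad_f = (fx, fy, fz)
    # Leibniz expansion with columns (basis, basis, grad f), keeping factor order.
    det = sum(parity(p) * basis_product(p[0], p[1]) * grad_f[p[2]]
              for p in permutations(range(3)))
    print(sp.expand(det))  # 2*e1*f_x + 2*e2*f_y + 2*e3*f_z, i.e. 2 times grad f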

In every scenario so far — and I claim this holds for higher dimensions and more constraints — the core equations to optimise under constraints are the actual constraint equations, along with a single determinant. The matrix has its columns filled with the gradient of the function to optimise, each constraint gradient, and copies of the basis vectors, in order, to make it square.
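
As a hedged sketch of that claim in the square case (as many constraints as variables minus one, so no basis-vector columns are needed), here is a generic helper (the name critical_system is mine, not the post's), tried on a two-dimensional toy problem:

    import sympy as sp

    def critical_system(f, constraints, variables):
        # Columns of the matrix: grad f, then each constraint gradient.
        columns = [[sp.diff(h, v) for v in variables] for h in [f] + constraints]
        return [sp.Matrix(columns).T.det()] + constraints

    # Toy check in two dimensions: optimise f = x*y on the unit circle.
    x, y = sp.symbols('x y', real=True)
    system = critical_system(x * y, [x**2 + y**2 - 1], (x, y))
    print(system[0])                            # -2*x**2 + 2*y**2
    print(sp.solve(system, [x, y], dict=True))  # the four points with x² = y² = 1/2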

Example

Fill a matrix with those gradients given above. We'll take its determinant.

    | 2x  2x   0 |
    | 2y  2y   1 |
    | 2z  -1   1 |

The determinant, when simplified, is 2x(2z + 1). The equations to consider are just

    2x(2z + 1) = 0
    x² + y² - z = 0
    y + z - 1 = 0

The first tells us that x = 0 or z = -1/2. If x = 0, then z = y², so y = (-1 ± √5)/2, and z = (3 ∓ √5)/2. If z = -1/2, then y = 3/2 and x is imaginary. These are the same results as above; the method works, using only the variables given in the problem.
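
Checking with sympy, as a minimal sketch:

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)

    # Columns: grad f, grad g1, grad g2, as above.
    M = sp.Matrix([
        [2*x, 2*x, 0],
        [2*y, 2*y, 1],
        [2*z, -1,  1],
    ])
    print(sp.factor(M.det()))  # 2*x*(2*z + 1)

    system = [M.det(), x**2 + y**2 - z, y + z - 1]
    print(sp.solve(system, [x, y, z], dict=True))  # x = 0, y = (-1 ± sqrt(5))/2, z = 1 - y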


