Lagrangian relaxation

In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information such as a bound on the optimal value of the original problem.

The method penalizes violations of inequality constraints using Lagrange multipliers, which impose a cost on each violation. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem.

The problem of optimizing the value of the relaxed problem over the dual variables (the Lagrangian multipliers) is the Lagrangian dual problem.

Mathematical description

Given a linear programming problem in x \in \mathbb{R}^n with constraint matrix A \in \mathbb{R}^{m \times n}, of the following form:

max c^T x
s.t.
Ax \le b

If we split the constraints in A such that A_1 \in \mathbb{R}^{m_1 \times n}, A_2 \in \mathbb{R}^{m_2 \times n} and m_1 + m_2 = m, and split b accordingly into b_1 and b_2, we may write the system:

max c^T x
s.t.
(1) A_1 x \le b_1
(2) A_2 x \le b_2

We may introduce constraint (2) into the objective:

max c^T x+\lambda^T(b_2-A_2x)
s.t.
(1) A_1 x \le b_1

If we let \lambda = (\lambda_1, \ldots, \lambda_{m_2}) be nonnegative weights, the objective is penalized if constraint (2) is violated, and it is rewarded if the constraint is satisfied strictly. The above system is called the Lagrangian relaxation of our original problem.
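As a concrete illustration, for fixed multipliers the relaxed problem is just another linear program and can be handed to any LP solver. The sketch below uses SciPy's linprog on a small made-up instance; the data c, A_1, b_1, A_2, b_2 and the choice of \lambda are assumptions for illustration only, not taken from the article.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance (illustration only):
#   maximize c^T x  s.t.  (1) A1 x <= b1 (kept),  (2) A2 x <= b2 (relaxed),  x >= 0
c  = np.array([2.0, 3.0])
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
b1 = np.array([4.0, 4.0])
A2 = np.array([[1.0, 1.0]])
b2 = np.array([5.0])

lam = np.array([1.0])  # any fixed nonnegative multipliers

# Relaxed objective: c^T x + lam^T (b2 - A2 x) = (c - A2^T lam)^T x + lam^T b2.
# linprog minimizes, so we negate the x-coefficients; lam^T b2 is a constant.
res = linprog(-(c - A2.T @ lam), A_ub=A1, b_ub=b1, bounds=(0, None))
x_bar = res.x
print(x_bar, c @ x_bar + lam @ (b2 - A2 @ x_bar))  # here x_bar = (4, 4), value 17.0
```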

The LR solution as a bound

Of particular use is the property that for any fixed set of nonnegative values \tilde{\lambda}, the optimal value of the Lagrangian relaxation problem is no smaller than the optimal value of the original problem. To see this, let \hat{x} be the optimal solution to the original problem, and let \bar{x} be the optimal solution to the Lagrangian relaxation. We can then see that

c^T \hat{x} \leq c^T \hat{x} +\tilde{\lambda}^T(b_2-A_2 \hat{x} ) \leq c^T \bar{x} +\tilde{\lambda}^T(b_2-A_2 \bar{x} )

The first inequality holds because \hat{x} is feasible in the original problem, so b_2 - A_2\hat{x} \ge 0 and \tilde{\lambda} \ge 0, making the added term nonnegative; the second holds because \bar{x} is the optimal solution to the Lagrangian relaxation.
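This chain of inequalities is easy to check numerically. The following sketch (same made-up toy data as above, with an arbitrary \tilde{\lambda} \ge 0) solves both problems and confirms that the relaxed optimum is an upper bound:

```python
import numpy as np
from scipy.optimize import linprog

# Same hypothetical toy data as in the previous sketch.
c  = np.array([2.0, 3.0])
A1 = np.array([[1.0, 0.0], [0.0, 1.0]]); b1 = np.array([4.0, 4.0])
A2 = np.array([[1.0, 1.0]]);             b2 = np.array([5.0])

# Original problem: enforce all constraints.
orig = linprog(-c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
               bounds=(0, None))
original_value = -orig.fun  # 14.0 for this data

# Relaxation at an arbitrary fixed lam >= 0.
lam = np.array([1.5])
rel = linprog(-(c - A2.T @ lam), A_ub=A1, b_ub=b1, bounds=(0, None))
relaxed_value = c @ rel.x + lam @ (b2 - A2 @ rel.x)  # 15.5 for this data

assert relaxed_value >= original_value - 1e-9  # P(lam) bounds the optimum from above
```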

Iterating towards a solution of the original problem

The above inequality tells us that if we minimize the maximal value we obtain from the relaxed problem, we obtain a tighter bound on the objective value of our original problem. Thus we can address the original problem by instead exploring the partially dualized problem

min P(\lambda) s.t. \lambda \geq 0

where we define P(\lambda) as

max c^T x+\lambda^T(b_2-A_2x)
s.t.
(1) A_1 x \le b_1

A Lagrangian relaxation algorithm thus proceeds to explore the range of feasible \lambda values while seeking to minimize the result returned by the inner P problem. Each value returned by P is a candidate upper bound to the problem, the smallest of which is kept as the best upper bound. If we additionally employ a heuristic, perhaps seeded by the \bar{x} values returned by P, to find feasible solutions to the original problem, then we can iterate until the best upper bound and the cost of the best feasible solution converge to a desired tolerance.
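A common way to carry out this outer minimization is a projected subgradient method: since b_2 - A_2\bar{x} is a subgradient of P at \lambda, we step against it and project back onto \lambda \ge 0. A minimal sketch, again on the made-up data from the sketches above:

```python
import numpy as np
from scipy.optimize import linprog

# Same hypothetical toy data as in the previous sketches.
c  = np.array([2.0, 3.0])
A1 = np.array([[1.0, 0.0], [0.0, 1.0]]); b1 = np.array([4.0, 4.0])
A2 = np.array([[1.0, 1.0]]);             b2 = np.array([5.0])

def P(lam):
    """Inner problem: max c^T x + lam^T (b2 - A2 x)  s.t.  A1 x <= b1, x >= 0."""
    res = linprog(-(c - A2.T @ lam), A_ub=A1, b_ub=b1, bounds=(0, None))
    return c @ res.x + lam @ (b2 - A2 @ res.x), res.x

lam = np.zeros(len(b2))
best_bound = np.inf
for k in range(1, 201):
    value, x_bar = P(lam)
    best_bound = min(best_bound, value)       # keep the smallest upper bound seen
    subgrad = b2 - A2 @ x_bar                 # subgradient of P at lam
    lam = np.maximum(0.0, lam - subgrad / k)  # diminishing step, project onto lam >= 0

print(best_bound)  # approaches 14.0, the optimum of this toy LP
```

With a diminishing step size the bound approaches the optimal dual value; for a pure linear program this equals the primal optimum, while for harder problems (such as integer programs with relaxed coupling constraints) a duality gap may remain.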

Related methods

The augmented Lagrangian method is quite similar in spirit to the Lagrangian relaxation method, but adds an extra quadratic penalty term and updates the dual parameters \lambda in a more principled manner. It was introduced in the 1970s and has been used extensively.
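For a sense of the extra term, consider the equality-constrained variant max c^T x s.t. Ax = b (a standard textbook form; the inequality case needs extra care and is not spelled out here). The augmented Lagrangian is

L_\rho(x, \lambda) = c^T x + \lambda^T(b - Ax) - \frac{\rho}{2}\|Ax - b\|^2

and after each inner maximization x_k of L_\rho(\cdot, \lambda_k), the multipliers are updated by \lambda_{k+1} = \lambda_k + \rho(A x_k - b).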

The penalty method does not use dual variables; it removes the constraints and instead penalizes deviations from them. The method is conceptually simple, but augmented Lagrangian methods are usually preferred in practice, since the penalty method suffers from ill-conditioning as the penalty weight grows.
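In the notation of this article, a typical quadratic-penalty version (an illustrative textbook form, not taken from the article) would solve

max c^T x - \frac{\rho}{2}\|\max(0, A_2 x - b_2)\|^2
s.t.
(1) A_1 x \le b_1

for increasing penalty weights \rho > 0, where the \max is taken componentwise; enforcing constraint (2) exactly requires driving \rho toward infinity, which is the source of the ill-conditioning mentioned above.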

