Strong Duality and Slater’s Condition
Duality gap and strong duality
A previous blog[1] introduces weak duality in optimization: let \(p^*\) and \(d^*\) denote the optimal values of the primal optimization problem (a minimization) and the dual optimization problem (a maximization), respectively. Then weak duality expresses the following fact:
\[d^*\le p^*\notag\]This inequality holds for any pair of primal/dual optimization problems[2], and the difference between the primal and dual optimal values, \(p^*-d^*\ge0\), is sometimes called the duality gap[3][4].
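As a numerical sanity check, the sketch below (a hypothetical example, not from the original post) evaluates the dual function of a one-constraint problem on a grid and confirms that every dual value lower-bounds the primal optimum:

```python
# Check weak duality numerically for the problem
#   min x^2  s.t.  2 - x <= 0,
# whose primal optimum is p* = 4 at x = 2.
#
# Lagrangian: L(x, lam) = x^2 + lam*(2 - x). Minimizing over x
# (dL/dx = 0  =>  x = lam/2) gives the dual function
#   g(lam) = -lam^2/4 + 2*lam,  valid for lam >= 0.

p_star = 4.0

def dual(lam):
    # Dual function g(lam) = min_x L(x, lam), for lam >= 0.
    return -lam**2 / 4.0 + 2.0 * lam

# Weak duality: g(lam) <= p* for every lam >= 0.
for i in range(101):
    lam = 0.1 * i
    assert dual(lam) <= p_star + 1e-12

best = max(dual(0.1 * i) for i in range(101))
print(best)  # best dual bound found on the grid; approx. 4 near lam = 4
```

Here the best dual bound actually reaches \(p^*\), anticipating the strong-duality discussion below.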
Furthermore, for some special optimization problems, the duality gap is zero, i.e.,
\[d^*=p^*\notag\]in which case we say that strong duality holds (i.e., strong duality holds if and only if the duality gap is zero).
Slater’s condition and Slater’s theorem
A natural question to ask next is: what kind of conditions guarantee strong duality?
In the following, we discuss such conditions for convex optimization problems. Consider a convex optimization problem[5]:
\[\begin{split} \min\ &f(\boldsymbol{\mathrm{x}})\\ \text{s.t. } &g_i(\boldsymbol{\mathrm{x}})\le0,\quad i=1,\cdots,m\\ &h_i(\boldsymbol{\mathrm{x}})=0,\quad i=1,\cdots,p\\ \end{split}\label{eq1}\]Conditions that guarantee strong duality for convex optimization problems are called constraint qualifications[2], and one such constraint qualification is given by Slater's theorem[2][6][7] (or rather, Slater's condition is a specific example of a constraint qualification[8]).
Firstly, for optimization $\eqref{eq1}$, Slater's condition is[2]:
There exists a primal feasible $\boldsymbol{\mathrm{x}}$ for which each inequality constraint is strictly satisfied, i.e.,
\[g_i(\boldsymbol{\mathrm{x}})<0,\ \forall i\]In other words, Slater's condition states that there exists a feasible $\boldsymbol{\mathrm{x}}$ at which no inequality constraint is active. For example, the following case satisfies Slater's condition:
The opposite of Slater's condition is that no feasible solution strictly satisfies every inequality constraint, i.e., every feasible solution has $g_i(\boldsymbol{\mathrm{x}})=0$ for some $g_i$. For example, the following case doesn't meet Slater's condition:
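As a concrete check, the sketch below (a hypothetical helper, not from the original post) tests Slater's condition at a candidate point by verifying that every inequality constraint is strictly satisfied there:

```python
# Hypothetical helper: check strict feasibility g_i(x) < 0 at a point x.

def satisfies_slater_at(x, constraints):
    """Return True if g(x) < 0 for every inequality constraint g."""
    return all(g(x) < 0 for g in constraints)

# Constraints of the worked example below: 2 - x <= 0 and x - 4 <= 0.
gs = [lambda x: 2 - x, lambda x: x - 4]

print(satisfies_slater_at(3.0, gs))  # x = 3 is strictly feasible -> True
print(satisfies_slater_at(2.0, gs))  # g_1(2) = 0 is active -> False
```

Note that finding one strictly feasible point suffices; in general such a point would come from a feasibility solver rather than by inspection.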
Then, based on Slater's condition, we have Slater's theorem[6]:
If the problem is convex and Slater’s condition is satisfied, then strong duality holds.
From this we can also conclude that if every feasible solution has $g_i(\boldsymbol{\mathrm{x}})=0$ for some $i$ (i.e., some constraint is always active), then strong duality may or may not hold. In other words, Slater's condition is a sufficient, but not necessary, condition for strong duality of convex optimization problems.
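To see that the condition is not necessary, consider the following standard example (not from the original post), where Slater's condition fails yet strong duality holds:

\[\begin{split} \min_x\ &x\\ \text{s.t. }\ &x^2\le0 \end{split}\notag\]

The only feasible point is $x=0$, so $p^*=0$, and Slater's condition fails because no $x$ satisfies $x^2<0$. Nevertheless, the dual function is \(g(\lambda)=\inf_x\,(x+\lambda x^2)=-\frac{1}{4\lambda}\) for \(\lambda>0\) (and \(-\infty\) for \(\lambda=0\)), so \(d^*=\sup_{\lambda>0}-\frac{1}{4\lambda}=0=p^*\): the duality gap is zero even though Slater's condition does not hold.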
Here is an example. For the optimization,
\[\begin{split} \min_{x}\ &x^2\\ \text{s.t. }\ & \left\{ \begin{split} &2-x\le0\\ &x-4\le0 \end{split}\right. \end{split}\notag\]This is a convex optimization problem, and at $x=3$ we have $g_1(x)<0$ and $g_2(x)<0$, so Slater's condition is satisfied. As a result, strong duality holds for this optimization problem.
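We can verify this conclusion numerically. The sketch below compares the primal optimum with the best dual value found on a grid of multipliers (a numerical check, not a proof):

```python
# Verify strong duality for the example above.
#
# Primal: min x^2  s.t.  2 - x <= 0,  x - 4 <= 0,  so p* = 4 at x = 2.
# Lagrangian: L(x, l1, l2) = x^2 + l1*(2 - x) + l2*(x - 4).
# Minimizing over x (x* = (l1 - l2)/2) gives the dual function
#   g(l1, l2) = -(l1 - l2)^2/4 + 2*l1 - 4*l2,  for l1, l2 >= 0.

p_star = 4.0

def dual(l1, l2):
    return -(l1 - l2) ** 2 / 4.0 + 2.0 * l1 - 4.0 * l2

# Grid-search the dual over l1, l2 in [0, 8].
d_star = max(dual(0.1 * i, 0.1 * j) for i in range(81) for j in range(81))
print(d_star)  # attains p* = 4 at (l1, l2) = (4, 0): zero duality gap
assert abs(d_star - p_star) < 1e-9
```

The dual optimum is attained at $(\lambda_1,\lambda_2)=(4,0)$: only the active constraint $2-x\le0$ carries a nonzero multiplier, consistent with complementary slackness.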
References
- [2] Chuong B. Do, "Convex Optimization Overview (cnt'd)", Review Notes and Supplementary Notes, CS229 Machine Learning, Stanford University, October 26, 2007, pp. 5-6.
- [5] Convex Optimization Problem: "For a convex optimization problem, all locally optimal points are globally optimal."