Back To ADMM Home.

Consider the constrained convex optimization problems:

minimize    f(x)
subject to  x ∈ C

with variable x ∈ R^n, where f is a convex function and C is a convex set. This problem can be rewritten as:

minimize    f(x) + g(z)
subject to  x − z = 0

where g is the indicator function of the set C.

The augmented Lagrangian (using the scaled dual variable u) is:

L_ρ(x, z, u) = f(x) + g(z) + (ρ/2)‖x − z + u‖_2^2

The scaled-form ADMM updates are:

x^{k+1} := argmin_x ( f(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
z^{k+1} := Π_C(x^{k+1} + u^k)
u^{k+1} := u^k + x^{k+1} − z^{k+1}

With the primal residual and dual residual:

r^k = x^k − z^k,    s^k = ρ(z^k − z^{k−1}).
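Written out in code, the updates and residuals fit in a short loop. Below is a minimal NumPy sketch on a toy instance (minimize (1/2)‖x − a‖_2^2 subject to x ≥ 0, with made-up data a and ρ = 1; the problem data and names are illustrative assumptions, not from the text). The toy problem has the known closed-form solution max(a, 0), which the loop recovers:

```python
import numpy as np

# Toy problem (illustrative assumption, not from the text):
# minimize (1/2)||x - a||_2^2  subject to  x >= 0; solution is max(a, 0).
a = np.array([1.0, -2.0, 0.5, -0.3])
rho = 1.0
x = np.zeros(a.size)
z = np.zeros(a.size)
u = np.zeros(a.size)

for k in range(500):
    # x-update: argmin_x (1/2)||x - a||^2 + (rho/2)||x - z + u||^2 (closed form)
    x = (a + rho * (z - u)) / (1.0 + rho)
    z_old = z
    z = np.maximum(x + u, 0.0)      # z-update: projection onto C = {z >= 0}
    u = u + x - z                   # scaled dual update
    r = x - z                       # primal residual r^k
    s = rho * (z - z_old)           # dual residual s^k
    if np.linalg.norm(r) < 1e-9 and np.linalg.norm(s) < 1e-9:
        break

print(x)
```

The loop terminates when both residual norms fall below a tolerance, which is the standard ADMM stopping criterion.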

5.1 Quadratic Programming

Consider the QP problem:

minimize    (1/2)x^T P x + q^T x
subject to  Ax = b,  x ≥ 0

We form the functions f and g:

f(x) = (1/2)x^T P x + q^T x,    dom f = {x | Ax = b}
g(z) = I_C(z),    C = {x | x ≥ 0}

Then the ADMM form of the problem is:

minimize    f(x) + g(z)
subject to  x − z = 0

The ADMM updates are:

x^{k+1} := argmin_{x ∈ dom f} ( f(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
z^{k+1} := Π_C(x^{k+1} + u^k) = (x^{k+1} + u^k)_+
u^{k+1} := u^k + x^{k+1} − z^{k+1}

The x-update can be reduced to solving a linear system with an additional dual variable λ for the equality constraint Ax = b, using the first-order optimality conditions:

[ P + ρI   A^T ] [ x^+ ]   [ q − ρ(z^k − u^k) ]
[   A       0  ] [  λ  ] + [        −b        ] = 0
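As a sanity check, this KKT system can be assembled and solved directly. The random problem data below is an illustrative assumption; the printed norms verify the two first-order conditions (stationarity of the penalized objective, and Ax = b):

```python
import numpy as np

# Random QP data (illustrative assumption): P positive definite, A full row rank.
rng = np.random.default_rng(0)
n, m, rho = 5, 2, 1.0
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
zk = rng.standard_normal(n)
uk = rng.standard_normal(n)

# Assemble the KKT matrix; move the constant block to the right-hand side
KKT = np.block([[P + rho * np.eye(n), A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([rho * (zk - uk) - q, b])
sol = np.linalg.solve(KKT, rhs)
x_next, lam = sol[:n], sol[n:]

# Stationarity and primal feasibility should both be ~0
print(np.linalg.norm(P @ x_next + q + rho * (x_next - zk + uk) + A.T @ lam))
print(np.linalg.norm(A @ x_next - b))
```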

5.2 Test LP

Here we test the following LP problem:

minimize    c^T x
subject to  Ax = b,  x ≥ 0

with variable x ∈ R^500 and A ∈ R^{400×500}, i.e., 500 variables and 400 equality constraints.
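The linked code is in MATLAB; as a hedged sketch of the same scheme, here is the LP x-update (the P = 0 case of the QP KKT system above) in a full NumPy loop. The tiny instance below is an assumption, not the 500-variable test problem: c, A, b are chosen so the optimum is x* = (1, 0) with value 1.

```python
import numpy as np

# Tiny toy LP (illustrative assumption): minimize c^T x s.t. Ax = b, x >= 0,
# with known optimum x* = (1, 0) and optimal value 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n, m = 2, 1
rho = 1.0

# KKT matrix for the x-update: the LP is the P = 0 case of the QP update
KKT = np.block([[rho * np.eye(n), A.T],
                [A, np.zeros((m, m))]])

x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)
for k in range(1000):
    rhs = np.concatenate([rho * (z - u) - c, b])
    x = np.linalg.solve(KKT, rhs)[:n]       # x-update via the KKT system
    z_old = z
    z = np.maximum(x + u, 0.0)              # z-update: projection onto x >= 0
    u = u + x - z
    if (np.linalg.norm(x - z) < 1e-10
            and rho * np.linalg.norm(z - z_old) < 1e-10):
        break

print(c @ x)
```

In practice (as in the MATLAB code below) the KKT matrix is factored once and the factors are reused across iterations, since only the right-hand side changes.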

Code and Example.

We have the result of ADMM:

And the comparison:

method   optimal value   CPU time (s)
CVX      371.09          1.73
ADMM     370.96          2.01

5.3 Test QP

minimize    (1/2)x^T P x + q^T x
subject to  Ax = b,  x ≥ 0

with variable x ∈ R^500 and A ∈ R^{400×500}, i.e., 500 variables and 400 equality constraints.

Code and Example.

The x-update in MATLAB:

  % x-update: solve the KKT system; factor the KKT matrix once on the
  % first iteration and reuse the cached LU factors afterwards
  if k == 1
      tmp_A = [ P + rho*eye(n), A'; A, zeros(m) ];
      [L, U] = lu(tmp_A);
  end
  tmp_b = [ rho*(z - u) - q; b ];
  tmp = U \ (L \ tmp_b);
  x = tmp(1:n);

We have the result of ADMM:

And the comparison:

method   optimal value   CPU time (s)
CVX      351.98          21.5182
ADMM     348.82          0.166

5.4 Polyhedra Intersection

Here we consider the problem of finding a point in the intersection of two polyhedra, i.e., solving the following feasibility problem:

find        x
subject to  A_1 x ≤ b_1,  A_2 x ≤ b_2

which is equivalent to solving the following optimization problem:

minimize  I_{C_1}(x) + I_{C_2}(x),    where C_1 = {x | A_1 x ≤ b_1},  C_2 = {x | A_2 x ≤ b_2}

We can then put this in ADMM form:

minimize    I_{C_1}(x) + I_{C_2}(z)
subject to  x − z = 0

The updates are then:

x^{k+1} := argmin_x ( I_{C_1}(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
z^{k+1} := argmin_z ( I_{C_2}(z) + (ρ/2)‖x^{k+1} − z + u^k‖_2^2 )
u^{k+1} := u^k + x^{k+1} − z^{k+1}

Since the proximal operator of an indicator function is the projection onto the corresponding set, the updates become:

x^{k+1} := Π_{C_1}(z^k − u^k)
z^{k+1} := Π_{C_2}(x^{k+1} + u^k)
u^{k+1} := u^k + x^{k+1} − z^{k+1}
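When both projections have closed forms, no solver is needed inside the loop. Here is a minimal NumPy sketch of the three updates, assuming the simple illustrative sets C_1 = {x | x ≤ 1} (A_1 = I, b_1 = 1) and C_2 = {x | x ≥ 0} (A_2 = −I, b_2 = 0), whose intersection is the unit box; the sets and starting point are assumptions, not the test data:

```python
import numpy as np

# Illustrative sets with closed-form projections (assumption, not from the text):
# C1 = {x : x <= 1}, C2 = {x : x >= 0}; intersection is the unit box [0, 1]^n.
proj_C1 = lambda v: np.minimum(v, 1.0)
proj_C2 = lambda v: np.maximum(v, 0.0)

n = 4
z = np.array([3.0, -2.0, 0.5, 10.0])   # arbitrary starting point
u = np.zeros(n)
for k in range(200):
    x = proj_C1(z - u)                 # x-update: projection onto C1
    z = proj_C2(x + u)                 # z-update: projection onto C2
    u = u + x - z                      # scaled dual update
    if np.linalg.norm(x - z) < 1e-12:  # x = z => point in the intersection
        break

print(x)
```

For general polyhedra the two projections have no closed form, which is why the code below solves each projection subproblem with CVX.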

The x-update solves the following convex optimization problem:

minimize    ‖x − (z^k − u^k)‖_2^2
subject to  A_1 x ≤ b_1

With MATLAB code using CVX:

  % use cvx to find point in first polyhedra
  cvx_begin quiet
      variable x(n)
      minimize (sum_square(x - (z - u)))
      subject to
          A1*x <= b1
  cvx_end

The z-update solves:

minimize    ‖z − (x^{k+1} + u^k)‖_2^2
subject to  A_2 z ≤ b_2

With MATLAB code using CVX:

  % use cvx to find point in second polyhedra
  cvx_begin quiet
      variable z(n)
      minimize (sum_square(z - (x + u)))
      subject to
          A2*z <= b2
  cvx_end

We compare the ADMM method with the alternating projections algorithm. Result:

  ADMM : Elapsed time is 4.43757 seconds.
  Alternating projections : Elapsed time is 5.321952 seconds.
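For comparison, alternating projections simply composes the two projections without any dual variable: x_{k+1} = Π_{C_1}(Π_{C_2}(x_k)). A sketch on an illustrative pair of sets with closed-form projections (C_1 = {x ≤ 1}, C_2 = {x ≥ 0}; these sets and data are assumptions, not the test problem behind the timings above):

```python
import numpy as np

# Illustrative sets (assumption): C1 = {x : x <= 1}, C2 = {x : x >= 0}
proj_C1 = lambda v: np.minimum(v, 1.0)
proj_C2 = lambda v: np.maximum(v, 0.0)

x = np.array([3.0, -2.0, 0.5, 10.0])
for k in range(100):
    x = proj_C1(proj_C2(x))     # x_{k+1} = Pi_{C1}(Pi_{C2}(x_k))

print(x)
```

When the intersection is nonempty, the iterates converge to a point in it; unlike ADMM, there is no dual variable and no residual-based stopping rule.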
