Consider the constrained convex optimization problem

    minimize    f(x)
    subject to  x ∈ C

with variable x ∈ R^n, where f and the set C are convex. This problem can be rewritten as

    minimize    f(x) + g(z)
    subject to  x − z = 0

where g is the indicator function of the set C.
The augmented Lagrangian (using the scaled dual variable u, and up to a term independent of x and z) is

    L_ρ(x, z, u) = f(x) + g(z) + (ρ/2)‖x − z + u‖_2^2.

The scaled-form ADMM updates are
    x^{k+1} := argmin_x ( f(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
    z^{k+1} := Π_C(x^{k+1} + u^k)
    u^{k+1} := u^k + x^{k+1} − z^{k+1}

with the primal and dual residuals

    r^k = x^k − z^k,    s^k = −ρ(z^k − z^{k−1}).

5.1 Quadratic Programming
Consider the QP

    minimize    (1/2)x^T P x + q^T x
    subject to  Ax = b,  x ≥ 0.

We form the functions f and g as

    f(x) = (1/2)x^T P x + q^T x,   dom f = {x | Ax = b},
    g(z) = I_C(z),                 C = {z | z ≥ 0}.

Then the ADMM form of the problem is
    minimize    f(x) + g(z)
    subject to  x − z = 0.

The ADMM updates are

    x^{k+1} := argmin_{x ∈ dom f} ( f(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
    z^{k+1} := Π_C(x^{k+1} + u^k) = (x^{k+1} + u^k)_+
    u^{k+1} := u^k + x^{k+1} − z^{k+1}

where (·)_+ denotes the nonnegative part. The x-update is an equality-constrained quadratic minimization, so the first-order optimality (KKT) conditions reduce it to a linear system in x^{k+1} and an additional dual variable λ:

    [ P + ρI   A^T ] [ x^{k+1} ]   [ q − ρ(z^k − u^k) ]
    [ A        0   ] [ λ       ] + [ −b               ] = 0.

5.2 Test LP
Here we test the following LP:

    minimize    c^T x
    subject to  Ax = b,  x ≥ 0

with x ∈ R^500 and A ∈ R^{400×500}, i.e. 500 variables and 400 equality constraints.
The ADMM result is shown below:

And the comparison:

method | optimal value | CPU time (s)
------ | ------------- | ------------
CVX    | 371.09        | 1.73
ADMM   | 370.96        | 2.01
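The LP experiment can be sketched end to end in NumPy (a minimal sketch, not the authors' code: the data here are randomly generated and feasible by construction, with sizes reduced from the 500-variable instance). Since P = 0 for an LP, the x-update solves the KKT system with coefficient matrix [ρI, Aᵀ; A, 0]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 40                      # variables / equality constraints (500/400 in the text)
A = rng.standard_normal((m, n))
b = A @ rng.random(n)              # b = A*x0 with x0 >= 0, so the LP is feasible
c = rng.random(n) + 0.1            # c > 0 keeps c'x bounded below on x >= 0
rho = 1.0

# KKT matrix for the x-update (P = 0 for an LP); invert once, reuse every iteration
K = np.block([[rho * np.eye(n), A.T],
              [A, np.zeros((m, m))]])
Kinv = np.linalg.inv(K)            # a real implementation would cache an LU factorization

z = u = np.zeros(n)
for k in range(2000):
    rhs = np.concatenate([rho * (z - u) - c, b])
    x = (Kinv @ rhs)[:n]           # x-update: equality-constrained minimization
    z = np.maximum(x + u, 0.0)     # z-update: projection onto the nonnegative orthant
    u = u + x - z                  # scaled dual update
    if np.linalg.norm(x - z) < 1e-8:   # primal residual r^k = x^k - z^k
        break

print(np.linalg.norm(A @ x - b))   # equality constraints hold at every x-update
print(x.min())                     # approximately nonnegative at convergence
```

Note that Ax = b is enforced exactly by every x-update, so only the nonnegativity constraint is satisfied asymptotically, as x and z converge to each other.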
5.3 Test QP

Here we test the QP

    minimize    (1/2)x^T P x + q^T x
    subject to  Ax = b,  x ≥ 0

with x ∈ R^500 and A ∈ R^{400×500}, i.e. 500 variables and 400 equality constraints.
The MATLAB code for the x-update is:
% x-update: solve the KKT system; the coefficient matrix is constant
% across iterations, so factor it once and reuse the LU factors
if k == 1
    tmp_A = [ P + rho*eye(n), A'; A, zeros(m) ];
    [L, U] = lu(tmp_A);
end
tmp_b = [ rho*(z - u) - q; b ];
tmp = U \ (L \ tmp_b);
x = tmp(1:n);
The ADMM result is shown below:

And the comparison:

method | optimal value | CPU time (s)
------ | ------------- | ------------
CVX    | 351.98        | 21.5182
ADMM   | 348.82        | 0.166
5.4 Polyhedra Intersection
Here we consider the problem of finding a point in the intersection of two polyhedra, i.e. solving the following feasibility problem:

    find        x
    subject to  A_1 x ≤ b_1,  A_2 x ≤ b_2.

This is equivalent to solving the optimization problem

    minimize    I_{C1}(x) + I_{C2}(x)

where C_1 = {x | A_1 x ≤ b_1} and C_2 = {x | A_2 x ≤ b_2}. We can then put it in ADMM form:

    minimize    I_{C1}(x) + I_{C2}(z)
    subject to  x − z = 0.

The updates are
    x^{k+1} := argmin_x ( I_{C1}(x) + (ρ/2)‖x − z^k + u^k‖_2^2 )
    z^{k+1} := argmin_z ( I_{C2}(z) + (ρ/2)‖x^{k+1} − z + u^k‖_2^2 )
    u^{k+1} := u^k + x^{k+1} − z^{k+1}

Since the proximal operator of an indicator function is the projection onto the corresponding set, the updates become

    x^{k+1} := Π_{C1}(z^k − u^k)
    z^{k+1} := Π_{C2}(x^{k+1} + u^k)
    u^{k+1} := u^k + x^{k+1} − z^{k+1}

The x-update amounts to solving the convex optimization problem

    minimize    ‖x − (z^k − u^k)‖_2^2
    subject to  A_1 x ≤ b_1

with the following MATLAB code using CVX:
% use cvx to find point in first polyhedra
cvx_begin quiet
variable x(n)
minimize (sum_square(x - (z - u)))
subject to
A1*x <= b1
cvx_end
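The CVX call above solves the projection onto a general polyhedron as a small QP. For a single halfspace {x | aᵀx ≤ β} the projection is closed-form, Π(v) = v − (max(0, aᵀv − β)/‖a‖²)·a, which is worth keeping in mind as the cheap special case. A minimal sketch (the data here are made up for illustration):

```python
import numpy as np

def project_halfspace(v, a, beta):
    """Euclidean projection of v onto the halfspace {x : a'x <= beta}."""
    excess = a @ v - beta
    if excess <= 0:
        return v.copy()                # already inside: projection is v itself
    return v - (excess / (a @ a)) * a  # move along a to the boundary

a = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])               # a'v = 11 > 5, so v is outside
p = project_halfspace(v, a, 5.0)
print(a @ p)                           # p lies on the boundary a'x = 5
```

For the intersection of many halfspaces the projections generally do not compose, which is why the general case requires the QP solve.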
The z-update solves

    minimize    ‖z − (x^{k+1} + u^k)‖_2^2
    subject to  A_2 z ≤ b_2

with the following MATLAB code using CVX:
% use cvx to find point in second polyhedra
cvx_begin quiet
variable z(n)
minimize (sum_square(z - (x + u)))
subject to
A2*z <= b2
cvx_end
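The projection-based iteration above can be sketched end to end when the two polyhedra are boxes, for which both projections are closed-form clips (a toy stand-in for the two CVX subproblems; the box bounds here are invented for illustration):

```python
import numpy as np

# two overlapping boxes: polyhedra whose projection is a coordinate-wise clip
lo1, hi1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
lo2, hi2 = np.array([1.0, 1.0]), np.array([3.0, 3.0])

def proj1(v):
    return np.clip(v, lo1, hi1)        # projection onto C1

def proj2(v):
    return np.clip(v, lo2, hi2)        # projection onto C2

z = u = np.zeros(2)
for k in range(200):
    x = proj1(z - u)                   # x-update: project onto C1
    z = proj2(x + u)                   # z-update: project onto C2
    u = u + x - z                      # scaled dual update
    if np.linalg.norm(x - z) < 1e-10:  # primal residual: x and z agree
        break

print(x)  # a point in the intersection [1,2] x [1,2]
```

At convergence x = z lies in both sets, so it solves the feasibility problem.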
We compare the ADMM method with the alternating projections algorithm. Results:
ADMM : Elapsed time is 4.43757 seconds.
Alternating projections : Elapsed time is 5.321952 seconds.
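For reference, the alternating projections baseline simply alternates the two projections, x^{k+1} = Π_{C1}(Π_{C2}(x^k)), with no dual variable. A toy sketch on the same kind of box data (hypothetical, not the instance timed above):

```python
import numpy as np

lo1, hi1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
lo2, hi2 = np.array([1.0, 1.0]), np.array([3.0, 3.0])

x = np.array([5.0, -3.0])              # arbitrary starting point
for k in range(500):
    y = np.clip(x, lo2, hi2)           # project onto C2
    x_new = np.clip(y, lo1, hi1)       # project onto C1
    if np.linalg.norm(x_new - x) < 1e-12:
        x = x_new
        break                          # fixed point: x is in both sets
    x = x_new
```

Both methods converge to a point in the intersection when it is nonempty; the ADMM variant tends to need fewer iterations on hard instances, consistent with the timings above.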
