Consensus Optimization

Consensus Matrix

consensus.matrix.gen_matrix(n, edges)

See Eq. 4 in [Na Li et al].

Given the edge set of a graph, return a feasible consensus matrix.

Parameters
  • n (int) – number of vertices

  • edges (List[List[int]]) – the list of edge pairs

Returns

consensus matrix

Return type

numpy.array
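The exact construction of Eq. 4 is given in [Na Li et al]. For illustration only, one common way to obtain a feasible consensus matrix from an edge set is the Metropolis-Hastings weighting; `metropolis_matrix` below is a hypothetical helper, not this library's `gen_matrix`:

```python
import numpy as np

def metropolis_matrix(n, edges):
    """One common feasible consensus matrix (Metropolis-Hastings weights):
    symmetric, with every row and column summing to one."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        # Each edge weight is 1 / (1 + max degree of its endpoints).
        W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    # The diagonal absorbs the remaining mass so each row sums to one.
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

# Path graph 0 - 1 - 2:
W = metropolis_matrix(3, [[0, 1], [1, 2]])
```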

consensus.matrix.w_feasible(W)

Check whether W is a feasible consensus matrix. A feasible matrix must satisfy the following constraints:

\[W = W^T, \quad \mathbb{1}^T W = \mathbb{1}^T\]
Parameters

W (numpy.array) – consensus matrix

Return type

bool
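A minimal sketch of what this check could look like in NumPy, assuming an elementwise tolerance; this is illustrative, not necessarily the library's implementation:

```python
import numpy as np

def w_feasible(W, tol=1e-8):
    """Check the two feasibility constraints: symmetry (W = W^T)
    and unit column sums (1^T W = 1^T)."""
    symmetric = np.allclose(W, W.T, atol=tol)
    stochastic = np.allclose(W.sum(axis=0), 1.0, atol=tol)
    return symmetric and stochastic
```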

consensus.matrix.ring_matrix(n)

Return a ring-topology consensus matrix.

Parameters

n (int) – number of nodes

Returns

consensus matrix

Return type

numpy.array
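For illustration, a ring matrix can be built by giving every node weight 1/3 on itself and on each of its two neighbours; this is a sketch for n ≥ 3, not necessarily the library's construction:

```python
import numpy as np

def ring_matrix(n):
    """Ring topology (n >= 3): each node averages equally with
    itself and its two neighbours on the ring."""
    W = np.eye(n) / 3.0
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W
```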

Consensus Algorithms

In [Shuo’s paper], two types of algorithms are described for solving the consensus optimization problem. Here we take least squares as an example:

\[\begin{split}\min \quad \sum_{i=1}^N{||A_i x + b_i||^2_2} \\ A_i \in \mathbb{R}^{m_i \times n} \quad b_i \in \mathbb{R}^{m_i}\end{split}\]

We let \(f_i(x_i) = ||A_i x_i + b_i||^2_2\), therefore we have,

\[\begin{split}\min \quad &\sum_{i=1}^N f_i(x_i) \\ & x_i - z = 0\end{split}\]

Solving by ADMM, we have

\[\begin{split}x^{k+1}_i &= (A_i^T A_i + \rho \cdot I)^{-1} (A_i^T b_i + \bar{x}^k - y^k_i) \\ y^{k+1}_i &= y_i^k + \rho \cdot (x^{k+1}_i - \bar{x}^{k+1})\end{split}\]
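The iteration can be sketched numerically. The snippet below uses the standard scaled form of consensus ADMM, where \(u_i = y_i / \rho\) plays the role of the dual variable and \(z\) collects the average; the constants and signs follow the textbook scaled-form derivation and may differ from the display above in scaling conventions. All data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, n, rho = 4, 10, 3, 1.0

# Per-agent data for min sum_i ||A_i x + b_i||^2 with consensus x_i = z.
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

x = np.zeros((N, n))
u = np.zeros((N, n))   # scaled dual variables u_i = y_i / rho
z = np.zeros(n)

for _ in range(300):
    for i in range(N):
        # x-update: each agent solves its local regularized problem.
        x[i] = np.linalg.solve(2 * A[i].T @ A[i] + rho * np.eye(n),
                               -2 * A[i].T @ b[i] + rho * (z - u[i]))
    z = x.mean(axis=0) + u.mean(axis=0)   # z-update: averaging step
    u += x - z                            # scaled dual update
```

After enough iterations every `x[i]` agrees with `z`, which solves the stacked least-squares problem.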

We provide a consensus module for solving consensus optimization on the MPI platform; you can use the following methods to obtain \(\bar{x}\).

Centralized Algorithm

In our consensus library, mpireduce behaves like an averaging function.

consensus.mpireduce(target, func=np.mean, filename=None)

Centralized consensus module based on MPI, similar to MPI’s reduce op. You can supply your own func as the reduce function, for example performing an SVD on the gathered matrix and then broadcasting the largest singular value to every agent. Set filename if you want to store the gathered matrix; it will be placed under the “./tmp” folder.

Parameters
  • target (numpy.array) – the consensus object

  • func (function) – reduce op

  • filename (str) – the filename

Returns

average result

Return type

numpy.array

Example code is at admm.py.
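MPI details aside, the reduce semantics (gather every agent’s array, apply func, hand the result back to every agent) can be illustrated locally; `local_reduce` and the agent list below are illustrative stand-ins, not the library’s MPI implementation:

```python
import numpy as np

def local_reduce(targets, func=np.mean):
    """Gather the per-agent arrays, apply the reduce function over
    the agent axis, and return the value every agent would receive."""
    gathered = np.stack(targets)   # shape: (n_agents, ...)
    return func(gathered, axis=0)

# Averaging three agents' vectors:
xs = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
avg = local_reduce(xs)             # -> array([3., 4.])
```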

Decentralized Algorithm

Given a consensus matrix \(W\), we build a GCon for each variable that needs to be averaged.

class consensus.GCon

Decentralized consensus module based on MPI

__init__(self, features, w)
Parameters
  • features (int) – dimension

  • w (numpy.array) – consensus matrix

__call__(self, x)
Parameters

x (numpy.array) – a vector of dimension features

Returns

average result

Return type

numpy.array

Example code is at admm_dc.py; we also provide a gradient-descent version, gd_dc.py.
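The averaging step that GCon performs can be illustrated locally: stacking the agents’ states row-wise, one decentralized consensus round is a multiplication by \(W\). The snippet below is a local stand-in, not the MPI implementation; with the complete-graph matrix every round returns the exact average:

```python
import numpy as np

n = 4
W = np.full((n, n), 1.0 / n)      # complete-graph consensus matrix
X = np.arange(n * 2, dtype=float).reshape(n, 2)   # one row per agent

# One consensus round: row i becomes sum_j W[i, j] * X[j].
X_next = W @ X                    # every agent now holds the average
```

For a sparser \(W\) (e.g. a ring), repeated rounds converge to the same average instead of reaching it in one step.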

Application

Parallel SVM

We follow [Boyd et al] and implement the corresponding algorithm in Python.

Given \(X \in \mathbb{R}^{m_i \times n}, y \in \{+1, -1\}^{m_i}, \lambda \in \mathbb{R}\), the SVM optimization problem is the following:

\[\min \quad \frac{1}{2 \lambda} || \omega ||^2_2 + \sum_{j=1}^{m_i} \biggl[ 1 - y_j(\omega^T x_j+b)\biggr]^+\]

We slightly modify the L2-regularizer term to \(||[\omega, b]^T||^2_2\). Let

\[f = \bigl[ \omega, b \bigr]^T, A_i \in \mathbb{R}^{m_i \times (n+1)}, (A_i)_j = \bigl[ -y_j \cdot x_j, -y_j \bigr]\]

Therefore, we would have the following problem:

\[\begin{split}\min \quad &\sum_{i=1}^N \mathbb{1}^T \bigl[ 1 + A_i \cdot f_i \bigr]^+ + \frac{1}{2 \lambda} || z ||^2_2 \\ &\text{s.t.} \quad f_i - z = 0\end{split}\]

Solving by the ADMM algorithm, we obtain the updates in [p. 66, §8.2.3].

Warning

The correct z-update is \(z^{k+1} := \frac{N \rho}{(1/ \lambda) + N \rho}(\bar{x}^{k+1}+\bar{u}^k)\).
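The z-update above can be written out as a small helper; `z_update` is a hypothetical function for illustration, with `N`, `rho`, `lam` and the averages `x_bar`, `u_bar` assumed to come from the surrounding ADMM loop:

```python
import numpy as np

def z_update(x_bar, u_bar, N, rho, lam):
    """z-update for the consensus SVM: shrink the average
    (x_bar + u_bar) by the factor N*rho / (1/lam + N*rho)."""
    return (N * rho) / (1.0 / lam + N * rho) * (x_bar + u_bar)
```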

Centralized

See example code svm.py.

Decentralized

See example code svm_dc.py; details are in svm_view.ipynb.