Conjugate Gradient Iterative Hard Thresholding

The development of this software and the performance analysis of NIHT, HTP, and CSMPSP led to a new algorithm that exploits the computational advantages of these methods. The CGIHT family of algorithms is introduced in the following paper, along with theoretical uniform recovery guarantees and extensive empirical testing.

CGIHT: Conjugate Gradient Iterative Hard Thresholding for compressed sensing and matrix completion
J.D. Blanchard, J. Tanner, K. Wei
Information and Inference: A Journal of the IMA, in press, 2015.
Download
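For intuition, the core CGIHT idea is to combine hard thresholding with a conjugate-gradient search direction that is restarted when the detected support changes. The Python/NumPy sketch below is a simplified illustration of that idea; it is not the implementation released with this software, and it omits the support-restricted projections and restart conditions analyzed in the paper.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def cgiht_sketch(y, A, k, iters=200, tol=1e-6):
    """Simplified CGIHT-style iteration: hard thresholding with a
    conjugate-gradient search direction, restarted when the support changes."""
    m, n = A.shape
    x = np.zeros(n)
    r = y - A @ x                             # residual
    g = A.T @ r                               # (negative) gradient of 0.5*||y - Ax||^2
    p = g.copy()                              # search direction
    support = frozenset()
    for _ in range(iters):
        Ap = A @ p
        alpha = (g @ p) / (Ap @ Ap + 1e-30)   # exact line search along p
        x = hard_threshold(x + alpha * p, k)
        r = y - A @ x
        g_new = A.T @ r
        new_support = frozenset(np.flatnonzero(x))
        if new_support != support:
            p = g_new                         # restart: steepest-descent direction
            support = new_support
        else:
            beta = (g_new @ g_new) / (g @ g + 1e-30)
            p = g_new + beta * p              # Fletcher-Reeves conjugate direction
        g = g_new
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x
```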
The performance of the algorithms under the model y = Ax + e, where e is a misfit or noise vector, is analyzed in the following paper.

Conjugate Gradient Iterative Hard Thresholding: observed noise stability in compressed sensing
J.D. Blanchard, J. Tanner, K. Wei
IEEE Transactions on Signal Processing, 63(2): 528-537, 2015.
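To make the noise model concrete, the snippet below (illustrative only, with arbitrarily chosen parameters) builds measurements y = Ax + e from a Gaussian matrix and a k-sparse vector, then computes the oracle least-squares solution on the true support; noise stability means an algorithm's recovery error stays within a modest factor of this noise-limited baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 1000, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian sensing matrix
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)
e = 0.01 * rng.standard_normal(m)                # misfit / noise vector
y = A @ x + e                                    # noisy measurements y = Ax + e

# Oracle benchmark: least squares restricted to the true support.
sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_oracle = np.zeros(n)
x_oracle[support] = sol

print("noise level        :", np.linalg.norm(e))
print("oracle recovery err:", np.linalg.norm(x_oracle - x))
```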

Expander L0 Decoding

R. Mendoza-Smith, J. Tanner
Submitted 2015.
Abstract:
We introduce two new algorithms, Serial-L0 and Parallel-L0 for solving a large underdetermined linear system of equations y=Ax when it is known that the solution is k-sparse and that A is the adjacency matrix of an unbalanced left d-regular expander graph with m rows and n columns. The matrices in this class are sparse and allow a highly efficient implementation. A number of algorithms have been designed to work exclusively under this setting, composing the branch of combinatorial compressed-sensing (CCS).
Serial-L0 and Parallel-L0 iteratively minimize the residual and guarantee convergence in O(dn log k) operations by assuming that the signal is dissociated, meaning that all of the subset sums of the support of the signal are pairwise different. However, we observe empirically that the signal need not be exactly dissociated in practice. Moreover, we observe that Serial-L0 and Parallel-L0 are able to solve large-scale problems with a larger fraction of nonzeros than other algorithms when the number of measurements is substantially less than the signal length; in particular, they are able to reliably solve for an n-dimensional k-sparse vector from m expander measurements with n/m = 1000 and k/m up to four times greater than what is achievable by L1-regularization from dense Gaussian measurements. Additionally, due to their low computational complexity, Serial-L0 and Parallel-L0 are observed to solve large problem sizes in substantially less time than other algorithms for compressed sensing. In particular, Parallel-L0 is structured to take advantage of massively parallel architectures.
Download
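As a rough illustration of the combinatorial decoding idea behind these algorithms, the sketch below assumes A is a dense 0/1 NumPy array that is the adjacency matrix of a left d-regular graph, and it peels off a coordinate whenever a majority of its d residual neighbors agree on a common nonzero value. The scoring rules, update order, and convergence guarantees of Serial-L0 and Parallel-L0 in the paper differ; this is only a sketch of the general mechanism, with all names chosen for illustration.

```python
import numpy as np
from collections import Counter

def expander_l0_sketch(y, A, d, max_sweeps=20, tol=1e-10):
    """Illustrative serial peeling decoder for y = Ax, with A the 0/1 adjacency
    matrix of a left d-regular expander (each column has exactly d ones)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.astype(float).copy()                       # residual r = y - A x
    neighbors = [np.flatnonzero(A[:, j]) for j in range(n)]
    for _ in range(max_sweeps):
        updated = False
        for j in range(n):
            vals = np.round(r[neighbors[j]], 12)
            val, cnt = Counter(vals).most_common(1)[0]
            # Peel column j if a majority of its d neighbors share a nonzero value.
            if val != 0 and cnt > d // 2:
                x[j] += val
                r[neighbors[j]] -= val
                updated = True
        if not updated or np.linalg.norm(r) <= tol:
            break
    return x
```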