Sifting Common Information from Many Variables
Abstract
Measuring the relationship between any pair of variables is a rich and active area of research that is central to scientific practice. In contrast, characterizing the common information among any group of variables is typically a theoretical exercise with few practical methods for high-dimensional data. A promising solution would be a multivariate generalization of the famous Wyner common information, but this approach relies on solving an apparently intractable optimization problem. We leverage the recently introduced information sieve decomposition to formulate an incremental version of the common information problem that admits a simple fixed-point solution, fast convergence, and complexity that is linear in the number of variables. This scalable approach allows us to demonstrate the usefulness of common information in high-dimensional learning problems. The sieve outperforms standard methods on dimensionality reduction tasks, solves a blind source separation problem that cannot be solved with ICA, and accurately recovers structure in brain imaging data.
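For context, the "apparently intractable optimization problem" referenced above is the multivariate generalization of Wyner's common information. A sketch of the two definitions in standard notation (the symbols here are the conventional ones, not taken from this paper):

```latex
% Wyner common information for a pair of variables (Wyner, 1975):
% the most compact auxiliary variable W that renders X_1 and X_2
% conditionally independent.
C(X_1; X_2) \;=\; \min_{W \,:\, X_1 \perp X_2 \,\mid\, W} I(X_1, X_2;\, W)

% Multivariate generalization: W must render all n observed
% variables conditionally independent simultaneously.
C(X_1, \dots, X_n) \;=\;
  \min_{W \,:\, p(x_1, \dots, x_n \mid w) \,=\, \prod_{i=1}^{n} p(x_i \mid w)}
  I(X_1, \dots, X_n;\, W)
```

The minimization is over all auxiliary variables $W$ satisfying the conditional independence constraint, which is what makes the multivariate problem hard to optimize directly; the paper's incremental sieve formulation is a tractable relaxation of this objective.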
 Publication:

arXiv e-prints
 Pub Date:
 June 2016
 arXiv:
 arXiv:1606.02307
 Bibcode:
 2016arXiv160602307V
 Keywords:

 Statistics - Machine Learning;
 Computer Science - Information Theory
 E-Print:
 In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI-17). 8 pages, 7 figures. v4: Typos