Dictionary learning is a classic representation learning method that has been widely applied in signal processing and data analytics. In this paper, we investigate a family of ℓp-norm (p>2, p∈ℕ) maximization approaches to the complete dictionary learning problem from both theoretical and algorithmic aspects. Specifically, we prove that the global maximizers of these formulations are very close to the true dictionary with high probability, even in the presence of Gaussian noise. Based on the generalized power method (GPM), we then develop an efficient algorithm for the ℓp-based formulations. We further establish the efficacy of the developed algorithm: the population GPM over the sphere constraint first quickly enters the neighborhood of a global maximizer and then converges linearly within that region. Extensive experiments demonstrate that the ℓp-based approaches enjoy higher computational efficiency and better robustness than conventional approaches, with p=3 performing the best.
Authors: Yifei Shen, Ye Xue, Jun Zhang, Khaled B. Letaief, Vincent Lau (Hong Kong University of Science and Technology, The Hong Kong Polytechnic University)
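The GPM iteration described in the abstract can be sketched as follows. This is an illustrative implementation, not the paper's code: the function name `gpm_lp_sphere` is ours, and we assume the standard form of GPM for maximizing the convex objective f(q) = ‖Yᵀq‖ₚᵖ over the unit sphere, where each step moves to the spherical point best aligned with the Euclidean gradient (i.e., the normalized gradient).

```python
import numpy as np

def gpm_lp_sphere(Y, p=3, iters=200, seed=0):
    """Generalized power method sketch for max_{||q||_2 = 1} ||Y^T q||_p^p.

    Each iteration sets q <- grad f(q) / ||grad f(q)||_2, where
    grad f(q) = p * Y @ (|Y^T q|^(p-1) * sign(Y^T q)); the constant p
    is dropped since it vanishes under normalization.
    """
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    # Random initialization on the unit sphere.
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for _ in range(iters):
        z = Y.T @ q
        # Euclidean gradient of ||Y^T q||_p^p (up to the constant factor p).
        g = Y @ (np.abs(z) ** (p - 1) * np.sign(z))
        nrm = np.linalg.norm(g)
        if nrm == 0.0:  # degenerate stationary point; stop
            break
        q = g / nrm  # project back onto the sphere
    return q
```

Because f is convex and each step maximizes the linearization of f over the sphere, the objective value is non-decreasing along the iterates, which matches the ascent behavior the paper analyzes for the population GPM.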