Reducing swamp behavior for the canonical polyadic decomposition problem by rank-1 freezing
Abstract
A novel optimization framework is proposed for solving the low-rank tensor approximation problem using the canonical polyadic decomposition (CPD). For certain tensors this is a difficult optimization problem, e.g., because of degeneracy, i.e., the tensor can be approximated arbitrarily closely by an ill-conditioned tensor of lower rank. This is one of the phenomena encountered in regions of slow convergence, informally known as swamps. Numerical experiments with state-of-the-art optimization algorithms indicate that in a swamp often only a few rank-1 terms are modified while the others stagnate. Often, the non-stagnant terms are the problematic ones and form an ill-conditioned decomposition. To address this, we propose to temporarily freeze the stagnant terms. The reduced number of terms has several benefits: it simplifies the problem by lowering the number of variables and significantly reduces the cost per iteration. Furthermore, in many cases the residual tensor can be compressed and an algebraic (re)initialization can be carried out, even if this was not possible for the original tensor. A refinement step can further improve the accuracy if desired. We provide theoretical insight into why terms can stagnate. More specifically, we prove that, under certain assumptions, terms that are close to the solution are no longer modified in subsequent optimization steps. Extensive numerical experiments show that our framework greatly facilitates escaping from swamps. The resulting algorithm outperforms current state-of-the-art approaches on difficult-to-decompose tensors, in both accuracy and computation time, and performs similarly on easier problems.
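To make the freezing idea concrete, the following is a minimal, illustrative sketch (not the algorithm proposed in the paper) of a plain ALS-based CPD of a third-order tensor in which rank-1 terms whose factors stall are temporarily frozen, and the remaining active terms are fitted to the residual tensor obtained by subtracting the frozen terms. The names cpd_with_freezing and freeze_tol are placeholders, stagnation is detected here only via the change in one factor matrix for simplicity, and the sketch omits the paper's residual compression, algebraic (re)initialization, and refinement steps.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Khatri-Rao product: (J*K) x R.
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def reconstruct(A, B, C):
    # Sum of rank-1 terms a_r o b_r o c_r.
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def als_step(T, A, B, C):
    # One round of alternating least-squares updates for all three factors.
    T1 = T.reshape(T.shape[0], -1)                               # mode-1 unfolding
    A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    T2 = np.transpose(T, (1, 0, 2)).reshape(T.shape[1], -1)      # mode-2 unfolding
    B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    T3 = np.transpose(T, (2, 0, 1)).reshape(T.shape[2], -1)      # mode-3 unfolding
    C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cpd_with_freezing(T, R, iters=200, freeze_tol=1e-4, seed=None):
    # Toy CPD with temporary freezing of stagnant rank-1 terms (illustrative only).
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((T.shape[0], R))
    B = rng.standard_normal((T.shape[1], R))
    C = rng.standard_normal((T.shape[2], R))
    active = np.ones(R, dtype=bool)                              # all terms active at start
    for _ in range(iters):
        frozen = ~active
        # Residual tensor after subtracting the frozen (stagnant) terms.
        Tres = T - reconstruct(A[:, frozen], B[:, frozen], C[:, frozen])
        A_old = A[:, active].copy()
        A[:, active], B[:, active], C[:, active] = als_step(
            Tres, A[:, active], B[:, active], C[:, active])
        # Freeze active terms whose columns of A barely changed (stagnation proxy).
        change = np.linalg.norm(A[:, active] - A_old, axis=0) / (
            np.linalg.norm(A_old, axis=0) + 1e-12)
        idx = np.where(active)[0]
        active[idx[change < freeze_tol]] = False
        if not active.any():                                     # everything frozen:
            active[:] = True                                     # unfreeze and refine jointly
    return A, B, C
```

As a quick check, one can build a synthetic low-rank tensor with reconstruct() from random factors, decompose it with cpd_with_freezing, and compare the relative fit of the reconstruction against a plain ALS run without the freezing step.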