Abstract:
Low-rank matrix factorization finds application in a large number of problems in signal processing and machine learning. Stochastic gradient descent (SGD) is a standard technique for solving large-scale matrix factorization problems. However, SGD applies only when the matrix is directly observed. In several applications the matrix is not directly observed but is available only via linear projections; the problem then is to estimate the low-rank matrix from its projections. To solve such problems, we derive an algorithm based on the majorization-minimization approach, which effectively decouples the projection operator from the matrix factorization problem via Landweber iterations. After decoupling, the factorization problem can be solved efficiently using SGD.

The original matrix factorization problem decomposes a matrix into two dense factors. In recent times, several applications have required the matrix to be factored into a dense and a sparse component instead. We recast such problems into the original low-rank matrix factorization framework via the iterative reweighted least squares (IRLS) technique.

The proposed algorithm is applied to two problems. The first is collaborative filtering, where the goal is to predict a user's choices on unrated items based on that user's and other users' choices on similar items. Traditionally this problem has been recast as a matrix factorization problem via latent factor modeling. However, we observe that significantly better recovery is achieved when, instead of factoring the matrix into two dense matrices, it is factored into a dense and a sparse matrix; in this thesis we explain this phenomenon intuitively. The second problem is dynamic MRI recovery, where a sequence of dynamic MRI frames must be recovered from the observed samples. This turns out to be a low-rank recovery problem owing to the spatial correlations among successive frames. Upon closer inspection, the spatial correlation can be modeled as a smoothly varying function, which in turn has compact support in the Fourier domain. Thus, this problem too can be recast as a factorization into sparse and dense components. Our approach does not improve the accuracy of recovery, but it achieves the same goal in a more efficient fashion.
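To make the decoupling idea above concrete, the following is a minimal sketch, not the thesis's exact algorithm: given measurements y = A(X) of a low-rank matrix X, each majorization-minimization step forms a Landweber surrogate B = X + (1/c) A^T(y - A(X)), after which B is factored as U V^T by plain SGD-style entrywise updates. The dimensions, step sizes, and iteration counts here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 matrix, observed only through a random linear map A.
m, n, r = 20, 15, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
p = 200                                   # number of linear measurements
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
y = A @ X_true.ravel()

# Factors to be learned, so that X ≈ U @ V.T
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))

c = np.linalg.norm(A, 2) ** 2             # majorizer constant ≥ ||A||²
for outer in range(50):                   # majorization-minimization loop
    X = U @ V.T
    # Landweber step: decouples the projection A from the factorization.
    B = X + (A.T @ (y - A @ X.ravel())).reshape(m, n) / c
    # Inner SGD pass: fit U @ V.T to the surrogate B, one entry at a time.
    for _ in range(5):
        for i, j in zip(rng.integers(0, m, 300), rng.integers(0, n, 300)):
            e = B[i, j] - U[i] @ V[j]
            U[i], V[j] = U[i] + 0.1 * e * V[j], V[j] + 0.1 * e * U[i]

err = np.linalg.norm(U @ V.T - X_true) / np.linalg.norm(X_true)
```

Because the surrogate B is a fully observed matrix, the inner loop is an ordinary SGD matrix factorization; the projection operator appears only in the cheap Landweber update.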