Sparse Correspondence Analysis for Contingency Tables
Abstract
Since the introduction of the lasso in regression, various sparse methods have been developed in the unsupervised setting, such as sparse principal component analysis (s-PCA) and sparse singular value decomposition (s-SVD). One advantage of s-PCA is that it simplifies the interpretation of the (pseudo) principal components, since each one is expressed as a linear combination of a small number of variables. The disadvantages lie, on the one hand, in the difficulty of choosing the number of non-zero coefficients in the absence of a well-established criterion and, on the other hand, in the loss of orthogonality of the components and/or the loadings. We propose s-CA, a sparse variant of correspondence analysis (CA) for large contingency tables such as the document-term matrices used in text mining, together with pPMD, a projected deflation technique already used in s-PCA. Since CA is a doubly weighted PCA (for rows and columns), or equivalently a weighted SVD, we apply s-SVD in order to sparsify both the row and column weights. The user may tune the level of sparsity for rows and columns and optimize it according to some criterion, and may even decide that no sparsity is needed for the rows (or columns) by relaxing the corresponding sparsity constraint.
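To illustrate the approach described above, the following is a minimal Python sketch of a PMD-style sparse SVD applied to the standardized residual matrix whose SVD gives ordinary CA. The function names (`sparse_ca`, `sparse_singular_pair`), the soft-thresholding penalties `lam_row`/`lam_col`, and the deflation step shown are illustrative assumptions, not the authors' pPMD implementation or tuning procedure.

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding (lasso-type shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_singular_pair(S, lam_row, lam_col, n_iter=200, seed=0):
    """One sparse singular triplet of S via alternating soft-thresholded power iterations."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(S.shape[0])
    u /= np.linalg.norm(u)
    v = np.zeros(S.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(S.T @ u, lam_col)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
        u = soft_threshold(S @ v, lam_row)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
    d = float(u @ S @ v)
    return u, d, v

def sparse_ca(N, n_comp=2, lam_row=0.05, lam_col=0.05):
    """Sketch of sparse CA: sparse SVD of the CA standardized residuals of a contingency table N."""
    P = N / N.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
    U, D, V = [], [], []
    for _ in range(n_comp):
        u, d, v = sparse_singular_pair(S, lam_row, lam_col)
        U.append(u); D.append(d); V.append(v)
        # projected deflation: project out the directions already found
        S = (np.eye(len(u)) - np.outer(u, u)) @ S @ (np.eye(len(v)) - np.outer(v, v))
    return np.array(U).T, np.array(D), np.array(V).T
```

In this sketch, setting `lam_row = 0` (or `lam_col = 0`) relaxes the corresponding sparsity constraint, so rows (or columns) keep dense loadings, mirroring the option mentioned in the abstract; for example, `U, D, V = sparse_ca(np.asarray(table, float), n_comp=2)` on a document-term count matrix returns sparse pseudo singular vectors for documents and terms.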
Domains
Statistics [stat]
Origin: Files produced by the author(s)