An Efficient and Scalable Approach to Build Co-occurrence Matrix for DNN's Embedding Layer
Abstract
Embedding is a crucial step for deep neural networks. Datasets from different applications, with different structures, can all be processed through an embedding layer and transformed into a dense matrix. The transformation must minimize both the loss of information and the redundancy of the data, and extracting appropriate data features ensures its efficiency. The co-occurrence matrix is an excellent way of representing the links between the elements of a dataset. However, as datasets grow, building and using the co-occurrence matrix becomes a problem in terms of both computation power and memory footprint.
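To make the representation concrete, the following is a minimal illustrative sketch (not the paper's implementation) of a co-occurrence matrix built from a small boolean dataset, assuming rows are samples and columns are items; two items co-occur when they are both set in the same row, so the matrix is simply X^T X.

```python
# Illustrative sketch: co-occurrence matrix from a boolean dataset.
# Rows are samples, columns are items; C[i, j] counts the rows in which
# items i and j appear together. Dataset values here are made up.
import numpy as np
from scipy import sparse

X = sparse.csr_matrix(np.array([
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
], dtype=np.int64))

C = (X.T @ X).toarray()   # joint appearance counts for every item pair
np.fill_diagonal(C, 0)    # optionally drop self co-occurrence counts
print(C)
```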
In this paper, we propose a parallel and distributed approach that constructs the co-occurrence matrix efficiently and in a scalable way. Our solution takes advantage of several features of boolean datasets to minimize the construction time of the co-occurrence matrix. Our experimental results show that our solution outperforms traditional approaches by up to 34x. We also demonstrate the efficacy of our approach with a cost model.
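As a point of reference only, the sketch below shows one generic way such a construction can be parallelized, namely row-wise partitioning with a sum of partial co-occurrence matrices; it is not the authors' algorithm, and all names and parameters are illustrative.

```python
# Generic parallel construction sketch: co-occurrence is additive over rows,
# so each worker can build a partial matrix for its chunk of the dataset
# and the partial results can be summed at the end.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_cooccurrence(chunk: np.ndarray) -> np.ndarray:
    # Each worker only needs its own rows of the boolean dataset.
    return chunk.T @ chunk

def build_cooccurrence(X: np.ndarray, n_workers: int = 4) -> np.ndarray:
    chunks = np.array_split(X, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_cooccurrence, chunks))
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((10_000, 64)) < 0.05).astype(np.int64)  # synthetic sparse boolean data
    C = build_cooccurrence(X)
    print(C.shape)
```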
Keywords
Distributed computing
Embedding
Sparse matrix
Co-occurrence matrix
Large-scale data analytics
Computing methodologies → Massively parallel algorithms
Natural language processing
Information extraction
Control methods
Deep Learning