Journal of Data and Information Science ›› 2020, Vol. 5 ›› Issue (2): 13-32. DOI: 10.2478/jdis-2020-0010

• Research Papers •

Multi-Aspect Incremental Tensor Decomposition Based on Distributed In-Memory Big Data Systems

Hye-Kyung Yang1, Hwan-Seung Yong2

  1. Department of Computer Software, Korean Bible University, Seoul, 01757, Republic of Korea
  2. Department of Computer Science and Engineering, Ewha Womans University, Seoul, 03760, Republic of Korea
  • Received: 2019-10-30 Revised: 2020-02-22 Accepted: 2020-03-06 Online: 2020-05-20 Published: 2020-05-24
  • Contact: Hwan-Seung Yong


Purpose: We propose InParTen2, a multi-aspect incremental parallel factor analysis (PARAFAC) algorithm for three-dimensional tensor decomposition, built on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors.

Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the explicit calculation of Khatri-Rao products and minimizes shuffling by using the Apache Spark platform.
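The incremental idea described above can be sketched in plain NumPy, outside Spark, using the textbook Khatri-Rao formulation of one CP-ALS step (which InParTen2 itself avoids computing explicitly). The function names and shapes below are illustrative assumptions, not taken from the paper: when new slices arrive along the third mode, only the corresponding new rows of the mode-3 factor matrix are solved for, reusing the existing factor matrices rather than re-decomposing the whole tensor.

```python
import numpy as np

def khatri_rao(B, A):
    """Column-wise Khatri-Rao product; column r is kron(B[:, r], A[:, r])."""
    J, R = B.shape
    I, _ = A.shape
    return (B[:, None, :] * A[None, :, :]).reshape(J * I, R)

def append_slices(A, B, C_old, X_new):
    """Illustrative incremental update (hypothetical helper, not the paper's API):
    solve for the mode-3 factor rows of an incoming block of slices
    X_new (shape I x J x K_new), reusing the existing factors A, B, C_old."""
    I, J, K_new = X_new.shape
    # Mode-3 unfolding: row k, column i + j*I (first index varies fastest).
    X3 = X_new.reshape(I * J, K_new, order="F").T
    # Normal equations of one CP-ALS step for the new rows of C:
    # C_new = X3 (B kr A) [(A^T A) * (B^T B)]^+   (* = Hadamard product)
    gram = (A.T @ A) * (B.T @ B)
    C_new = X3 @ khatri_rao(B, A) @ np.linalg.pinv(gram)
    # Existing rows of C are kept; only the new rows are computed.
    return np.vstack([C_old, C_new])
```

For an exact rank-R tensor this recovers the true new factor rows in one step; on noisy data one would iterate, and a Spark implementation would distribute the unfolded tensor and reformulate the Khatri-Rao product to limit shuffling, as the paper describes.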

Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost.

Research limitations: Several Hadoop-based distributed tensor decomposition algorithms exist, as do MATLAB-based decomposition methods. However, the former require long iteration times, so their execution times are not directly comparable with those of Spark-based algorithms, whereas the latter run on a single machine, which limits the data sizes they can handle.

Practical implications: The proposed algorithm reduces re-decomposition cost when new tensors are added to an existing tensor: it decomposes only the incoming data based on existing decomposition results, rather than re-decomposing the entire tensor.

Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.

Key words: PARAFAC, Tensor decomposition, Incremental tensor decomposition, Apache Spark, Big data