MLCTR: A Fast Scalable Coupled Tensor Completion Based on Multi-Layer Non-Linear Matrix Factorization
by
Ajim Uddin, Dan Zhou, Xinyuan Tao, Chia-Ching Chou, Dantong Yu
2021
Abstract
Firm earnings prediction plays a vital role in investment decisions,
dividend expectations, and share prices. It often involves multiple
tensor-compatible datasets with non-linear multi-way relationships,
spatiotemporal structures, and different levels of sparsity. Current non-linear
tensor completion algorithms tend to learn noisy embeddings and incur
overfitting. This paper focuses on the embedding-learning aspect of the tensor
completion problem and proposes a new multi-layer neural network architecture
for tensor factorization and completion (MLCTR). The network architecture
offers multiple advantages: a series of low-rank matrix factorization (MF)
building blocks to minimize overfitting, interleaved transfer functions in each
layer for non-linearity, and by-pass connections to mitigate the vanishing
gradient problem and allow deeper networks. Furthermore,
the model employs Stochastic Gradient Descent (SGD)-based optimization for fast
convergence in training. Our algorithm is highly efficient for imputing missing
values in earnings-per-share (EPS) data. Experiments confirm that our strategy
of incorporating non-linearity in the factor matrices delivers strong
performance in both embedding learning and end-to-end tensor models, and
outperforms approaches that apply non-linearity only when reconstructing
tensors from factor matrices.
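
To make the described architecture concrete, below is a minimal PyTorch sketch of the ideas named in the abstract: each tensor mode's embedding is refined by a stack of low-rank MF building blocks with non-linear transfer functions and by-pass (skip) connections, and observed entries of a 3-way tensor are fit with plain SGD via a CP-style inner product. All class names, layer sizes, the ReLU choice, and the single-tensor (uncoupled) setup are illustrative assumptions, not the authors' released implementation; the coupled setting would share factor networks across tensors.

    import torch
    import torch.nn as nn

    class LowRankBlock(nn.Module):
        """One layer: a low-rank weight W ~= B(A(x)), a non-linear transfer
        function, and a by-pass (residual) connection."""
        def __init__(self, dim, rank):
            super().__init__()
            self.A = nn.Linear(dim, rank, bias=False)   # dim -> rank
            self.B = nn.Linear(rank, dim, bias=False)   # rank -> dim
            self.act = nn.ReLU()

        def forward(self, x):
            return x + self.act(self.B(self.A(x)))      # by-pass connection

    class FactorNet(nn.Module):
        """Embedding for one tensor mode, refined by stacked low-rank blocks."""
        def __init__(self, n_entities, dim, rank, depth):
            super().__init__()
            self.emb = nn.Embedding(n_entities, dim)
            self.blocks = nn.Sequential(*[LowRankBlock(dim, rank)
                                          for _ in range(depth)])

        def forward(self, idx):
            return self.blocks(self.emb(idx))

    class MLCTRSketch(nn.Module):
        """CP-style completion of a 3-way tensor from non-linear factors."""
        def __init__(self, sizes, dim=32, rank=8, depth=3):
            super().__init__()
            self.modes = nn.ModuleList(FactorNet(n, dim, rank, depth)
                                       for n in sizes)

        def forward(self, i, j, k):
            u, v, w = self.modes[0](i), self.modes[1](j), self.modes[2](k)
            return (u * v * w).sum(-1)   # inner product over factor dimension

    # Usage: fit observed (i, j, k, value) entries with SGD (toy data below).
    model = MLCTRSketch(sizes=(500, 40, 60))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    i = torch.randint(0, 500, (256,)); j = torch.randint(0, 40, (256,))
    k = torch.randint(0, 60, (256,)); y = torch.randn(256)
    for _ in range(10):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(i, j, k), y)
        loss.backward()
        opt.step()

Note the design point the abstract emphasizes: the non-linearity lives inside the factor-matrix network (LowRankBlock), while the final reconstruction stays a plain multilinear inner product, rather than the reverse.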
Archived Files and Locations
application/pdf 5.1 MB
arxiv.org (repository) · web.archive.org (webarchive)
arXiv:2109.01773v1