Electromagnetic Source Imaging via a Data-Synthesis-Based Denoising Autoencoder
by
Gexin Huang, Zhu Liang Yu, Wei Wu, Ke Liu, Zheng Hui Gu, Feifei Qi, YuanQing Li, Jiawen Liang
2020
Abstract
Electromagnetic source imaging (ESI) is a highly ill-posed inverse problem.
To find a unique solution, traditional ESI methods impose a variety of priors
that may not reflect the actual source properties. Such limitations of
traditional ESI methods hinder their wider application. Inspired by deep
learning approaches, a novel data-synthesized spatio-temporal denoising
autoencoder (DST-DAE) method is proposed to solve the ESI inverse
problem. Unlike the traditional methods, we utilize a neural network to
directly learn a generalized mapping from the measured E/MEG signals to the
cortical sources. A novel data synthesis strategy is employed: prior
information about the sources is injected into large-scale samples generated
with the forward model of ESI. All the generated data are used to drive the
neural network to automatically learn the inverse mapping. To achieve better estimation
performance, a denoising autoencoder (DAE) architecture with spatio-temporal
feature extraction blocks is designed. Compared with the traditional methods,
we show that (1) the novel deep learning approach provides an effective and
easy-to-apply way to solve the ESI problem, (2) the data synthesis strategy
allows DST-DAE to capture the characteristics of real sources better than
mathematically formulated prior assumptions do, and (3) the specifically
designed DAE architecture not only provides better estimates of the source
signals but is also robust to noise pollution. Extensive numerical experiments
show that the proposed method is
superior to the traditional knowledge-driven ESI methods.
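
As a rough illustration of the data synthesis strategy described in the abstract, the sketch below generates (E/MEG, source) training pairs through the linear forward model y = L x + noise. The array shapes, the sparse spatial support, and the Gaussian-windowed source waveforms are illustrative assumptions rather than the paper's exact protocol; a real lead field would come from a head model, not a random matrix.

import numpy as np

def synthesize_sample(leadfield, n_times=100, n_active=2, snr_db=10, rng=None):
    """Generate one (noisy E/MEG, source) training pair via y = L @ x + noise."""
    if rng is None:
        rng = np.random.default_rng()
    n_sensors, n_sources = leadfield.shape

    # Spatial prior: only a few cortical sources are active (sparse support).
    x = np.zeros((n_sources, n_times))
    active = rng.choice(n_sources, size=n_active, replace=False)
    t = np.arange(n_times)
    for i in active:
        center = rng.uniform(0.2 * n_times, 0.8 * n_times)
        width = rng.uniform(5.0, 15.0)
        # Temporal prior: a smooth, Gaussian-windowed oscillation.
        x[i] = (np.exp(-0.5 * ((t - center) / width) ** 2)
                * np.sin(2 * np.pi * rng.uniform(0.02, 0.1) * t))

    # Forward model: project cortical sources to the sensor array.
    y_clean = leadfield @ x

    # Add sensor noise at the requested signal-to-noise ratio.
    noise_power = np.mean(y_clean ** 2) / (10 ** (snr_db / 10))
    y_noisy = y_clean + rng.normal(scale=np.sqrt(noise_power), size=y_clean.shape)
    return y_noisy, x

# Example with a random stand-in lead field (64 sensors, 2000 source locations).
L = np.random.default_rng(0).standard_normal((64, 2000))
y, x = synthesize_sample(L)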
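
The DAE architecture itself is only named in the abstract, so the following PyTorch sketch is a guess at its spirit: an encoder with separate temporal and spatial convolution blocks that compresses noisy sensor data, and a decoder that expands the latent code to the denser cortical source space. Layer sizes, kernel widths, and the block layout are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class SpatioTemporalDAE(nn.Module):
    """Denoising autoencoder with temporal and spatial convolution blocks."""

    def __init__(self, n_sensors=64, n_sources=2000, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            # Temporal block: filter along time, independently per sensor.
            nn.Conv2d(1, hidden, kernel_size=(1, 7), padding=(0, 3)),
            nn.ReLU(),
            # Spatial block: mix across all sensors at each time step.
            nn.Conv2d(hidden, hidden, kernel_size=(n_sensors, 1)),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            # Expand the latent code to the (denser) cortical source space.
            nn.ConvTranspose2d(hidden, hidden, kernel_size=(n_sources, 1)),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=(1, 7), padding=(0, 3)),
        )

    def forward(self, y):  # y: (batch, 1, n_sensors, n_times)
        return self.decoder(self.encoder(y))  # (batch, 1, n_sources, n_times)

# Training pairs come from the synthesis step above: noisy sensor data in,
# clean source activity out, e.g. with a mean-squared-error loss.
model = SpatioTemporalDAE()
y = torch.randn(2, 1, 64, 100)   # a batch of synthesized E/MEG epochs
x_hat = model(y)                 # estimated source time courses
loss = nn.functional.mse_loss(x_hat, torch.zeros(2, 1, 2000, 100))

Training such a network on synthesized pairs is what replaces the explicit mathematical priors of traditional ESI solvers: the prior assumptions live in the data generator rather than in a regularized cost function.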
Archived Files and Locations
application/pdf 7.6 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv:2010.12876v1