InferSpark: Statistical Inference at Scale

by Zhuoyue Zhao, Jialing Pei, Eric Lo, Kenny Q. Zhu, Chris Liu

Released as an article.

2017  

Abstract

The Apache Spark stack has enabled fast large-scale data processing. Despite having a rich library of statistical models and inference algorithms, however, it does not give domain users the ability to develop their own models. The emergence of probabilistic programming languages has shown the promise of developing sophisticated probabilistic models in a succinct and programmatic way. These frameworks have the potential to automatically generate inference algorithms for user-defined models and answer various statistical queries about the model. It is a perfect time to unite these two great directions to produce a programmable big data analysis framework. We thus propose InferSpark, a probabilistic programming framework on top of Apache Spark. Efficient statistical inference can be easily implemented on this framework, and the inference process can leverage the distributed main-memory processing power of Spark. This framework makes statistical inference on big data possible and speeds up the penetration of probabilistic programming into the data engineering domain.
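The abstract describes delegating inference for user-defined models to Spark's distributed in-memory engine. As a rough illustration of the kind of data-parallel computation involved, the sketch below uses plain Spark Scala (not InferSpark's actual API; the object name and toy data are made up) to compute the maximum-likelihood mean and variance of a Gaussian from an RDD in one distributed pass over sufficient statistics.

// Minimal sketch, assuming local-mode Spark; not InferSpark's API.
import org.apache.spark.sql.SparkSession

object GaussianMLE {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("gaussian-mle-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Toy observations; in practice these would be loaded from HDFS or another source.
    val data = sc.parallelize(Seq(1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0))

    // One distributed pass computes the sufficient statistics (count, sum, sum of squares).
    val (n, sum, sumSq) = data
      .map(x => (1L, x, x * x))
      .reduce { case ((n1, s1, q1), (n2, s2, q2)) => (n1 + n2, s1 + s2, q1 + q2) }

    val mean = sum / n
    val variance = sumSq / n - mean * mean

    println(s"MLE mean = $mean, MLE variance = $variance")
    spark.stop()
  }
}

A probabilistic programming layer such as InferSpark would generate this kind of distributed computation automatically from a declarative model definition rather than requiring the user to write it by hand.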

Archived Files and Locations

application/pdf  428.3 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2017-07-07
Version   v1
Language   en
arXiv  1707.02047v1