No Time to Observe: Adaptive Influence Maximization with Partial Feedback

by Jing Yuan, Shaojie Tang

Released as an article.

2019  

Abstract

Although the influence maximization problem has been extensively studied over the past ten years, the majority of existing work adopts one of two models: the full-feedback model or the zero-feedback model. In the zero-feedback model, we must commit to all seed users at once in advance; this strategy is also known as a non-adaptive policy. In the full-feedback model, we select one seed at a time and wait until the diffusion completes before selecting the next seed. The full-feedback model achieves better performance but incurs potentially huge delay, while the zero-feedback model has zero delay but poorer performance, since it does not exploit the observations that can be made during the seeding process. To fill the gap between these two models, we propose the partial-feedback model, which allows us to select a seed at any intermediate stage. We develop two novel greedy policies that, for the first time, achieve bounded approximation ratios under both uniform and non-uniform cost settings.
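To make the contrast between the feedback models concrete, below is a minimal sketch of adaptive greedy seeding under the independent cascade model. It is not the paper's actual policy; the graph, the function names (`simulate_spread`, `greedy_adaptive_seeds`), and the Monte-Carlo gain estimate are all illustrative assumptions. Setting the observation step to the full realized spread corresponds to full feedback; observing nothing between picks corresponds to zero feedback; the paper's partial-feedback model sits in between, observing only part of the diffusion before the next pick.

```python
import random

def simulate_spread(graph, seeds, p=0.1, rng=None):
    """Independent-cascade simulation (illustrative): each newly activated
    node u tries once to activate each neighbor v with probability p.
    Returns the set of all activated nodes, including the seeds."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_adaptive_seeds(graph, k, p=0.1, sims=100, seed=0):
    """Adaptive greedy sketch: pick the node with the largest estimated
    marginal spread, then observe the realized diffusion before the next
    pick (full-feedback flavor). A partial-feedback variant would update
    `observed` with only a bounded number of diffusion rounds instead."""
    rng = random.Random(seed)
    chosen, observed = [], set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for u in graph:
            if u in chosen:
                continue
            # Monte-Carlo estimate of expected spread given observations.
            gain = sum(
                len(simulate_spread(graph, observed | {u}, p, rng))
                for _ in range(sims)
            ) / sims
            if gain > best_gain:
                best, best_gain = u, gain
        chosen.append(best)
        # Observe the realized diffusion from the seeds picked so far.
        observed |= simulate_spread(graph, observed | {best}, p, rng)
    return chosen
```

For example, on a small directed graph `{0: [1, 2], 1: [2], 2: [3], 3: []}`, `greedy_adaptive_seeds(graph, 2, p=0.5)` returns two distinct seed nodes chosen greedily by estimated marginal spread.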

Archived Files and Locations

application/pdf  532.2 kB
file_ydy2pc6qbvbsnd5qhd7qotnhdi
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-04-17
Version   v5
Language   en
arXiv  1609.00427v5
Catalog Record
Revision: d0eb69b2-362c-4137-9e82-22dd3eae7f19