Consistency in Models for Distributed Learning under Communication Constraints

by Joel B. Predd, Sanjeev R. Kulkarni, H. Vincent Poor

Released as an article.

2005  

Abstract

Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process amongst members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center, and thus the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we ask whether agent decision rules and fusion rules exist that result in a universally consistent ensemble. The answers to this question raise new issues to consider with regard to universal consistency. Insofar as these models present a useful picture of distributed scenarios, this paper addresses whether the guarantees provided by Stone's Theorem in centralized environments hold in distributed settings.
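To make the setting concrete, the following is a minimal sketch (not taken from the paper) of one communication-constrained model in this family: each of n agents holds a single sample from an i.i.d. labeled sequence, responds to a query point with at most one bit, and a fusion center combines the received bits by majority vote. Universal consistency asks whether such an ensemble's probability of error converges to the Bayes risk for every sampling distribution as n grows. The names agent_decision and fusion_center, and the shrinking bandwidth h, are illustrative assumptions, not the paper's construction.

    import numpy as np

    def agent_decision(x_query, x_i, y_i, bandwidth):
        # Agent i holds one labeled sample (x_i, y_i). It sends its
        # one-bit label only if the query falls inside its local
        # window; otherwise it abstains (sends nothing).
        if np.linalg.norm(x_query - x_i) <= bandwidth:
            return int(y_i)  # one bit: the agent's label in {0, 1}
        return None          # abstain

    def fusion_center(bits):
        # Majority vote over the bits actually received.
        votes = [b for b in bits if b is not None]
        if not votes:
            return 0  # arbitrary default when every agent abstains
        return int(sum(votes) * 2 > len(votes))

    # Toy run: n agents, each observing one i.i.d. sample.
    rng = np.random.default_rng(0)
    n = 1000
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    Y = (X[:, 0] + X[:, 1] > 0).astype(int)  # noiseless toy target
    x_query = np.array([0.3, 0.2])
    h = n ** -0.25                           # bandwidth shrinking with n
    bits = [agent_decision(x_query, X[i], Y[i], h) for i in range(n)]
    print(fusion_center(bits))               # expected: 1 near (0.3, 0.2)

Whether one-bit rules of this general type can be made universally consistent, and under which communication models, is precisely the question the paper examines.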

Archived Files and Locations

application/pdf  214.7 kB
file_oyjrztmfqrd2tdx6f7ryijl4pu
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2005-03-26
Version   v1
Language   en
arXiv  cs/0503071v1
Catalog Record
Revision: 92d8b31b-4aa1-40e6-96d5-53e06c2af24c