A Sparse Deep Transfer Learning Model and Its Application for Smart Agriculture

by Zhikui Chen, Xu Zhang, Shi Chen, Fangming Zhong

Published in Wireless Communications and Mobile Computing by Hindawi Limited.

2021, Volume 2021, pp. 1-11

Abstract

The introduction of deep transfer learning (DTL) further reduces the need for data and expert knowledge in many applications, helping DNN-based models reuse information effectively. However, DTL typically transfers all parameters of the source network, whether or not they are useful to the target task. These redundant trainable parameters limit the use of DTL on low-computing-power devices and in edge computing, while small, efficient networks with fewer parameters have difficulty transferring knowledge because of differences in structural design. To address the challenge of transferring a simplified model from a complex network, this paper proposes an algorithm for sparse DTL that transfers and retains only the most necessary structure, reducing the number of parameters in the final model. A sparse transfer hypothesis is introduced, under which a compression strategy is designed to construct deep sparse networks that distill useful information from the auxiliary domain, improving transfer efficiency. The proposed method is evaluated on representative datasets and applied to smart agriculture to train deep identification models that can effectively detect new pests from few data samples.
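
As an illustration only, the sketch below shows one way such a sparse parameter transfer could look in PyTorch, assuming a simple magnitude-based pruning criterion as the compression strategy and matching layer shapes between source and target networks; the paper's actual selection criterion, sparsity level, and architecture may differ.

    # Illustrative sketch (not the paper's exact method): prune low-magnitude
    # weights of a source (auxiliary-domain) network and transfer only the
    # surviving sparse structure into the target network before fine-tuning.
    import torch
    import torch.nn as nn

    def magnitude_mask(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
        """Binary mask keeping the largest-magnitude fraction of weights."""
        threshold = torch.quantile(weight.abs().flatten(), 1.0 - keep_ratio)
        return (weight.abs() >= threshold).float()

    def sparse_transfer(source: nn.Module, target: nn.Module, keep_ratio: float = 0.2) -> None:
        """Copy only high-magnitude source weights into the target network."""
        src_state, tgt_state = source.state_dict(), target.state_dict()
        for name, w in src_state.items():
            # Transfer only weight matrices with matching shapes; skip biases.
            if name in tgt_state and w.shape == tgt_state[name].shape and w.dim() > 1:
                tgt_state[name] = w * magnitude_mask(w, keep_ratio)
        target.load_state_dict(tgt_state)

    # Hypothetical example: both domains share a small fully connected backbone.
    source_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    target_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    sparse_transfer(source_net, target_net, keep_ratio=0.2)

Under these assumptions, only the retained connections carry source-domain knowledge, and the sparsely initialized target network would then be fine-tuned on the small labeled target-domain (pest) dataset.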

Archived Files and Locations

application/pdf  1.3 MB
downloads.hindawi.com (publisher)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2021-06-22
Language: en
Container Metadata
Open Access Publication
In DOAJ
In Keepers Registry
ISSN-L:  1530-8669