CrescendoNet: A Simple Deep Convolutional Neural Network with Ensemble Behavior

by Xiang Zhang, Nishant Vishwamitra, Hongxin Hu, Feng Luo

Released as an article.

2018  

Abstract

We introduce a new deep convolutional neural network, CrescendoNet, built by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths of increasing depth. The numbers of convolution layers and parameters grow only linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on the benchmark datasets CIFAR10, CIFAR100, and SVHN. Given a sufficient amount of data, as in the SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high-performance deep convolutional neural networks without residual connections. Moreover, by investigating the behavior and performance of subnetworks in CrescendoNet, we note that its high performance may come from implicit ensemble behavior, which differs from FractalNet, another deep convolutional neural network without residual connections. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.
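To make the block structure concrete, the following is a minimal sketch (not the authors' reference code) of a Crescendo block in Python/PyTorch, assuming that path i stacks i conv-BN-ReLU layers and that the independent paths' outputs are combined by averaging; the class and parameter names are illustrative only.

```python
# Hypothetical sketch of a Crescendo block: independent convolution paths of
# increasing depth whose outputs are averaged. Details such as the combination
# rule and layer ordering are assumptions, not the paper's exact specification.
import torch
import torch.nn as nn


class CrescendoBlock(nn.Module):
    def __init__(self, in_channels, out_channels, num_paths=4):
        super().__init__()
        self.paths = nn.ModuleList()
        for depth in range(1, num_paths + 1):
            layers = []
            channels = in_channels
            for _ in range(depth):
                # Each path adds one more conv layer than the previous path,
                # so depth and parameters grow linearly with the path index.
                layers += [
                    nn.Conv2d(channels, out_channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_channels),
                    nn.ReLU(inplace=True),
                ]
                channels = out_channels
            self.paths.append(nn.Sequential(*layers))

    def forward(self, x):
        # Every path sees the same input and shares no weights with the others,
        # so the block behaves like an implicit ensemble of subnetworks.
        outs = [path(x) for path in self.paths]
        return torch.stack(outs, dim=0).mean(dim=0)


if __name__ == "__main__":
    block = CrescendoBlock(in_channels=3, out_channels=32, num_paths=4)
    y = block(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 32, 32, 32])
```

Because the paths share no weights, a path-wise training procedure of the kind the abstract mentions could update one path at a time while the others are kept fixed, which is consistent with the claimed reduction in training memory; the exact schedule is described in the paper, not in this sketch.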

Archived Files and Locations

application/pdf  971.6 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-01-04
Version   v2
Language   en
arXiv  1710.11176v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: ec287428-4751-4ff6-aee9-1cd97ac0d23a