LALR: Theoretical and Experimental validation of Lipschitz Adaptive Learning Rate in Regression and Neural Networks

by Snehanshu Saha, Tejas Prashanth, Suraj Aralihalli, Sumedh Basarkod, T.S.B Sudarshan, Soma S Dhavala

Released as an article.

2020  

Abstract

We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error (MAE) and Quantile loss functions and evaluate its effectiveness on regression tasks. The framework is based on the theory of Lipschitz continuity, specifically the relationship between the learning rate and the Lipschitz constant of the loss function. In our experiments, the adaptive learning rate policy enabled up to 20x faster convergence than a constant learning rate policy.
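The core idea can be illustrated with a small sketch. The Python example below (illustrative only, not the paper's implementation) trains a linear MAE regressor with mini-batch gradient descent, bounds a Lipschitz constant of the loss for each batch via the triangle inequality, and sets the learning rate to its inverse; the exact constants for deep networks and the quantile loss are derived in the paper.

```python
# Illustrative sketch: the per-batch bound K and the rule eta = 1/K
# follow the standard relationship between step size and Lipschitz
# constant; they are assumptions here, not the paper's exact formulas.
import numpy as np

def mae_grad(w, X, y):
    """Gradient of the MAE loss (1/m) * sum_i |x_i . w - y_i| w.r.t. w."""
    return X.T @ np.sign(X @ w - y) / len(y)

def lipschitz_bound(X):
    """Bound the gradient norm: ||(1/m) X^T sign(.)|| <= (1/m) sum_i ||x_i||
    by the triangle inequality; this is a Lipschitz constant of the loss in w."""
    return np.linalg.norm(X, axis=1).sum() / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))  # synthetic regression data
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=512)

w = np.zeros(8)
for epoch in range(50):
    for idx in np.array_split(rng.permutation(len(X)), 8):
        eta = 1.0 / lipschitz_bound(X[idx])  # adaptive rate, recomputed per batch
        w -= eta * mae_grad(w, X[idx], y[idx])
```

Because the bound is recomputed from each mini-batch's data, the step size adapts over the course of training rather than staying fixed.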

Archived Files and Locations

application/pdf  534.6 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-05-19
Version   v1
Language   en
arXiv  2006.13307v1
Catalog Record
Revision: 536e0ace-ace5-42e9-8027-965a65af3b28