Shrihari Vasudevan, Moninder Singh, et al.
ICDMW 2017
This paper demonstrates a novel approach to training deep neural networks using a Mutual Information (MI)-driven, decaying Learning Rate (LR), Stochastic Gradient Descent (SGD) algorithm. The MI between the output of the neural network and the true outcomes is used to adaptively set the LR for the network in every epoch of the training cycle. This idea is extended to layer-wise setting of the LR, as MI naturally provides a layer-wise performance metric. An LR range test that determines the operating LR range is also proposed. Experiments compared this approach with popular alternatives such as gradient-based adaptive LR algorithms like Adam, RMSprop, and LARS. Accuracy outcomes that were competitive with or better than those of the alternatives, obtained in competitive or better training time, demonstrate the feasibility of the metric and the approach.
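To make the idea concrete, here is a minimal sketch of an MI-driven LR schedule. It assumes a discrete MI estimate between the network's predicted labels and the true labels, normalized by the label entropy H(Y), with a linear map from normalized MI to an LR range; the function name, the normalization, and the linear mapping are illustrative assumptions, not the paper's exact update rule.

import numpy as np
from sklearn.metrics import mutual_info_score

def mi_learning_rate(y_true, y_pred, lr_max, lr_min):
    """Hypothetical MI-to-LR mapping (not the authors' implementation).

    Higher MI (the network output is more informative about the
    targets) yields a smaller LR; low MI keeps the LR near lr_max.
    """
    mi = mutual_info_score(y_true, y_pred)      # discrete MI estimate (nats)
    mi_max = mutual_info_score(y_true, y_true)  # upper bound: entropy H(Y)
    frac = mi / mi_max if mi_max > 0 else 0.0   # normalized progress in [0, 1]
    return lr_max - frac * (lr_max - lr_min)    # decay the LR as MI grows

# Per-epoch usage inside a training loop, where y_pred would be the
# argmax of the network outputs on a held-out or training batch:
rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=1000)
y_pred = rng.integers(0, 10, size=1000)  # stand-in for model predictions
print(mi_learning_rate(y_true, y_pred, lr_max=0.1, lr_min=1e-4))

Because MI can be computed per layer (e.g., between a quantized layer output and the true labels), the same mapping could be applied layer-wise, which is the extension the abstract describes.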
Melanie E. Roberts, Shrihari Vasudevan
TAAI 2015
Shrihari Vasudevan, Moninder Singh, et al.
Data Science and Engineering
Moninder Singh, Karthikeyan Natesan Ramamurthy, et al.
GlobalSIP 2017