Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning

Abstract

Understanding the convergence performance of asynchronous stochastic gradient descent methods (Async-SGD) has received increasing attention in recent years due to their foundational role in machine learning. To date, however, most existing works are restricted to either bounded gradient delays or convex settings. In this paper, we focus on Async-SGD and its variant Async-SGDI (which uses increasing batch sizes) for non-convex optimization problems with unbounded gradient delays. We prove an o(1/√k) convergence rate for Async-SGD and an o(1/k) rate for Async-SGDI. We also propose a unifying sufficient assumption for Async-SGD's convergence, which includes the two major gradient delay models in the literature as special cases.
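
As a point of reference for the update rule being analyzed, the following is a minimal, self-contained Python sketch of an Async-SGD-style iteration on a toy non-convex objective. The objective, the geometric delay distribution, and the step-size schedule are illustrative assumptions, not taken from the paper; Async-SGDI would additionally grow the mini-batch size over iterations.

```python
# Illustrative sketch only: Async-SGD where the k-th update applies a stochastic
# gradient evaluated at a stale iterate x_{k - tau_k}. All modeling choices below
# (objective, delay law, step sizes) are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def grad(x, noise_scale=0.1):
    # Stochastic gradient of the toy non-convex objective
    # f(x) = sum_i ( x_i^2 + 3 sin(x_i)^2 ),  with grad_i = 2 x_i + 3 sin(2 x_i).
    return 2 * x + 3 * np.sin(2 * x) + noise_scale * rng.standard_normal(x.shape)

def async_sgd(x0, num_iters=5000, base_lr=0.5, delay_p=0.2):
    """Async-SGD: each update uses a gradient computed at a delayed iterate."""
    history = [np.array(x0, dtype=float)]
    for k in range(1, num_iters + 1):
        # Random delay with unbounded support (geometric), capped only by the
        # number of iterates generated so far.
        tau = min(rng.geometric(delay_p) - 1, k - 1)
        stale_x = history[k - 1 - tau]      # iterate the worker read before computing
        lr = base_lr / np.sqrt(k)           # diminishing step size
        history.append(history[-1] - lr * grad(stale_x))
    return history[-1]

x_final = async_sgd(x0=2.0 * np.ones(5))
print("final iterate:", x_final)
print("gradient norm at final iterate:", np.linalg.norm(grad(x_final, noise_scale=0.0)))
```

In this toy simulation, the gradient norm at the final iterate serves as a rough proxy for the stationarity measure tracked in non-convex convergence analyses.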

Publication
In 59th IEEE Conference on Decision and Control (2020)
Xin Zhang