
Title: Stochastic Polyak Stepsize with a Moving Target

Abstract: We propose a new stochastic gradient method called MOTAPS (Moving Targetted Polyak Stepsize) that uses recorded past loss values to compute adaptive stepsizes. MOTAPS can be seen as a variant of the Stochastic Polyak (SP) method, which also uses loss values to adjust the stepsize. The downside of the SP method is that it converges only when the interpolation condition holds. MOTAPS is an extension of SP that does not rely on the interpolation condition. The MOTAPS method uses $n$ auxiliary variables, one per data point, that track the loss value of each data point. We provide a global convergence theory for SP, an intermediate method TAPS, and MOTAPS by showing that each can be interpreted as a special variant of online SGD. We also perform several numerical experiments on convex learning problems and on deep learning models for image classification and language translation. Across all of these tasks we show that MOTAPS is competitive with the relevant baseline methods.
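To make the abstract's reference point concrete, here is a minimal sketch of the base Stochastic Polyak (SP) stepsize that MOTAPS extends, applied to a toy interpolating least-squares problem. The MOTAPS update itself, with its $n$ auxiliary target variables, is not specified in the abstract, so this illustrates only the SP rule $\gamma_t = (f_i(x_t) - f_i^*)/\|\nabla f_i(x_t)\|^2$, with $f_i^* = 0$ under interpolation; the problem setup and variable names are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Toy interpolating least-squares problem: b = A @ x_star exactly,
# so every per-sample loss f_i attains zero at x_star (interpolation holds).
rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

x = np.zeros(d)
for t in range(2000):
    i = rng.integers(n)                  # sample one data point
    r = A[i] @ x - b[i]
    f_i = 0.5 * r ** 2                   # per-sample loss f_i(x)
    g = r * A[i]                         # per-sample gradient
    gnorm2 = g @ g
    if gnorm2 > 1e-12:
        # SP stepsize: (f_i(x) - f_i^*) / ||grad f_i(x)||^2, with f_i^* = 0
        x -= (f_i / gnorm2) * g

print(np.linalg.norm(x - x_star))
```

Note that when interpolation fails (noisy `b`), the assumed target $f_i^* = 0$ is wrong and SP stalls near a noise floor; tracking per-sample targets with auxiliary variables, as the abstract describes for MOTAPS, is one way to remove that dependence.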
Comments: 49 pages, 13 figures, 1 table
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC)
MSC classes: 90C53, 74S60, 90C06, 62L20, 68W20, 15B52, 65Y20, 68W40
ACM classes: G.1.6
Cite as: arXiv:2106.11851 [cs.LG]
  (or arXiv:2106.11851v2 [cs.LG] for this version)

Submission history

From: Robert M. Gower
[v1] Tue, 22 Jun 2021 15:11:18 GMT (10441kb,D)
[v2] Thu, 23 Sep 2021 19:39:54 GMT (10451kb,D)
