

【Academic Event】Distributed Adaptive Gradient Methods for Online Optimization


2023-10-03


A special lecture will be held on Wednesday, October 11, 2023.

Everyone is welcome to attend; details are below.

Title: Distributed Adaptive Gradient Methods for Online Optimization

 

Speaker: Professor George Michailidis (Department of Statistics and Data Science, University of California, Los Angeles (UCLA))

 

Time: 2023/10/11 (Wednesday) 14:20–16:00

 

Venue: Conference Room 101, 國青大樓, National Taiwan University

 

Abstract:

Adaptive gradient-based optimization methods (Adam, Adagrad, RMSProp) are widely used for solving large-scale machine learning problems, including the training of deep neural networks. A number of schemes aimed at parallelizing them have been proposed in the literature, based on communication of peripheral nodes with a central node or amongst themselves. In this presentation, we briefly review centralized adaptive gradient-based algorithms and then introduce and discuss their distributed variants. We present their convergence properties in both stochastic and deterministic settings. The algorithms are illustrated on applications, including the training of deep neural networks.
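As a rough illustration of the class of methods the abstract refers to, the following is a minimal Python sketch combining an Adagrad-style adaptive update with periodic parameter averaging through a central node. It is not the speaker's algorithm; the function names (adagrad_step, average_parameters) and the toy quadratic objectives are illustrative assumptions.

import numpy as np

def adagrad_step(param, grad, accum, lr=0.1, eps=1e-8):
    # Adagrad-style update: scale the step by the root of the
    # accumulated squared gradients (illustrative sketch).
    accum += grad ** 2
    param -= lr * grad / (np.sqrt(accum) + eps)
    return param, accum

def average_parameters(local_params):
    # Centralized communication round: a central node averages the
    # workers' parameters and sends the result back (sketch).
    avg = np.mean(local_params, axis=0)
    return [avg.copy() for _ in local_params]

# Toy setting: 3 workers, each with a local quadratic objective ||x - t_i||^2.
rng = np.random.default_rng(0)
targets = [rng.normal(size=5) for _ in range(3)]
params = [np.zeros(5) for _ in range(3)]
accums = [np.zeros(5) for _ in range(3)]

for step in range(200):
    for i in range(3):
        grad = 2.0 * (params[i] - targets[i])   # local gradient
        params[i], accums[i] = adagrad_step(params[i], grad, accums[i])
    if step % 10 == 0:                          # periodic communication
        params = average_parameters(params)

# The averaged iterate drifts toward the minimizer of the summed
# objectives, i.e. the mean of the targets.
print(np.round(np.mean(params, axis=0), 3))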
