IIIT-Delhi Institutional Repository

Design & optimizing distributed learning gradients using control theory


dc.contributor.author Mehrotra, Sparsh
dc.contributor.author Roy, Sayan Basu (Advisor)
dc.date.accessioned 2024-05-20T09:32:33Z
dc.date.available 2024-05-20T09:32:33Z
dc.date.issued 2023-12
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1536
dc.description.abstract We rely on a variety of optimizers in everyday machine learning and deep learning applications. The core task of any machine learning algorithm is to solve min_x f(x), where f is the objective function and x is the parameter vector. Standard algorithms such as gradient descent suffice for simple convex objectives, whereas practice today favours more sophisticated state-of-the-art optimizers such as Adam, AdamSSD, and DADAM. Recent advances also address finding minima in possibly non-convex settings, and current state-of-the-art optimizers target the minimization problem in online and distributed settings as well. The aim of this work is to design an optimizer for distributed and online settings using control-theoretic analysis, since existing control-theory treatments of optimizers have not yet explored the distributed and online cases. en_US
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject AdamSSD en_US
dc.subject DADAM en_US
dc.subject G-AdaGrad en_US
dc.title Design & optimizing distributed learning gradients using control theory en_US
dc.type Other en_US
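
As context for the abstract above, which frames learning as solving min_x f(x) with gradient descent and its distributed variants, the following is a minimal sketch of that setup viewed as a discrete-time dynamical system (the control-theoretic reading of an optimizer). The quadratic objective, step size, agent count, and mixing matrix W are illustrative assumptions and are not taken from the thesis itself.

import numpy as np

# Illustrative quadratic objective f(x) = 0.5 * ||A x - b||^2 (an assumed
# example, not the objective studied in the thesis).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad_f(x, target=b):
    return A.T @ (A @ x - target)

# Control-theoretic view: gradient descent is the discrete-time system
#   x[k+1] = x[k] - eta * grad_f(x[k]),
# i.e. a feedback loop whose state is the iterate x and whose input is
# the (negative) gradient.
eta = 0.05
x = np.zeros(2)
for _ in range(300):
    x = x - eta * grad_f(x)
print("centralized iterate:", x)

# Distributed sketch (decentralized gradient descent): each agent i keeps
# a local copy x_i, averages with its neighbours via a doubly stochastic
# mixing matrix W (assumed here), then takes a step on its local gradient.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
local_targets = np.array([[1.0, -1.0],
                          [0.5,  0.0],
                          [1.5, -2.0]])   # one local objective per agent
X = np.zeros((3, 2))                      # row i = agent i's iterate
for _ in range(300):
    grads = np.array([grad_f(X[i], local_targets[i]) for i in range(3)])
    X = W @ X - eta * grads               # consensus step + local gradient step
print("agents' iterates:", X)

With a fixed step size the agents only reach a neighbourhood of the common minimizer; adaptive distributed schemes such as DADAM and G-AdaGrad, listed in the subject terms above, refine this basic consensus-plus-gradient structure.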

