Abstract:
This BTP explores the integration of Online Federated Learning (OFL) with Distributed Online Linear Regression (DOLR) within the FedOMD algorithm, addressing the twin challenges of regret minimization and accurate model initialization. Grounded in key assumptions on the properties of the communication network, the work introduces DOLR, a distributed variant of Online Gradient Descent, and establishes regret guarantees for it under specific conditions. The thesis then details how DOLR connects to OFL within FedOMD, emphasizing novel strategies for model initialization, local client updates, and server-side aggregation. The impact of tuning learning rates and related parameters is examined, highlighting their pivotal role in achieving convergence and model fidelity. The assessment section outlines evaluation metrics, including regret, model fidelity, convergence behavior, and computational efficiency, which guide iterative refinement of the method. Proposed future work explores advanced model initialization techniques, dynamic learning-rate adaptation, robustness to client heterogeneity, scalability optimization, and real-world application scenarios. Overall, this research contributes to the advancement of federated learning methodologies, providing a comprehensive framework and paving the way for future enhancements and applications in diverse domains.
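To make the local-update/aggregation loop summarized above concrete, the following minimal Python sketch illustrates one plausible reading of a FedOMD-style round with DOLR clients: each client starts from the server model and performs online gradient descent (the Euclidean special case of online mirror descent) on a squared-loss linear regression stream, after which the server aggregates the local models. The names (fedomd_round, local_ogd_step), the simple-averaging aggregation, and all parameter choices are illustrative assumptions, not the thesis's exact algorithm.

import numpy as np

d = 5                      # feature dimension (illustrative)
w_true = np.ones(d)        # ground-truth linear model for the toy stream

def local_ogd_step(w, x, y, eta):
    # Online gradient descent on squared loss: gradient = (w.x - y) * x
    return w - eta * (w @ x - y) * x

def fedomd_round(w_server, client_batches, eta):
    # Hypothetical FedOMD-style round: each client initializes from the
    # server model, runs OGD over its local data stream, and the server
    # averages the resulting local models (an assumed aggregation rule).
    local_models = []
    for batch in client_batches:
        w = w_server.copy()
        for x, y in batch:
            w = local_ogd_step(w, x, y, eta)
        local_models.append(w)
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)

def client_batch(n=10):
    # Each client observes noisy linear measurements of the same w_true.
    xs = rng.normal(size=(n, d))
    ys = xs @ w_true + 0.01 * rng.normal(size=n)
    return list(zip(xs, ys))

w = np.zeros(d)            # server initialization
for _ in range(50):        # 50 communication rounds, 3 clients per round
    w = fedomd_round(w, [client_batch() for _ in range(3)], eta=0.05)
print("distance to w_true:", np.linalg.norm(w - w_true))

In this toy run the averaged model drifts toward w_true across rounds, which is the behavior the regret and convergence metrics discussed in the assessment are meant to quantify; learning rate eta plays exactly the tuning role the abstract highlights.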