Now published in OJ-CSYS: "Novel Bounds for Incremental Hessian Estimation With Application to Zeroth-Order Federated Learning," by Alessio Maritan, Luca Schenato and Subhrakanti Dey. Link: https://lnkd.in/gN27SuPH The Hessian matrix conveys important information about the curvature, spectrum and partial derivatives of a function, and is required in a variety of tasks. However, computing the exact Hessian is prohibitively expensive in high-dimensional input spaces, and impossible in zeroth-order optimization, where the objective function is a black box of which only input-output pairs are known. In this work we address this problem by providing a rigorous analysis of a Hessian estimator available in the literature, allowing it to be used as a provably accurate replacement for the true Hessian matrix. The estimator is randomized and incremental, and its computation requires only point function evaluations. We provide non-asymptotic convergence bounds on the estimation error and derive the minimum number of function queries needed to achieve a desired accuracy with arbitrarily high probability. In the second part of the paper we show a practical application of our results, introducing a novel optimization algorithm suitable for non-convex and black-box federated learning. #optimization #dataprivacy #federatedlearning
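The post does not spell out the estimator itself. As a hedged illustration only, here is one common randomized zeroth-order Hessian estimator from the literature (the Gaussian finite-difference form, accumulated as a running average; not necessarily the exact incremental scheme analyzed in the paper):

```python
import numpy as np

def zo_hessian(f, x, num_queries=20000, mu=1e-2, seed=0):
    """Incremental randomized Hessian estimate from function values only.

    Averages samples q_k * (u u^T - I), where u ~ N(0, I) and
    q_k = (f(x + mu*u) + f(x - mu*u) - 2 f(x)) / (2 mu^2);
    the expectation of these samples approaches the true Hessian as mu -> 0.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    H = np.zeros((d, d))
    fx = f(x)
    for k in range(1, num_queries + 1):
        u = rng.standard_normal(d)
        q = (f(x + mu * u) + f(x - mu * u) - 2.0 * fx) / (2.0 * mu * mu)
        H += (q * (np.outer(u, u) - np.eye(d)) - H) / k  # running average
    return 0.5 * (H + H.T)  # symmetrize

# Sanity check on f(x) = x^T A x, whose exact Hessian is 2A.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
H_hat = zo_hessian(lambda v: v @ A @ v, np.ones(3))
```

Note that each sample costs only two extra function queries, which is what makes the number-of-queries bounds in the paper the quantity of interest.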
IEEE Open Journal of Control Systems’ Post
Adaptive Federated Learning with Auto-Tuned Clients Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis Abstract Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While FL is a flexible framework, in which the distribution of local data, participation rate, and computing power of each client can vary greatly, this flexibility gives rise to many new challenges, especially in hyperparameter tuning on the client side. We propose Δ-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. We provide theoretical and empirical results where the benefit of the client adaptivity is shown in various FL scenarios. 👉 https://lnkd.in/dgKB_hiR #machinelearning
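The exact Δ-SGD rule is in the paper; as a rough sketch of the underlying idea (estimating local smoothness from successive gradients and capping the step size accordingly, in the style of Malitsky–Mishchenko adaptive gradient descent, which is my own stand-in here):

```python
import numpy as np

def adgd(grad, x0, steps=400, eta0=1e-6):
    """Gradient descent where each step re-tunes its own step size from a
    local smoothness estimate L_k ~ ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||."""
    x_prev, g_prev = x0, grad(x0)
    eta_prev, theta = eta0, np.inf
    x = x_prev - eta_prev * g_prev   # small bootstrap step
    for _ in range(steps):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dx == 0.0:                # exact stationarity reached
            break
        # Grow the step slowly, but never exceed 1 / (2 * local smoothness).
        eta = min(np.sqrt(1 + theta) * eta_prev,
                  dx / (2 * dg) if dg > 0 else np.inf)
        theta = eta / eta_prev
        x_prev, g_prev, eta_prev = x, g, eta
        x = x - eta * g
    return x

# Ill-conditioned quadratic f(x) = 0.5 * (x1^2 + 10 * x2^2): no manual tuning.
grad = lambda x: np.array([1.0, 10.0]) * x
x_min = adgd(grad, np.array([5.0, -3.0]))
```

The appeal in the FL setting is exactly what the abstract points at: each client can run this locally on its own loss, so no global step size needs to fit every client's smoothness.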
A computer researcher and trainer of the next generation worldwide. Open to collaboration on publishing papers.
How can we enhance Federated Learning? One option is a Personalized Federated Learning Method with Two Classifiers, which is described in our paper at: https://lnkd.in/gYFMg3yb
IIT Roorkee | Data Science | AI/ML | Generative AI | NLP | Conversational AI | Core member @ACM IITR
[2/8]🔎 Here's my latest PDF, "RAG Simplified: How It Works and Its Implementation"! In this comprehensive theoretical guide, I break down the intricacies of Retrieval-Augmented Generation (RAG), a cutting-edge approach that combines the power of information retrieval with generative models, and cover the key aspects of vector databases and retrieval. #GenerativeAI #MachineLearning #AIResearch #DeepLearning #ArtificialIntelligence #AIML #RAG #RAGArchitecture #TechInnovation #DataScience
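The core RAG loop (embed the query, retrieve the most similar documents from a vector store, and prepend them as context for the generator) can be sketched in a few lines. This is a toy illustration with a bag-of-words embedding and names of my own choosing; production systems use learned dense embeddings and a real vector database:

```python
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words embedding, L2-normalized for cosine similarity."""
    v = np.array([text.lower().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, docs, vocab, k=2):
    """Rank documents by cosine similarity to the query; keep the top k."""
    q = embed(query, vocab)
    return sorted(docs, key=lambda d: -float(embed(d, vocab) @ q))[:k]

def build_prompt(query, docs, vocab):
    """Augment the query with retrieved context before calling a generator."""
    context = "\n".join(retrieve(query, docs, vocab))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

vocab = ["hessian", "gradient", "retrieval", "vector", "database"]
docs = [
    "Vector database indexes store embeddings for fast retrieval.",
    "The Hessian matrix collects second-order partial derivatives.",
    "Gradient descent follows the negative gradient.",
]
prompt = build_prompt("How does a vector database support retrieval?", docs, vocab)
```

The generator then answers from `prompt` rather than from its parametric memory alone, which is the point of the architecture.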
Here are the key takeaways from our recent Research Paper Reading Session on the topic "Privacy preserving techniques in machine learning" discussed by Labani Halder https://lnkd.in/d3bRczEH
🔍📊 Excited to share the key takeaways from the Research Paper Reading Session on 'Privacy preserving techniques in machine learning' by Labani Halder, who introduced a method of data obfuscation based on the Hilbert Curve 🌀. Here's what we learned:
1) Centralized vs. distributed learning: while centralized learning co-locates data and models, distributed learning shares model parameters across devices.
2) Types of attacks: membership inference, reconstruction, property inference, and model extraction.
3) Hilbert Curve encodings: Hilbert Curve (HC) encodings obfuscate data by obscuring proximity information in 2-dimensional spaces, so obfuscated points cannot be compared with ordinary distance measures.
4) Distance functions on a 2-D Hilbert curve: accurately assessing the dissimilarity of obfuscated points requires a specialized distance function, and Labani emphasized the need to develop such functions to protect data from adversarial attacks.
Join our upcoming paper reading sessions for deep dives into cutting-edge research, insightful discussions, and expanding your knowledge horizons. 🌐🔬 Stay tuned for more updates on our next session! 📅📣 #researchpaperreadingsession #hilbertcurve #innovation #research #machinelearning
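For readers curious how a Hilbert Curve encoding works in practice, here is the classic bit-twiddling map from a 2-D grid point to its index along the curve. This is the textbook conversion, independent of the session's specific obfuscation scheme:

```python
def xy2d(n, x, y):
    """Map grid point (x, y) to its distance d along the Hilbert curve
    covering an n x n grid (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the recursion sees a canonical orientation.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Consecutive curve indices are always adjacent in the grid, but the converse fails: on the 2x2 grid, `xy2d(2, 0, 0)` is 0 while the spatially adjacent cell `(1, 0)` maps to 3. That asymmetry is why obfuscated points need the specialized distance function discussed in point 4.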
enabling digital services for Student Loan related activities while maintaining the highest security standard, the most compliant personal data protection and customer-centric data-driven innovation.
📢 Excited to share our latest blog post on enhancing clustering representations with positive proximity and cluster dispersion learning! 🚀 Contemporary deep clustering techniques often use contrastive or non-contrastive methods to learn effective representations for clustering tasks, but both approaches have limitations. 🔍 On one hand, contrastive methods can introduce class collision issues that compromise clustering performance. 🚫 On the other hand, non-contrastive techniques avoid class collisions but may produce non-uniform representations that lead to clustering collapse. 🌀
In our new work, we present PIPCDR, a novel end-to-end deep clustering approach that leverages the strengths of both methods while mitigating their drawbacks. 💡 PIPCDR incorporates a positive instance proximity loss that aligns augmented views of instances with their sampled neighbors, enhancing within-cluster compactness by selecting genuinely positive pairs within the embedding space. 📏 Additionally, PIPCDR features a cluster dispersion regularizer that promotes uniformity in the learned representations by maximizing inter-cluster distances while minimizing within-cluster variance. 🏆
The benefits of PIPCDR include well-separated clusters, uniform representations, no class collision issues, and enhanced within-cluster compactness. We extensively evaluated PIPCDR within an end-to-end Majorize-Minimization framework, demonstrating competitive performance on moderate-scale clustering benchmark datasets and new state-of-the-art results on large-scale datasets. 📈
Read our blog post to learn more about PIPCDR and its impact on enhancing clustering representations: [Link to Blog Post](https://bit.ly/3Sj8qEy) Don't miss out on this cutting-edge solution for boosting clustering performance! 💪💼 #deeplearning #clustering #datascience #research #machinelearning
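As an illustration of the two ingredients described above, here is a minimal numpy sketch of an alignment (positive instance proximity) term and a dispersion (inter-cluster separation) term. The function names and exact loss forms are my own simplifications, not the paper's definitions:

```python
import numpy as np

def normalize(z):
    """Project embeddings onto the unit sphere, row-wise."""
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def proximity_loss(z1, z2):
    """Positive-pair alignment: pull two views of each instance together."""
    z1, z2 = normalize(z1), normalize(z2)
    return float(np.mean(np.sum((z1 - z2) ** 2, axis=1)))

def dispersion_loss(centroids):
    """Cluster dispersion: reward well-separated centroids. Returns the
    negative mean pairwise squared distance, so lower = more spread out."""
    c = normalize(centroids)
    d2 = np.sum((c[:, None, :] - c[None, :, :]) ** 2, axis=-1)
    k = c.shape[0]
    return float(-d2.sum() / (k * (k - 1)))

rng = np.random.default_rng(0)
views_a = rng.standard_normal((8, 4))
views_b = views_a + 0.01 * rng.standard_normal((8, 4))  # mild augmentation
aligned = proximity_loss(views_a, views_b)
random_pairs = proximity_loss(views_a, rng.standard_normal((8, 4)))
```

Minimizing a weighted sum of the two terms pulls positive pairs together (compactness) while pushing centroids apart (uniformity), which is the trade-off the post describes.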
📢 Check out our latest blog post "Mitigating federated learning contribution allocation instability through randomized aggregation" on arXiv:2405.08044v1. This paper delves into the critical issue of fair and accurate attribution of contributions in federated learning, presenting a novel approach called FedRandom. Learn how this method not only enhances the overall fairness and stability of the federated learning system, but also significantly improves the accuracy of contribution assessment. Visit the link to dive into the full details: https://bit.ly/3UKSMkX 🚀 #FederatedLearning #AIResearch #DataPrivacy #MachineLearning
It’s a wrap! First time attending #ADIPEC; my impression: “This is huge, and I got lost.” This was my second publication for the ADIPEC Exhibition and Conference: the first, in 2019, was a literature review on Edge Architecture presented by my colleague and co-author Tony Jovin, while this time I myself presented a topic on a Deep Learning implementation for ROP prediction. I also attended a few technical presentations on machine learning in various aspects of drilling. See you next year inshaallah with more interesting topics, and hopefully with several publications to submit. Links to the papers: 1. Literature review on Edge Analytics architecture: https://lnkd.in/dkRkRNT7 2. Comparison of ROP prediction with Deep Learning approaches: https://lnkd.in/dVrJvzNR
Associate Professor of Computer Science at National University of Singapore, Director of AI Research at AI Singapore, Deputy Director at NUS AI Institute
Fresh out of the oven! Our research group and collaborators have contributed 4 chapters to the Federated Learning: Theory and Practice book (https://lnkd.in/guuA57eR): fairness (chapter 8), data valuation (chapter 15), and incentives (chapter 16) in #FederatedLearning, plus federated sequential decision making (chapter 14). They serve as good introductory readings on these topics. More recent works on these topics can be found on our research group webpage (https://lnkd.in/gmP5bMBN). A shout-out to Xiaoqiang Lin, Xinyi Xu, Zhaoxuan Wu, Rachael Sim, See-Kiong Ng, Chuan Sheng Foo, Patrick Jaillet, Nghia Hoang, Zhongxiang Dai, Flint F., Cheston Tan, Yao Shu, Lucas Agussurja, Sebastian Tay, 张叶红, and Bryan Kian Hsiang Low, who contributed to these chapters, and to the editors Lam M. Nguyen, Nghia Hoang, and Pin-Yu Chen for making it happen!