Publications
- Accelerated Distributed Stochastic Non-Convex Optimization over Time-Varying Directed Networks
- Abstract: We study non-convex optimization problems where the data is distributed across nodes of a time-varying directed network; this describes dynamic settings in which the communication between network nodes is affected by delays or link failures. The network nodes, which can access only their local objectives and query a stochastic first-order oracle for the gradient estimates, collaborate by exchanging messages with their neighbors to minimize a global objective function. We propose an algorithm for non-convex optimization problems in such settings that leverages stochastic gradient descent with momentum and gradient tracking. We further prove, by analyzing dynamic network systems with gradient acceleration, that the oracle complexity of the proposed algorithm is O(1/ε^1.5). The results demonstrate superior performance of the proposed framework compared to state-of-the-art related methods used in a variety of machine learning tasks. (A toy sketch of the momentum-plus-gradient-tracking update follows the citation below.)
- Y. Chen, A. Hashemi and H. Vikalo, “Accelerated Distributed Stochastic Non-Convex Optimization over Time-Varying Directed Networks,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5, doi: 10.1109/ICASSP49357.2023.10094584.
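- Illustrative sketch: a minimal toy version of the momentum-plus-gradient-tracking update described in the abstract above, on synthetic local least-squares objectives. The random column-stochastic mixing, the heavy-ball momentum form, and the step sizes are all assumptions made here for exposition; in particular, the sketch omits the push-sum correction that consensus over directed graphs typically requires, so it is not the paper's algorithm.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  n, d, m = 5, 3, 20                        # nodes, dimension, samples per node
  A = rng.normal(size=(n, m, d))            # node i's local data matrix A[i]
  b = rng.normal(size=(n, m))               # node i's local targets b[i]

  def stoch_grad(i, x):
      """Stochastic gradient of node i's local least-squares objective at x."""
      j = rng.integers(m)                   # query the first-order oracle on one sample
      return A[i, j] * (A[i, j] @ x - b[i, j])

  def mixing_matrix():
      """Random column-stochastic weights: a fresh directed graph each round."""
      W = rng.uniform(size=(n, n)) * (rng.uniform(size=(n, n)) < 0.5)
      np.fill_diagonal(W, 1.0)              # every node keeps a self-loop
      return W / W.sum(axis=0, keepdims=True)

  x = np.zeros((n, d))                                   # local models
  g = np.stack([stoch_grad(i, x[i]) for i in range(n)])  # local gradients
  y = g.copy()                                           # gradient trackers
  v = np.zeros_like(x)                                   # momentum buffers
  lr, beta = 0.05, 0.8                                   # illustrative tuning

  for t in range(200):
      W = mixing_matrix()                   # time-varying directed topology
      v = beta * v + y                      # momentum on the tracked gradient
      x = W @ x - lr * v                    # mix with in-neighbors, then descend
      g_new = np.stack([stoch_grad(i, x[i]) for i in range(n)])
      y = W @ y + (g_new - g)               # tracking: mix, add local gradient change
      g = g_new
  ```

  The structural point is that the tracker y lets each node follow an estimate of the global gradient even though it only sees local stochastic gradients, and momentum is applied to that estimate rather than to the raw local gradient.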
- Communication-Efficient Variance-Reduced Decentralized Stochastic Optimization Over Time-Varying Directed Graphs
- Abstract: In this article, we consider the problem of decentralized optimization over time-varying directed networks. The network nodes can access only their local objectives, and aim to collaboratively minimize a global function by exchanging messages with their neighbors. Leveraging sparsification, gradient tracking, and variance reduction, we propose a novel communication-efficient decentralized optimization scheme that is suitable for resource-constrained time-varying directed networks. We prove that in the case of smooth and strongly convex objective functions, the proposed scheme achieves an accelerated linear convergence rate. To our knowledge, this is the first decentralized optimization framework for time-varying directed networks that achieves such a convergence rate and applies to settings requiring sparsified communication. Experimental results on both synthetic and real datasets verify the theoretical results and demonstrate the efficacy of the proposed scheme. (A sketch of the sparsification and variance-reduction primitives follows the citation below.)
- Y. Chen, A. Hashemi and H. Vikalo, “Communication-Efficient Variance-Reduced Decentralized Stochastic Optimization Over Time-Varying Directed Graphs,” in IEEE Transactions on Automatic Control, vol. 67, no. 12, pp. 6583-6594, Dec. 2022, doi: 10.1109/TAC.2021.3133372.
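- Illustrative sketch: the two communication-saving primitives named in the abstract above, sparsification and variance reduction, shown in isolation on toy least-squares data. The top-k rule and the SVRG-style estimator below are assumed stand-ins for exposition; the paper's exact compressor and estimator may differ.

  ```python
  import numpy as np

  rng = np.random.default_rng(1)
  m, d = 50, 10
  A = rng.normal(size=(m, d))               # one node's local data
  b = rng.normal(size=m)

  def top_k(vec, k):
      """Keep the k largest-magnitude entries; zero the rest. Transmitting
      only these k (index, value) pairs is what makes each message cheap."""
      out = np.zeros_like(vec)
      idx = np.argpartition(np.abs(vec), -k)[-k:]
      out[idx] = vec[idx]
      return out

  def grad(x, j):
      """Gradient of the j-th local least-squares term at x."""
      return A[j] * (A[j] @ x - b[j])

  def vr_gradient(x, x_ref, mu_ref):
      """SVRG-style variance-reduced estimator: unbiased for the full local
      gradient, with variance vanishing as x approaches the snapshot x_ref."""
      j = rng.integers(m)
      return grad(x, j) - grad(x_ref, j) + mu_ref

  x_ref = rng.normal(size=d)                # snapshot iterate
  mu_ref = A.T @ (A @ x_ref - b) / m        # full local gradient at the snapshot
  x = x_ref + 0.1 * rng.normal(size=d)
  msg = top_k(vr_gradient(x, x_ref, mu_ref), k=3)   # what a node would send
  ```

  A node transmits only the surviving k coordinates of msg, while variance reduction keeps the noise in those messages shrinking as the iterates approach the snapshot, which is what makes a linear rate plausible despite stochastic gradients.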
- Decentralized Optimization on Time-Varying Directed Graphs Under Communication Constraints
- Abstract: We consider the problem of decentralized optimization where a collection of agents, each having access to a local cost function, communicate over a time-varying directed network and aim to minimize the sum of those functions. In practice, the amount of information that can be exchanged between the agents is limited due to communication constraints. We propose a communication-efficient algorithm for decentralized convex optimization that relies on sparsification of local updates exchanged between neighboring agents in the network. In directed networks, message sparsification alters column-stochasticity – a property that plays an important role in establishing convergence of decentralized learning tasks. We propose a decentralized optimization scheme that relies on local modification of mixing matrices, and show that it achieves an O(ln T/√T) convergence rate in the considered settings. Experiments validate the theoretical results and demonstrate the efficacy of the proposed algorithm. (A numeric illustration of the column-stochasticity issue and a local repair follows the citation below.)
- Y. Chen, A. Hashemi and H. Vikalo, “Decentralized Optimization on Time-Varying Directed Graphs Under Communication Constraints,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 2021, pp. 3670-3674, doi: 10.1109/ICASSP39728.2021.9415052.
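- Illustrative sketch: a small numeric picture of the column-stochasticity problem the abstract above mentions, together with one local repair. The Bernoulli link mask (standing in for coordinate-wise sparsification) and the self-loop correction are assumptions made here; they are not necessarily the paper's exact mixing-matrix modification.

  ```python
  import numpy as np

  rng = np.random.default_rng(2)
  n = 4

  # A column-stochastic mixing matrix for a directed graph: entry W[i, j]
  # is the weight node j assigns to the message it sends to node i.
  W = rng.uniform(size=(n, n))
  W /= W.sum(axis=0, keepdims=True)
  assert np.allclose(W.sum(axis=0), 1.0)

  # Sparsified communication: some links drop their message this round.
  mask = rng.uniform(size=(n, n)) < 0.6
  np.fill_diagonal(mask, True)              # self-loops always survive
  W_sparse = W * mask
  print(W_sparse.sum(axis=0))               # columns no longer sum to 1

  # Local fix in the spirit of the abstract: each sender folds the weight
  # of its dropped messages back into its self-loop, restoring
  # column-stochasticity using only information it holds locally.
  W_fixed = W_sparse.copy()
  W_fixed[np.arange(n), np.arange(n)] += 1.0 - W_sparse.sum(axis=0)
  assert np.allclose(W_fixed.sum(axis=0), 1.0)
  ```

  Because column j consists exactly of sender j's outgoing weights, the correction needs no information beyond what node j already has, which is why it remains a purely local modification of the mixing matrix.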
- Federated Learning with Infrastructure Resource Limitations in Vehicular Object Detection
- Abstract: Object detection plays an essential role in many vehicular applications such as Advanced Driver Assistance Systems (ADAS), Dynamic Map, and Obstacle Detection. However, object detection under the traditional centralized machine learning framework, in which raw images are transmitted to a central server, strains infrastructure resources and raises privacy concerns about leakage of sensitive image content. We introduce Federated Learning, a practical framework that enables machine learning to be conducted in a distributed manner and potentially addresses these issues by avoiding raw data transmission. However, Federated Learning distributes training across clients and therefore relies heavily on client communication in Vehicular Networks, and in the real world not all clients have the same resources. We therefore study communication and client resource limitations in a Vehicular Federated Learning framework in which clients hold different amounts of local images and compute resources, propose an algorithm to handle these limitations, and design experiments to evaluate it. The experimental results show the efficacy of the proposed algorithm, which maintains object detection precision while improving training time by 66% and reducing communication cost by 35%. (A toy sketch of federated aggregation under heterogeneous clients follows the citation below.)
- Y. Chen, C. Wang and B. Kim, “Federated Learning with Infrastructure Resource Limitations in Vehicular Object Detection,” 2021 IEEE/ACM Symposium on Edge Computing (SEC), San Jose, CA, USA, 2021, pp. 366-370, doi: 10.1145/3453142.3491412.
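- Illustrative sketch: a toy FedAvg-style round with heterogeneous clients, as a rough picture of the resource-limitation setting above. Clients hold different numbers of images and run different amounts of local work, and the server aggregates with data-size weights. The local_train stub, the per-client step budgeting rule, and the weighting are all assumptions for illustration, not the paper's algorithm.

  ```python
  import numpy as np

  rng = np.random.default_rng(3)
  d = 8                                              # toy model dimension
  n_clients = 5
  n_images = rng.integers(50, 500, size=n_clients)   # unequal local data
  speed = rng.uniform(0.5, 2.0, size=n_clients)      # unequal compute

  global_model = np.zeros(d)

  def local_train(model, steps):
      """Placeholder for local object-detector training: returns the model
      after `steps` noisy gradient steps on the client's own images."""
      return model + 0.01 * steps * rng.normal(size=model.shape)

  for rnd in range(10):
      updates, weights = [], []
      for c in range(n_clients):
          # Budget local work by client capability instead of a fixed count,
          # so resource-poor clients do not stall the round.
          steps = int(10 * speed[c])
          updates.append(local_train(global_model.copy(), steps))
          weights.append(n_images[c])                # weight by data size
      # FedAvg-style aggregation: data-size-weighted average of client models.
      w = np.asarray(weights, dtype=float)
      global_model = np.average(np.stack(updates), axis=0, weights=w)
  ```

  Only model parameters cross the network in each round, never raw images, which is the privacy and bandwidth argument the abstract makes for federating the training.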