Aryan Mokhtari
2020 – today
- 2024
- [j26] Qiujiang Jin, Tongzheng Ren, Nhat Ho, Aryan Mokhtari: Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models. Trans. Mach. Learn. Res. 2024 (2024)
- [c76] Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher: Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate. AISTATS 2024: 4411-4419
- [c75] Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai: Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. ICML 2024
- [i71] Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher: Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate. CoRR abs/2401.03058 (2024)
- [i70] Jincheng Cao, Ruichen Jiang, Erfan Yazdandoost Hamedani, Aryan Mokhtari: An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem. CoRR abs/2402.08097 (2024)
- [i69] Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai: In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness. CoRR abs/2402.11639 (2024)
- [i68] Ruichen Jiang, Michal Derezinski, Aryan Mokhtari: Stochastic Newton Proximal Extragradient Method. CoRR abs/2406.01478 (2024)
- [i67] Ruichen Jiang, Ali Kavis, Qiujiang Jin, Sujay Sanghavi, Aryan Mokhtari: Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization. CoRR abs/2406.02016 (2024)
- [i66] Devyani Maladkar, Ruichen Jiang, Aryan Mokhtari: Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions. CoRR abs/2406.04592 (2024)
- [i65] Ruichen Jiang, Aryan Mokhtari: Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic Convergence. CoRR abs/2410.02626 (2024)
- [i64] Bingcong Li, Liang Zhang, Aryan Mokhtari, Niao He: On the Crucial Role of Initialization for Matrix Factorization. CoRR abs/2410.18965 (2024)
- [i63] Jacob L. Block, Sundararajan Srinivasan, Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Meta-Learning Adaptable Foundation Models. CoRR abs/2410.22264 (2024)
- 2023
- [j25] Qiujiang Jin, Aryan Mokhtari: Non-asymptotic superlinear convergence of standard quasi-Newton methods. Math. Program. 200(1): 425-473 (2023)
- [j24] Mohammad Fereydounian, Aryan Mokhtari, Ramtin Pedarsani, Hamed Hassani: Provably Private Distributed Averaging Consensus: An Information-Theoretic Approach. IEEE Trans. Inf. Theory 69(11): 7317-7335 (2023)
- [j23] Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari: Straggler-Resilient Personalized Federated Learning. Trans. Mach. Learn. Res. 2023 (2023)
- [c74] Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani: A Conditional Gradient-based Method for Simple Bilevel Optimization with Convex Lower-level Problem. AISTATS 2023: 10305-10323
- [c73] Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai: InfoNCE Loss Provably Learns Cluster-Preserving Representations. COLT 2023: 1914-1961
- [c72] Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari: Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence. COLT 2023: 1962-1992
- [c71] Jerry Gu, Liam Collins, Debashri Roy, Aryan Mokhtari, Sanjay Shakkottai, Kaushik R. Chowdhury: Meta-Learning for Image-Guided Millimeter-Wave Beam Selection in Unseen Environments. ICASSP 2023: 1-5
- [c70] Parikshit Hegde, Gustavo de Veciana, Aryan Mokhtari: Network Adaptive Federated Learning: Congestion and Lossy Compression. INFOCOM 2023: 1-10
- [c69] Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari: Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem. NeurIPS 2023
- [c68] Ruichen Jiang, Aryan Mokhtari: Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization. NeurIPS 2023
- [c67] Nived Rajaraman, Devvrit, Aryan Mokhtari, Kannan Ramchandran: Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing. NeurIPS 2023
- [i62] Parikshit Hegde, Gustavo de Veciana, Aryan Mokhtari: Network Adaptive Federated Learning: Congestion and Lossy Compression. CoRR abs/2301.04430 (2023)
- [i61] Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai: InfoNCE Loss Provably Learns Cluster-Preserving Representations. CoRR abs/2302.07920 (2023)
- [i60] Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari: Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence. CoRR abs/2302.08580 (2023)
- [i59] Nived Rajaraman, Devvrit, Aryan Mokhtari, Kannan Ramchandran: Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing and Neural Networks with Quadratic Activations. CoRR abs/2303.11453 (2023)
- [i58] Ruichen Jiang, Aryan Mokhtari: Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization. CoRR abs/2306.02212 (2023)
- [i57] Zhan Gao, Aryan Mokhtari, Alec Koppel: Limited-Memory Greedy Quasi-Newton Method with Non-asymptotic Superlinear Convergence Rate. CoRR abs/2306.15444 (2023)
- [i56] Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai: Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. CoRR abs/2307.06887 (2023)
- [i55] Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari: Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem. CoRR abs/2308.07536 (2023)
- 2022
- [j22] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity. IEEE J. Sel. Areas Inf. Theory 3(2): 197-205 (2022)
- [c66] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Minimax Optimization: The Case of Convex-Submodular. AISTATS 2022: 3556-3580
- [c65] Sen Lin, Ming Shi, Anish Arora, Raef Bassily, Elisa Bertino, Constantine Caramanis, Kaushik R. Chowdhury, Eylem Ekici, Atilla Eryilmaz, Stratis Ioannidis, Nan Jiang, Gauri Joshi, Jim Kurose, Yingbin Liang, Zhiqiang Lin, Jia Liu, Mingyan Liu, Tommaso Melodia, Aryan Mokhtari, Rob Nowak, Sewoong Oh, Srini Parthasarathy, Chunyi Peng, Hulya Seferoglu, Ness B. Shroff, Sanjay Shakkottai, Kannan Srinivasan, Ameet Talwalkar, Aylin Yener, Lei Ying: Leveraging Synergies Between AI and Networking to Build Next Generation Edge Networks. CIC 2022: 16-25
- [c64] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: How Does the Task Landscape Affect MAML Performance? CoLLAs 2022: 23-59
- [c63] Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. COLT 2022: 313-355
- [c62] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Adaptive Node Participation for Straggler-Resilient Federated Learning. ICASSP 2022: 8762-8766
- [c61] Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai: MAML and ANIL Provably Learn Representations. ICML 2022: 4238-4310
- [c60] Qiujiang Jin, Alec Koppel, Ketan Rajawat, Aryan Mokhtari: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood. ICML 2022: 10228-10250
- [c59] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning. NeurIPS 2022
- [c58] Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu: Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems. UAI 2022: 2256-2266
- [i54] Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai: MAML and ANIL Provably Learn Representations. CoRR abs/2202.03483 (2022)
- [i53] Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. CoRR abs/2202.05791 (2022)
- [i52] Mohammad Fereydounian, Aryan Mokhtari, Ramtin Pedarsani, Hamed Hassani: Provably Private Distributed Averaging Consensus: An Information-Theoretic Approach. CoRR abs/2202.09398 (2022)
- [i51] Ruichen Jiang, Aryan Mokhtari: Generalized Optimistic Methods for Convex-Concave Saddle Point Problems. CoRR abs/2202.09674 (2022)
- [i50] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning. CoRR abs/2205.13692 (2022)
- [i49] Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari: Straggler-Resilient Personalized Federated Learning. CoRR abs/2206.02078 (2022)
- [i48] Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani: Generalized Frank-Wolfe Algorithm for Bilevel Optimization. CoRR abs/2206.08868 (2022)
- [i47] Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu: Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems. CoRR abs/2209.01143 (2022)
- 2021
- [c57] Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi: Federated Learning with Compression: Unified Analysis and Sharp Guarantees. AISTATS 2021: 2350-2358
- [c56] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: Exploiting Shared Representations for Personalized Federated Learning. ICML 2021: 2089-2099
- [c55] Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning. NeurIPS 2021: 3096-3107
- [c54] Qiujiang Jin, Aryan Mokhtari: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach. NeurIPS 2021: 3824-3835
- [c53] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks. NeurIPS 2021: 5469-5480
- [i46] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks. CoRR abs/2102.03832 (2021)
- [i45] Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai: Exploiting Shared Representations for Personalized Federated Learning. CoRR abs/2102.07078 (2021)
- [i44] Qiujiang Jin, Aryan Mokhtari: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach. CoRR abs/2106.05445 (2021)
- [i43] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Minimax Optimization: The Case of Convex-Submodular. CoRR abs/2111.01262 (2021)
- 2020
- [j21] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. J. Mach. Learn. Res. 21: 105:1-105:49 (2020)
- [j20] Aryan Mokhtari, Alec Koppel, Martin Takác, Alejandro Ribeiro: A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning. J. Mach. Learn. Res. 21: 120:1-120:51 (2020)
- [j19] Aryan Mokhtari, Alejandro Ribeiro: Stochastic Quasi-Newton Methods. Proc. IEEE 108(11): 1906-1922 (2020)
- [j18] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems. SIAM J. Optim. 30(4): 3230-3251 (2020)
- [j17] Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen: Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization. SIAM J. Optim. 30(4): 3315-3344 (2020)
- [j16] Aryan Mokhtari, Alec Koppel: High-Dimensional Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation. IEEE Trans. Signal Process. 68: 6287-6302 (2020)
- [c52] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. AISTATS 2020: 1082-1092
- [c51] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach. AISTATS 2020: 1497-1507
- [c50] Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate. AISTATS 2020: 1965-1976
- [c49] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani: FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization. AISTATS 2020: 2021-2031
- [c48] Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takác: Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy. AISTATS 2020: 2634-2644
- [c47] Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free. AISTATS 2020: 3696-3706
- [c46] Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: One Sample Stochastic Frank-Wolfe. AISTATS 2020: 4012-4023
- [c45] Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Stochastic Learning over Directed Graphs. ICML 2020: 9324-9333
- [c44] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach. NeurIPS 2020
- [c43] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Submodular Meta-Learning. NeurIPS 2020
- [c42] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Task-Robust Model-Agnostic Meta-Learning. NeurIPS 2020
- [c41] Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari: Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking. NeurIPS 2020
- [i42] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Distribution-Agnostic Model-Agnostic Meta-Learning. CoRR abs/2002.04766 (2020)
- [i41] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Provably Convergent Policy Gradient Methods for Model-Agnostic Meta-Reinforcement Learning. CoRR abs/2002.05135 (2020)
- [i40] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: Personalized Federated Learning: A Meta-Learning Approach. CoRR abs/2002.07948 (2020)
- [i39] Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Push-sum for Gossip and Decentralized Optimization over Directed Graphs. CoRR abs/2002.09964 (2020)
- [i38] Qiujiang Jin, Aryan Mokhtari: Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods. CoRR abs/2003.13607 (2020)
- [i37] Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani: Safe Learning under Uncertain Objectives and Constraints. CoRR abs/2006.13326 (2020)
- [i36] Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi: Federated Learning with Compression: Unified Analysis and Sharp Guarantees. CoRR abs/2007.01154 (2020)
- [i35] Arman Adibi, Aryan Mokhtari, Hamed Hassani: Submodular Meta-Learning. CoRR abs/2007.05852 (2020)
- [i34] Liam Collins, Aryan Mokhtari, Sanjay Shakkottai: Why Does MAML Outperform ERM? An Optimization Perspective. CoRR abs/2010.14672 (2020)
- [i33] Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani: Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity. CoRR abs/2012.14453 (2020)
2010 – 2019
- 2019
- [j15] Santiago Paternain, Aryan Mokhtari, Alejandro Ribeiro: A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points. SIAM J. Optim. 29(1): 343-368 (2019)
- [j14] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: An Exact Quantized Decentralized Gradient Descent Algorithm. IEEE Trans. Signal Process. 67(19): 4934-4947 (2019)
- [j13] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization. IEEE Trans. Signal Process. 67(23): 5983-5997 (2019)
- [c40] Aryan Mokhtari, Asuman E. Ozdaglar, Ali Jadbabaie: Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods. AISTATS 2019: 2485-2494
- [c39] Jingzhao Zhang, César A. Uribe, Aryan Mokhtari, Ali Jadbabaie: Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE. ACC 2019: 3408-3413
- [c38] Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Robust and Communication-Efficient Collaborative Learning. NeurIPS 2019: 8386-8397
- [c37] Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match. NeurIPS 2019: 13066-13076
- [i32] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach. CoRR abs/1901.08511 (2019)
- [i31] Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization. CoRR abs/1902.06332 (2019)
- [i30] Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen: Stochastic Conditional Gradient++. CoRR abs/1902.06992 (2019)
- [i29] Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil: Proximal Point Approximations Achieving a Convergence Rate of O(1/k) for Smooth Convex-Concave Saddle Point Problems: Optimistic Gradient and Extra-gradient Methods. CoRR abs/1906.01115 (2019)
- [i28] Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Robust and Communication-Efficient Collaborative Learning. CoRR abs/1907.10595 (2019)
- [i27] Alireza Fallah, Aryan Mokhtari, Asuman E. Ozdaglar: On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. CoRR abs/1908.10400 (2019)
- [i26] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani: FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization. CoRR abs/1909.13014 (2019)
- [i25] Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi: One Sample Stochastic Frank-Wolfe. CoRR abs/1910.04322 (2019)
- [i24] Weijie Liu, Aryan Mokhtari, Asuman E. Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng: A Decentralized Proximal Point-type Method for Saddle Point Problems. CoRR abs/1910.14380 (2019)
- 2018
- [j12] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. SIAM J. Optim. 28(2): 1420-1447 (2018)
- [j11] Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro: IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate. SIAM J. Optim. 28(2): 1670-1698 (2018)
- [c36] Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro: Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method. AISTATS 2018: 1447-1455
- [c35] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap. AISTATS 2018: 1886-1895
- [c34] Santiago Paternain, Aryan Mokhtari, Alejandro Ribeiro: A Newton Method for Faster Navigation in Cluttered Environments. CDC 2018: 4084-4090
- [c33] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Consensus Optimization. CDC 2018: 5838-5843
- [c32] Alec Koppel, Aryan Mokhtari, Alejandro Ribeiro: Parallel Stochastic Successive Convex Approximation Method for Large-Scale Dictionary Learning. ICASSP 2018: 2771-2775
- [c31] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings. ICML 2018: 3613-3622
- [c30] Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication. ICML 2018: 4631-4640
- [c29] Aryan Mokhtari, Asuman E. Ozdaglar, Ali Jadbabaie: Escaping Saddle Points in Constrained Optimization. NeurIPS 2018: 3633-3643
- [c28] Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie: Direct Runge-Kutta Discretization Achieves Acceleration. NeurIPS 2018: 3904-3913
- [i23] Aryan Mokhtari, Hamed Hassani, Amin Karbasi: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. CoRR abs/1804.09554 (2018)
- [i22] Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie: Direct Runge-Kutta Discretization Achieves Acceleration. CoRR abs/1805.00521 (2018)
- [i21] Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication. CoRR abs/1805.09969 (2018)
- [i20] Amirhossein Reisizadeh, Aryan Mokhtari, S. Hamed Hassani, Ramtin Pedarsani: Quantized Decentralized Consensus Optimization. CoRR abs/1806.11536 (2018)
- [i19]