Murat A. Erdogdu
2020 – today
2024
- [j5] Ye He, Tyler Farghly, Krishnakumar Balasubramanian, Murat A. Erdogdu: Mean-Square Analysis of Discretized Itô Diffusions for Heavy-tailed Sampling. J. Mach. Learn. Res. 25: 43:1-43:44 (2024)
- [j4] Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu: An Analysis of Transformed Unadjusted Langevin Algorithm for Heavy-Tailed Sampling. IEEE Trans. Inf. Theory 70(1): 571-593 (2024)
- [c41] Ayoub El Hanchi, Chris J. Maddison, Murat A. Erdogdu: Minimax Linear Regression under the Quantile Risk. COLT 2024: 1516-1572
- [c40] Yunbum Kook, Matthew Shunshi Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li: Sampling from the Mean-Field Stationary Distribution. COLT 2024: 3099-3136
- [c39] Nuri Mert Vural, Murat A. Erdogdu: Pruning is Optimal for Learning Sparse Features in High-Dimensions. COLT 2024: 4787-4861
- [i27] Yunbum Kook, Matthew Shunshi Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan Bill Li: Sampling from the Mean-Field Stationary Distribution. CoRR abs/2402.07355 (2024)
- [i26] Nuri Mert Vural, Murat A. Erdogdu: Pruning is Optimal for Learning Sparse Features in High-Dimensions. CoRR abs/2406.08658 (2024)
- [i25] Alireza Mousavi Hosseini, Denny Wu, Murat A. Erdogdu: Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics. CoRR abs/2408.07254 (2024)
2023
- [c38] Alireza Mousavi Hosseini, Tyler K. Farghly, Ye He, Krishna Balasubramanian, Murat A. Erdogdu: Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality. COLT 2023: 1-35
- [c37] Matthew Shunshi Zhang, Sinho Chewi, Mufan (Bill) Li, Krishna Balasubramanian, Murat A. Erdogdu: Improved Discretization Analysis for Underdamped Langevin Monte Carlo. COLT 2023: 36-71
- [c36] Alireza Mousavi Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, Murat A. Erdogdu: Neural Networks Efficiently Learn Low-Dimensional Representations with SGD. ICLR 2023
- [c35] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu: Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective. NeurIPS 2023
- [c34] Ayoub El Hanchi, Murat A. Erdogdu: Optimal Excess Risk Bounds for Empirical Risk Minimization on p-Norm Linear Regression. NeurIPS 2023
- [c33] Alireza Mousavi Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu: Gradient-Based Feature Learning under Structured Data. NeurIPS 2023
- [c32] Tyler Kastner, Murat A. Erdogdu, Amir-massoud Farahmand: Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning. NeurIPS 2023
- [i24] Ye He, Tyler Farghly, Krishnakumar Balasubramanian, Murat A. Erdogdu: Mean-Square Analysis of Discretized Itô Diffusions for Heavy-tailed Sampling. CoRR abs/2303.00570 (2023)
- [i23] Tyler Kastner, Murat A. Erdogdu, Amir-massoud Farahmand: Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning. CoRR abs/2307.01708 (2023)
- [i22] Alireza Mousavi Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu: Gradient-Based Feature Learning under Structured Data. CoRR abs/2309.03843 (2023)
- [i21] Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot: Beyond Labeling Oracles: What does it mean to steal ML models? CoRR abs/2310.01959 (2023)
2022
- [j3] Murat A. Erdogdu, Asuman E. Ozdaglar, Pablo A. Parrilo, Nuri Denizcan Vanli: Convergence rate of block-coordinate maximization Burer-Monteiro method for solving large SDPs. Math. Program. 195(1): 243-281 (2022)
- [c31] Matthew Shunshi Zhang, Murat A. Erdogdu, Animesh Garg: Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings. AAAI 2022: 9066-9073
- [c30] Murat A. Erdogdu, Rasa Hosseinzadeh, Shunshi Zhang: Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence. AISTATS 2022: 8151-8175
- [c29] Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li, Ruoqi Shen, Shunshi Zhang: Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev. COLT 2022: 1-2
- [c28] Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu: Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance. COLT 2022: 65-102
- [c27] Krishna Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Shunshi Zhang: Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo. COLT 2022: 2896-2923
- [c26] Jimmy Ba, Murat A. Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, Tianzong Zhang: Understanding the Variance Collapse of SVGD in High Dimensions. ICLR 2022
- [c25] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. NeurIPS 2022
- [c24] Sejun Park, Umut Simsekli, Murat A. Erdogdu: Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers. NeurIPS 2022
- [i20] Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu: Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance. CoRR abs/2202.11632 (2022)
- [i19] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. CoRR abs/2205.01445 (2022)
- [i18] Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot: p-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations. CoRR abs/2207.12545 (2022)
- [i17] Sejun Park, Umut Simsekli, Murat A. Erdogdu: Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers. CoRR abs/2209.08951 (2022)
- [i16] Alireza Mousavi Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, Murat A. Erdogdu: Neural Networks Efficiently Learn Low-Dimensional Representations with SGD. CoRR abs/2209.14863 (2022)
2021
- [c23] Murat A. Erdogdu, Rasa Hosseinzadeh: On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness. COLT 2021: 1776-1822
- [c22] Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu: An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias. NeurIPS 2021: 4234-4248
- [c21] Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu: On Empirical Risk Minimization with Dependent and Heavy-Tailed Data. NeurIPS 2021: 8913-8926
- [c20] Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson: Manipulating SGD with Data Ordering Attacks. NeurIPS 2021: 18021-18032
- [c19] Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. NeurIPS 2021: 18774-18788
- [c18] Hongjian Wang, Mert Gürbüzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A. Erdogdu: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance. NeurIPS 2021: 18866-18877
- [c17] Melih Barsbey, Milad Sefidgaran, Murat A. Erdogdu, Gaël Richard, Umut Simsekli: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks. NeurIPS 2021: 29364-29378
- [i15] Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson: Manipulating SGD with Data Ordering Attacks. CoRR abs/2104.09667 (2021)
- [i14] Melih Barsbey, Milad Sefidgaran, Murat A. Erdogdu, Gaël Richard, Umut Simsekli: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks. CoRR abs/2106.03795 (2021)
- [i13] Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. CoRR abs/2106.04881 (2021)
- [i12] Matthew Shunshi Zhang, Murat A. Erdogdu, Animesh Garg: Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings. CoRR abs/2111.00185 (2021)
2020
- [c16] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang: Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint. ICLR 2020
- [c15] Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu: On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method. NeurIPS 2020
- [c14] Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu: Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks. NeurIPS 2020
- [i11] Murat A. Erdogdu, Rasa Hosseinzadeh: On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness. CoRR abs/2005.13097 (2020)
- [i10] Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu: An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias. CoRR abs/2006.07904 (2020)
- [i9] Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu: Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks. CoRR abs/2006.09313 (2020)
- [i8] Murat A. Erdogdu, Rasa Hosseinzadeh: A Brief Note on the Convergence of Langevin Monte Carlo in Chi-Square Divergence. CoRR abs/2007.11612 (2020)
- [i7] Mufan (Bill) Li, Murat A. Erdogdu: Riemannian Langevin Algorithm for Solving Semidefinite Programs. CoRR abs/2010.11176 (2020)
- [i6] Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu: On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method. CoRR abs/2011.03176 (2020)
2010 – 2019
2019
- [j2] Murat A. Erdogdu, Mohsen Bayati, Lee H. Dicker: Scalable Approximations for Generalized Linear Problems. J. Mach. Learn. Res. 20: 7:1-7:45 (2019)
- [c13] Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu: Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT. COLT 2019: 115-137
- [i5] Xuechen Li, Denny Wu, Lester Mackey, Murat A. Erdogdu: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond. CoRR abs/1906.07868 (2019)
2018
- [c12] Murat A. Erdogdu, Lester Mackey, Ohad Shamir: Global Non-convex Optimization with Discretized Diffusions. NeurIPS 2018: 9694-9703
- [i4] Murat A. Erdogdu, Asuman E. Ozdaglar, Pablo A. Parrilo, Nuri Denizcan Vanli: Convergence Rate of Block-Coordinate Maximization Burer-Monteiro Method for Solving Large SDPs. CoRR abs/1807.04428 (2018)
- [i3] Murat A. Erdogdu, Lester Mackey, Ohad Shamir: Global Non-convex Optimization with Discretized Diffusions. CoRR abs/1810.12361 (2018)
2017
- [c11] Murat A. Erdogdu: Generalized Hessian approximations via Stein's lemma for constrained minimization. ITA 2017: 1-8
- [c10] Murat A. Erdogdu, Yash Deshpande, Andrea Montanari: Inference in Graphical Models via Semidefinite Programming Hierarchies. NIPS 2017: 417-425
- [c9] Hakan Inan, Murat A. Erdogdu, Mark J. Schnitzer: Robust Estimation of Neural Signals in Calcium Imaging. NIPS 2017: 2901-2910
- [i2] Murat A. Erdogdu, Yash Deshpande, Andrea Montanari: Inference in Graphical Models via Semidefinite Programming Hierarchies. CoRR abs/1709.06525 (2017)
2016
- [j1] Murat A. Erdogdu: Newton-Stein Method: An Optimization Method for GLMs via Stein's Lemma. J. Mach. Learn. Res. 17: 216:1-216:52 (2016)
- [c8] Lee H. Dicker, Murat A. Erdogdu: Maximum Likelihood for Variance Estimation in High-Dimensional Linear Models. AISTATS 2016: 159-167
- [c7] Murat A. Erdogdu, Lee H. Dicker, Mohsen Bayati: Scaled Least Squares Estimator for GLMs in Large-Scale Problems. NIPS 2016: 3324-3332
2015
- [c6] Murat A. Erdogdu, Nadia Fawaz, Andrea Montanari: Privacy-Utility Trade-Off for Time-Series with Application to Smart-Meter Data. AAAI Workshop: Computational Sustainability 2015
- [c5] Murat A. Erdogdu, Nadia Fawaz: Privacy-utility trade-off under continual observation. ISIT 2015: 1801-1805
- [c4] Qingyuan Zhao, Murat A. Erdogdu, Hera Y. He, Anand Rajaraman, Jure Leskovec: SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity. KDD 2015: 1513-1522
- [c3] Murat A. Erdogdu: Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma. NIPS 2015: 1216-1224
- [c2] Murat A. Erdogdu, Andrea Montanari: Convergence rates of sub-sampled Newton methods. NIPS 2015: 3052-3060
- [i1] Qingyuan Zhao, Murat A. Erdogdu, Hera Y. He, Anand Rajaraman, Jure Leskovec: SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity. CoRR abs/1506.02594 (2015)
2013
- [c1] Mohsen Bayati, Murat A. Erdogdu, Andrea Montanari: Estimating LASSO Risk and Noise Level. NIPS 2013: 944-952
last updated on 2024-10-02 21:36 CEST by the dblp team
all metadata released as open data under CC0 1.0 license