Jason D. Lee
Person information

- affiliation: Stanford University, Institute of Computational and Mathematical Engineering
2020 – today
- 2023
- [i101] Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon S. Du, Jason D. Lee: Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing. CoRR abs/2301.11500 (2023)
- [i100] Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, Dimitris S. Papailiopoulos: Looped Transformers as Programmable Computers. CoRR abs/2301.13196 (2023)
- [i99] Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun: Refined Value-Based Offline RL under Realizability and Partial Coverage. CoRR abs/2302.02392 (2023)
- [i98] Hadi Daneshmand, Jason D. Lee, Chi Jin: Efficient displacement convex optimization with particle gradient descent. CoRR abs/2302.04753 (2023)
- [i97] Hanlin Zhu, Ruosong Wang, Jason D. Lee: Provably Efficient Reinforcement Learning via Surprise Bound. CoRR abs/2302.11634 (2023)
- [i96] Zhuoqing Song, Jason D. Lee, Zhuoran Yang: Can We Find Nash Equilibria at a Linear Rate in Markov Games? CoRR abs/2303.03095 (2023)
- [i95] Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee: Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning. CoRR abs/2305.04819 (2023)
- [i94] Eshaan Nichani, Alex Damian, Jason D. Lee: Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks. CoRR abs/2305.06986 (2023)
- [i93] Gen Li, Wenhao Zhan, Jason D. Lee, Yuejie Chi, Yuxin Chen: Reward-agnostic Fine-tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning. CoRR abs/2305.10282 (2023)
- [i92] Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee: Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models. CoRR abs/2305.10633 (2023)
- [i91] Jingfeng Wu, Vladimir Braverman, Jason D. Lee: Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability. CoRR abs/2305.11788 (2023)
- 2022
- [c77] Yulai Zhao, Yuandong Tian, Jason D. Lee, Simon S. Du: Provably Efficient Policy Optimization for Two-Player Zero-Sum Markov Games. AISTATS 2022: 2736-2761
- [c76] Itay Safran, Jason D. Lee: Optimization-Based Separations for Neural Networks. COLT 2022: 3-64
- [c75] Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee: Offline Reinforcement Learning with Realizability and Single-policy Concentrability. COLT 2022: 2730-2775
- [c74] Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi: Neural Networks can Learn Representations with Gradient Descent. COLT 2022: 5413-5452
- [c73] DiJia Su, Jason D. Lee, John M. Mulvey, H. Vincent Poor: Competitive Multi-Agent Reinforcement Learning with Self-Supervised Representation. ICASSP 2022: 4098-4102
- [c72] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. ICLR 2022
- [c71] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. NeurIPS 2022
- [c70] Eshaan Nichani, Yu Bai, Jason D. Lee: Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials. NeurIPS 2022
- [c69] Christopher De Sa, Satyen Kale, Jason D. Lee, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. NeurIPS 2022
- [c68] Itay Safran, Gal Vardi, Jason D. Lee: On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias. NeurIPS 2022
- [c67] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. NeurIPS 2022
- [i90] Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee: Offline Reinforcement Learning with Realizability and Single-policy Concentrability. CoRR abs/2202.04634 (2022)
- [i89] Jiaqi Yang, Qi Lei, Jason D. Lee, Simon S. Du: Nearly Minimax Algorithms for Linear Bandits with Shared Representation. CoRR abs/2203.15664 (2022)
- [i88] Itay Safran, Gal Vardi, Jason D. Lee: On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias. CoRR abs/2205.09072 (2022)
- [i87] Wenhao Zhan, Jason D. Lee, Zhuoran Yang: Decentralized Optimistic Hyperpolicy Mirror Descent: Provably No-Regret Learning in Markov Games. CoRR abs/2206.01588 (2022)
- [i86] Eshaan Nichani, Yu Bai, Jason D. Lee: Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials. CoRR abs/2206.03688 (2022)
- [i85] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. CoRR abs/2206.12020 (2022)
- [i84] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. CoRR abs/2206.12081 (2022)
- [i83] Alex Damian, Jason D. Lee, Mahdi Soltanolkotabi: Neural Networks can Learn Representations with Gradient Descent. CoRR abs/2206.15144 (2022)
- [i82] Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. CoRR abs/2207.04036 (2022)
- [i81] Wenhao Zhan, Masatoshi Uehara, Wen Sun, Jason D. Lee: PAC Reinforcement Learning for Predictive State Representations. CoRR abs/2207.05738 (2022)
- [i80] Alex Damian, Eshaan Nichani, Jason D. Lee: Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability. CoRR abs/2209.15594 (2022)
- [i79] Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. CoRR abs/2210.06705 (2022)
- [i78] Zihan Wang, Jason D. Lee, Qi Lei: Reconstructing Training Data from Model Gradient, Provably. CoRR abs/2212.03714 (2022)
- 2021
- [j9] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. J. Mach. Learn. Res. 22: 98:1-98:76 (2021)
- [j8] Songtao Lu, Jason D. Lee, Meisam Razaviyayn, Mingyi Hong: Linearized ADMM Converges to Second-Order Stationary Points for Non-Convex Problems. IEEE Trans. Signal Process. 69: 4859-4874 (2021)
- [c66] Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang: Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks. COLT 2021: 1887-1936
- [c65] Jeff Z. HaoChen, Colin Wei, Jason D. Lee, Tengyu Ma: Shape Matters: Understanding the Implicit Bias of the Noise Covariance. COLT 2021: 2315-2357
- [c64] Simon Shaolei Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei: Few-Shot Learning via Learning the Representation, Provably. ICLR 2021
- [c63] Jiaqi Yang, Wei Hu, Jason D. Lee, Simon Shaolei Du: Impact of Representation Learning in Linear Bandits. ICLR 2021
- [c62] Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham M. Kakade, Huan Wang, Caiming Xiong: How Important is the Train-Validation Split in Meta-Learning? ICML 2021: 543-553
- [c61] Tianle Cai, Ruiqi Gao, Jason D. Lee, Qi Lei: A Theory of Label Propagation for Subpopulation Shift. ICML 2021: 1170-1182
- [c60] Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang: Bilinear Classes: A Structural Framework for Provable Generalization in RL. ICML 2021: 2826-2836
- [c59] Qi Lei, Wei Hu, Jason D. Lee: Near-Optimal Linear Regression under Distribution Shift. ICML 2021: 6164-6174
- [c58] Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo: Predicting What You Already Know Helps: Provable Self-Supervised Learning. NeurIPS 2021: 309-323
- [c57] Kurtland Chua, Qi Lei, Jason D. Lee: How Fine-Tuning Allows for Effective Meta-Learning. NeurIPS 2021: 8871-8884
- [c56] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Going Beyond Linear RL: Sample Efficient Neural Function Approximation. NeurIPS 2021: 8968-8983
- [c55] Alex Damian, Tengyu Ma, Jason D. Lee: Label Noise SGD Provably Prefers Flat Global Minimizers. NeurIPS 2021: 27449-27461
- [c54] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Optimal Gradient-based Algorithms for Non-concave Bandit Optimization. NeurIPS 2021: 29101-29115
- [i77] Yulai Zhao, Yuandong Tian, Jason D. Lee, Simon S. Du: Provably Efficient Policy Gradient Methods for Two-Player Zero-Sum Markov Games. CoRR abs/2102.08903 (2021)
- [i76] Tianle Cai, Ruiqi Gao, Jason D. Lee, Qi Lei: A Theory of Label Propagation for Subpopulation Shift. CoRR abs/2102.11203 (2021)
- [i75] DiJia Su, Jason D. Lee, John M. Mulvey, H. Vincent Poor: MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning. CoRR abs/2102.11448 (2021)
- [i74] Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang: Bilinear Classes: A Structural Framework for Provable Generalization in RL. CoRR abs/2103.10897 (2021)
- [i73] Kurtland Chua, Qi Lei, Jason D. Lee: How Fine-Tuning Allows for Effective Meta-Learning. CoRR abs/2105.02221 (2021)
- [i72] Wenhao Zhan, Shicong Cen, Baihe Huang, Yuxin Chen, Jason D. Lee, Yuejie Chi: Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence. CoRR abs/2105.11066 (2021)
- [i71] Alex Damian, Tengyu Ma, Jason D. Lee: Label Noise SGD Provably Prefers Flat Global Minimizers. CoRR abs/2106.06530 (2021)
- [i70] Qi Lei, Wei Hu, Jason D. Lee: Near-Optimal Linear Regression under Distribution Shift. CoRR abs/2106.12108 (2021)
- [i69] Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei: A Short Note on the Relationship of Information Gain and Eluder Dimension. CoRR abs/2107.02377 (2021)
- [i68] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Optimal Gradient-based Algorithms for Non-concave Bandit Optimization. CoRR abs/2107.04518 (2021)
- [i67] Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang: Going Beyond Linear RL: Sample Efficient Neural Function Approximation. CoRR abs/2107.06466 (2021)
- [i66] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. CoRR abs/2107.14702 (2021)
- [i65] Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan: Provable Regret Bounds for Deep Online Learning and Control. CoRR abs/2110.07807 (2021)
- [i64] Kurtland Chua, Qi Lei, Jason D. Lee: Provable Hierarchy-Based Meta-Reinforcement Learning. CoRR abs/2110.09507 (2021)
- [i63] Itay Safran, Jason D. Lee: Optimization-Based Separations for Neural Networks. CoRR abs/2112.02393 (2021)
- 2020
- [j7] Damek Davis, Dmitriy Drusvyatskiy, Sham M. Kakade, Jason D. Lee: Stochastic Subgradient Method Converges on Tame Functions. Found. Comput. Math. 20(1): 119-154 (2020)
- [c53] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes. COLT 2020: 64-66
- [c52] Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro: Kernel and Rich Regimes in Overparametrized Models. COLT 2020: 3635-3673
- [c51] Yu Bai, Jason D. Lee: Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks. ICLR 2020
- [c50] Qi Lei, Jason D. Lee, Alex Dimakis, Constantinos Daskalakis: SGD Learns One-Layer Networks in WGANs. ICML 2020: 5799-5808
- [c49] Ashok Vardhan Makkuva, Amirhossein Taghvaei, Sewoong Oh, Jason D. Lee: Optimal transport mapping via input convex neural networks. ICML 2020: 6672-6681
- [c48] Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher: Towards Understanding Hierarchical Learning: Benefits of Neural Representations. NeurIPS 2020
- [c47] Simon S. Du, Jason D. Lee, Gaurav Mahajan, Ruosong Wang: Agnostic $Q$-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity. NeurIPS 2020
- [c46] Yihong Gu, Weizhong Zhang, Cong Fang, Jason D. Lee, Tong Zhang: How to Characterize The Landscape of Overparameterized Convolutional Neural Networks. NeurIPS 2020
- [c45] Kaiyi Ji, Jason D. Lee, Yingbin Liang, H. Vincent Poor: Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters. NeurIPS 2020
- [c44] Jason D. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu: Generalized Leverage Score Sampling for Neural Networks. NeurIPS 2020
- [c43] Edward Moroshko, Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, Daniel Soudry: Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. NeurIPS 2020
- [c42] Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee: Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot. NeurIPS 2020
- [c41] Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge: Beyond Lazy Training for Over-parameterized Tensor Decomposition. NeurIPS 2020
- [i62] Simon S. Du, Jason D. Lee, Gaurav Mahajan, Ruosong Wang: Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity. CoRR abs/2002.07125 (2020)
- [i61] Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro: Kernel and Rich Regimes in Overparametrized Models. CoRR abs/2002.09277 (2020)
- [i60] Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei: Few-Shot Learning via Learning the Representation, Provably. CoRR abs/2002.09434 (2020)
- [i59] Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu: Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting. CoRR abs/2003.10392 (2020)
- [i58] Xi Chen, Jason D. Lee, He Li, Yun Yang: Distributed Estimation for Principal Component Analysis: a Gap-free Approach. CoRR abs/2004.02336 (2020)
- [i57] Jeff Z. HaoChen, Colin Wei, Jason D. Lee, Tengyu Ma: Shape Matters: Understanding the Implicit Bias of the Noise Covariance. CoRR abs/2006.08680 (2020)
- [i56] Kaiyi Ji, Jason D. Lee, Yingbin Liang, H. Vincent Poor: Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters. CoRR abs/2006.09486 (2020)
- [i55] Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher: Towards Understanding Hierarchical Learning: Benefits of Neural Representations. CoRR abs/2006.13436 (2020)
- [i54] Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang: Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks. CoRR abs/2007.01452 (2020)
- [i53] Edward Moroshko, Suriya Gunasekar, Blake E. Woodworth, Jason D. Lee, Nathan Srebro, Daniel Soudry: Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. CoRR abs/2007.06738 (2020)
- [i52] Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo: Predicting What You Already Know Helps: Provable Self-Supervised Learning. CoRR abs/2008.01064 (2020)
- [i51] Jason D. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu: Generalized Leverage Score Sampling for Neural Networks. CoRR abs/2009.09829 (2020)
- [i50] Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee: Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot. CoRR abs/2009.11094 (2020)
- [i49] Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham M. Kakade, Huan Wang, Caiming Xiong: How Important is the Train-Validation Split in Meta-Learning? CoRR abs/2010.05843 (2020)
- [i48] Jiaqi Yang, Wei Hu, Jason D. Lee, Simon S. Du: Provable Benefits of Representation Learning in Linear Bandits. CoRR abs/2010.06531 (2020)
- [i47] Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge: Beyond Lazy Training for Over-parameterized Tensor Decomposition. CoRR abs/2010.11356 (2020)
2010 – 2019
- 2019
- [j6] Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht: First-order methods almost always avoid strict saddle points. Math. Program. 176(1-2): 311-337 (2019)
- [j5] Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee: Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks. IEEE Trans. Inf. Theory 65(2): 742-769 (2019)
- [c40] Mor Shpigel Nacson, Jason D. Lee, Suriya Gunasekar, Pedro Henrique Pamplona Savarese, Nathan Srebro, Daniel Soudry: Convergence of Gradient Descent on Separable Data. AISTATS 2019: 3420-3428
- [c39] Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Xiyu Zhai: Gradient Descent Finds Global Minima of Deep Neural Networks. ICML 2019: 1675-1685
- [c38] Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. ICML 2019: 4683-4692
- [c37] Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma: Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel. NeurIPS 2019: 9709-9721
- [c36] Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang: Neural Temporal-Difference Learning Converges to Global Optima. NeurIPS 2019: 11312-11322
- [c35] Ruiqi Gao, Tianle Cai, Haochuan Li, Cho-Jui Hsieh, Liwei Wang, Jason D. Lee: Convergence of Adversarial Training in Overparametrized Neural Networks. NeurIPS 2019: 13009-13020
- [c34] Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, Meisam Razaviyayn: Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods. NeurIPS 2019: 14905-14916
- [i46] Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, Meisam Razaviyayn: Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods. CoRR abs/1902.08297 (2019)
- [i45] Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. CoRR abs/1905.07325 (2019)
- [i44] Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang: Neural Temporal-Difference Learning Converges to Global Optima. CoRR abs/1905.10027 (2019)
- [i43] Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee: Convergence of Adversarial Training in Overparametrized Networks. CoRR abs/1906.07916 (2019)
- [i42] Xiao Li, Zhihui Zhu, Anthony Man-Cho So, Jason D. Lee: Incremental Methods for Weakly Convex Optimization. CoRR abs/1907.11687 (2019)
- [i41] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes. CoRR abs/1908.00261 (2019)
- [i40] Ashok Vardhan Makkuva, Amirhossein Taghvaei, Sewoong Oh, Jason D. Lee: Optimal transport mapping via input convex neural networks. CoRR abs/1908.10962 (2019)
- [i39] Yu Bai, Jason D. Lee: Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks. CoRR abs/1910.01619 (2019)
- [i38] Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis: SGD Learns One-Layer Networks in WGANs. CoRR abs/1910.07030 (2019)
- [i37] Maziar Sanjabi, Sina Baharlouei, Meisam Razaviyayn, Jason D. Lee: When Does Non-Orthogonal Tensor Decomposition Have No Spurious Local Minima? CoRR abs/1911.09815 (2019)
- 2018
- [c33] Rong Ge, Jason D. Lee, Tengyu Ma: Learning One-hidden-layer Neural Networks with Landscape Design. ICLR (Poster) 2018
- [c32] Simon S. Du, Jason D. Lee, Yuandong Tian: When is a Convolutional Filter Easy to Learn? ICLR (Poster) 2018
- [c31] Chenwei Wu, Jiajun Luo, Jason D. Lee: No Spurious Local Minima in a Two Hidden Unit ReLU Network. ICLR (Workshop) 2018
- [c30] Simon S. Du, Jason D. Lee: On the Power of Over-parametrization in Neural Networks with Quadratic Activation. ICML 2018: 1328-1337
- [c29] Simon S. Du, Jason D. Lee, Yuandong Tian, Aarti Singh, Barnabás Póczos: Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima. ICML 2018: 1338-1347
- [c28] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro: Characterizing Implicit Bias in Terms of Optimization Geometry. ICML 2018: 1827-1836
- [c27]