36th NeurIPS 2022: New Orleans, LA, USA
- Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, A. Oh: Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. 2022, ISBN 9781713871088
- Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lyu, Yanghe Feng, Guihai Chen: Federated Submodel Optimization for Hot and Cold Data Features.
- Xingyu Zhou, Bo Ji: On Kernelized Multi-Armed Bandits with Constraints.
- Seon-Ho Lee, Nyeong-Ho Shin, Chang-Su Kim: Geometric Order Learning for Rank Estimation.
- Changmin Yu, Hugo Soulat, Neil Burgess, Maneesh Sahani: Structured Recognition for Generative Models with Explaining Away.
- Yijian Qin, Ziwei Zhang, Xin Wang, Zeyang Zhang, Wenwu Zhu: NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search.
- Cian Naik, Judith Rousseau, Trevor Campbell: Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement.
- Steven Stalder, Nathanaël Perraudin, Radhakrishna Achanta, Fernando Pérez-Cruz, Michele Volpi: What You See is What You Classify: Black Box Attributions.
- Martin Klissarov, Rasool Fakoor, Jonas W. Mueller, Kavosh Asadi, Taesup Kim, Alexander J. Smola: Adaptive Interest for Emphatic Reinforcement Learning.
- Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning.
- Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid: Zero-Shot Video Question Answering via Frozen Bidirectional Language Models.
- Yinglun Zhu, Robert Nowak: Active Learning with Neural Networks: Insights from Nonparametric Statistics.
- Yufei Guo, Yuanpei Chen, Liwen Zhang, Xiaode Liu, YingLei Wang, Xuhui Huang, Zhe Ma: IM-Loss: Information Maximization Loss for Spiking Neural Networks.
- Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Jonathan D. Cohen, Nathaniel D. Daw, Karthik Narasimhan, Tom Griffiths: Using natural language and program abstractions to instill human inductive biases in machines.
- Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X. Liu, Soroush Vosoughi: Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits.
- Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon: SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery.
- Mathieu Even, Laurent Massoulié, Kevin Scaman: On Sample Optimality in Personalized Collaborative and Federated Learning.
- Wei-Cheng Tseng, Tsun-Hsuan Johnson Wang, Yen-Chen Lin, Phillip Isola: Offline Multi-Agent Reinforcement Learning with Knowledge Distillation.
- Shuoguang Yang, Xuezhou Zhang, Mengdi Wang: Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks.
- Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto: Conditional Meta-Learning of Linear Representations.
- Peter Lippmann, Enrique Fita Sanmartín, Fred A. Hamprecht: Theory and Approximate Solvers for Branched Optimal Transport with Multiple Sources.
- Shichong Peng, Seyed Alireza Moazenipourasil, Ke Li: CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis.
- Hao Lou, Tao Jin, Yue Wu, Pan Xu, Quanquan Gu, Farzad Farnoud: Active Ranking without Strong Stochastic Transitivity.
- Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, Osbert Bastani: Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression.
- Yiting Chen, Qibing Ren, Junchi Yan: Rethinking and Improving Robustness of Convolutional Neural Networks: a Shapley Value-based Approach in Frequency Domain.
- Zhun Zhong, Yuyang Zhao, Gim Hee Lee, Nicu Sebe: Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation.
- Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang: Fully Sparse 3D Object Detection.
- Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein: Diffusion Visual Counterfactual Explanations.
- Jingyun Liang, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jiezhang Cao, Kai Zhang, Radu Timofte, Luc Van Gool: Recurrent Video Restoration Transformer with Guided Deformable Attention.
- Boxiang Wang, Archer Y. Yang: A Consolidated Cross-Validation Algorithm for Support Vector Machines via Data Reduction.
- Nika Haghtalab, Michael I. Jordan, Eric Zhao: On-Demand Sampling: Learning Optimally from Multiple Distributions.
- Konstantin Mishchenko, Francis R. Bach, Mathieu Even, Blake E. Woodworth: Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays.
- Jiaxiang Chen, Qingyuan Yang, Ruomin Huang, Hu Ding: Coresets for Relational Data and The Applications.
- Kaiyang Guo, Yunfeng Shao, Yanhui Geng: Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief.
- Yu Meng, Jiaxin Huang, Yu Zhang, Jiawei Han: Generating Training Data with Language Models: Towards Zero-Shot Language Understanding.
- Florentin Guth, Simon Coste, Valentin De Bortoli, Stéphane Mallat: Wavelet Score-Based Generative Modeling.
- Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann: Robust Binary Models by Pruning Randomly-initialized Networks.
- Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux: Why do tree-based models still outperform deep learning on typical tabular data?
- Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama: Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses.
- Vivek F. Farias, Andrew A. Li, Tianyi Peng, Andrew Zheng: Markovian Interference in Experiments.
- Paul Rolland, Luca Viano, Norman Schürhoff, Boris Nikolov, Volkan Cevher: Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning.
- Nikola Surjanovic, Saifuddin Syed, Alexandre Bouchard-Côté, Trevor Campbell: Parallel Tempering With a Variational Reference.
- Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems.
- Rui Miao, Zhengling Qi, Xiaoke Zhang: Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes under Non-Parametric Models.
- Chaofei Wang, Qisen Yang, Rui Huang, Shiji Song, Gao Huang: Efficient Knowledge Distillation from Model Checkpoints.
- Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang: Decoupled Self-supervised Learning for Graphs.
- Lujun Li, Zhe Jin: Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer.
- Limei Wang, Yi Liu, Yuchao Lin, Haoran Liu, Shuiwang Ji: ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs.
- Kaizhi Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Xin Eric Wang: VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation.
- Jiawei Huang, Li Zhao, Tao Qin, Wei Chen, Nan Jiang, Tie-Yan Liu: Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret.
- Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán: Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness.
- Jiayuan Ye, Reza Shokri: Differentially Private Learning Needs Hidden State (Or Much Faster Convergence).
- Gabriel Cardoso, Sergey Samsonov, Achille Thin, Eric Moulines, Jimmy Olsson: BR-SNIS: Bias Reduced Self-Normalized Importance Sampling.
- Luca Beurer-Kellner, Martin T. Vechev, Laurent Vanbever, Petar Velickovic: Learning to Configure Computer Networks with Neural Algorithmic Reasoning.
- Mingze Wang, Chao Ma: Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks.
- Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee: On Divergence Measures for Bayesian Pseudocoresets.
- Takeru Miyato, Masanori Koyama, Kenji Fukumizu: Unsupervised Learning of Equivariant Structure from Sequences.
- Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong: Multi-Class $H$-Consistency Bounds.
- Sameera Ramasinghe, Lachlan E. MacDonald, Simon Lucey: On the Frequency-bias of Coordinate-MLPs.
- Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh: DC-BENCH: Dataset Condensation Benchmark.
- Siyu Jiao, Gengwei Zhang, Shant Navasardyan, Ling Chen, Yao Zhao, Yunchao Wei, Honghui Shi: Mask Matching Transformer for Few-Shot Segmentation.
- Ilai Bistritz, Nicholas Bambos: Queue Up Your Regrets: Achieving the Dynamic Capacity Region of Multiplayer Bandits.
- Wei Dong, Yuting Liang, Ke Yi: Differentially Private Covariance Revisited.
- Pranjal Awasthi, Abhimanyu Das, Weihao Kong, Rajat Sen: Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model.
- Yuqin Yang, AmirEmad Ghassami, Mohamed S. Nafea, Negar Kiyavash, Kun Zhang, Ilya Shpitser: Causal Discovery in Linear Latent Variable Models Subject to Measurement Error.
- Wenjian Huang, Hao Wang, Jiahao Xia, Chengyan Wang, Jianguo Zhang: Density-driven Regularization for Out-of-distribution Detection.
- Hananeh Aliee, Till Richter, Mikhail Solonin, Ignacio Ibarra, Fabian J. Theis, Niki Kilbertus: Sparsity in Continuous-Depth Neural Networks.
- Bo-Wei Huang, Keng-Te Liao, Chang-Sheng Kao, Shou-De Lin: Environment Diversification with Multi-head Neural Network for Invariant Learning.
- Mitch Hill, Erik Nijkamp, Jonathan Mitchell, Bo Pang, Song-Chun Zhu: Learning Probabilistic Models from Generator Latent Spaces with Hat EBM.
- Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji: Learning Best Combination for Efficient N: M Sparsity.
- Yue Xing, Qifan Song, Guang Cheng: Why Do Artificially Generated Data Help Adversarial Robustness.
- Hongrui Cai, Wanquan Feng, Xuetao Feng, Yan Wang, Juyong Zhang: Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera.
- Jiayang Ren, Kaixun Hua, Yankai Cao: Global Optimal K-Medoids Clustering of One Million Samples.
- Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, Shandian Zhe: Batch Multi-Fidelity Active Learning with Budget Constraints.
- Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim: UniCLIP: Unified Framework for Contrastive Language-Image Pre-training.
- Cong Guan, Feng Chen, Lei Yuan, Chenghe Wang, Hao Yin, Zongzhang Zhang, Yang Yu: Efficient Multi-agent Communication via Self-supervised Information Aggregation.
- Ramansh Sharma, Varun Shankar: Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations.
- Archana Bura, Aria HasanzadeZonuzy, Dileep Kalathil, Srinivas Shakkottai, Jean-François Chamberland: DOPE: Doubly Optimistic and Pessimistic Exploration for Safe Reinforcement Learning.
- Yeoneung Kim, Insoon Yang, Kwang-Sung Jun: Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs.
- Zhuoqing Song, Weijian Li, Kexin Jin, Lei Shi, Ming Yan, Wotao Yin, Kun Yuan: Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate.
- Biru Zhu, Yujia Qin, Ganqu Cui, Yangyi Chen, Weilin Zhao, Chong Fu, Yangdong Deng, Zhiyuan Liu, Jingang Wang, Wei Wu, Maosong Sun, Ming Gu: Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models.
- Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang: Dataset Distillation via Factorization.
- Ziping Xu, Eunjae Shim, Ambuj Tewari, Paul M. Zimmerman: Adaptive Sampling for Discovery.
- Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye, Shuaiqiang Wang, Dawei Yin: A Large Scale Search Dataset for Unbiased Learning to Rank.
- Meng-Hao Guo, Cheng-Ze Lu, Qibin Hou, Zhengning Liu, Ming-Ming Cheng, Shi-Min Hu: SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation.
- Tao Yu, Yichi Zhang, Zhiru Zhang, Christopher De Sa: Understanding Hyperdimensional Computing for Parallel Single-Pass Learning.
- Qin Ding, Yue Kang, Yi-Wei Liu, Thomas Chun Man Lee, Cho-Jui Hsieh, James Sharpnack: Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms.
- Neil Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, Misha Belkin, Preetum Nakkiran: Benign, Tempered, or Catastrophic: Toward a Refined Taxonomy of Overfitting.
- Yuanhao Ban, Yinpeng Dong: Pre-trained Adversarial Perturbations.
- Jinkun Cao, Ruiqian Nai, Qing Yang, Jialei Huang, Yang Gao: An Empirical Study on Disentanglement of Negative-free Contrastive Learning.
- Mo Tiwari, Ryan Kang, Jaeyong Lee, Chris Piech, Ilan Shomorony, Sebastian Thrun, Martin J. Zhang: MABSplit: Faster Forest Training Using Multi-Armed Bandits.
- Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong: Counterfactual Fairness with Partially Known Causal Graph.
- Jose Gallego-Posada, Juan Ramirez, Akram Erraqabi, Yoshua Bengio, Simon Lacoste-Julien: Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints.
- Rishi Saket: Algorithms and Hardness for Learning Linear Thresholds from Label Proportions.
- Luca Pinchetti, Tommaso Salvatori, Yordan Yordanov, Beren Millidge, Yuhang Song, Thomas Lukasiewicz: Predictive Coding beyond Gaussian Distributions.
- Nived Rajaraman, Devvrit, Pranjal Awasthi: Semi-supervised Active Linear Regression.
- Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang: Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness.
- Sanket Shah, Kai Wang, Bryan Wilder, Andrew Perrault, Milind Tambe: Decision-Focused Learning without Decision-Making: Learning Locally Optimized Decision Losses.
- Cristopher Salvi, Maud Lemercier, Andris Gerasimovics: Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics.
- Myles Bartlett, Sara Romiti, Viktoriia Sharmanska, Novi Quadrianto: Okapi: Generalising Better by Making Statistical Matches Match.
- Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup: Revisiting Heterophily For Graph Neural Networks.
- Botao Yu, Peiling Lu, Rui Wang, Wei Hu, Xu Tan, Wei Ye, Shikun Zhang, Tao Qin, Tie-Yan Liu: Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation.
- Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux, Florian Strub: Emergent Communication: Generalization and Overfitting in Lewis Games.
- Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R. Lyu: Towards Efficient Post-training Quantization of Pre-trained Language Models.
- Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Aria Haghighi, Ali Mollahosseini: TweetNERD - End to End Entity Linking Benchmark for Tweets.
- Sung Woo Park, Hyomin Kim, Kyungjae Lee, Junseok Kwon: Riemannian Neural SDE: Learning Stochastic Representations on Manifolds.
- Marta R. Costa-jussà, Christine Basta, Oriol Domingo, André Rubungo: OccGen: Selection of Real-world Multilingual Parallel Data Balanced in Gender within Occupations.
- Noah Golowich, Ankur Moitra, Dhruv Rohatgi: Learning in Observable POMDPs, without Computationally Intractable Oracles.
- Tingliang Feng, Wei Feng, Weiqi Li, Di Lin: Cross-Image Context for Single Image Inpainting.
- Sravanti Addepalli, Samyak Jain, Venkatesh Babu R.: Efficient and Effective Augmentation Strategy for Adversarial Training.
- Tongda Xu, Yan Wang, Dailan He, Chenjian Gao, Han Gao, Kunzan Liu, Hongwei Qin: Multi-Sample Training for Neural Image Compression.
- Yifan Yang, Yang Liu, Parinaz Naghizadeh: Adaptive Data Debiasing through Bounded Exploration.
- Manzil Zaheer, Kenneth Marino, Will Grathwohl, John Schultz, Wendy Shang, Sheila Babayan, Arun Ahuja, Ishita Dasgupta, Christine Kaeser-Chen, Rob Fergus: Learning to Navigate Wikipedia by Taking Random Walks.
- David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, Joan Bruna: When does return-conditioned supervised learning work for offline reinforcement learning?
- Qi Lyu, Xiao Fu: Provable Subspace Identification Under Post-Nonlinear Mixtures.
- Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong: S3-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint.
- Arindam Ghosh, Thomas Schaaf, Matthew R. Gormley: AdaFocal: Calibration-aware Adaptive Focal Loss.