MLSys 2020: Austin, TX, USA
- Inderjit S. Dhillon, Dimitris S. Papailiopoulos, Vivienne Sze: Proceedings of the Third Conference on Machine Learning and Systems, MLSys 2020, Austin, TX, USA, March 2-4, 2020. mlsys.org 2020
- Xiaotang Jiang, Huan Wang, Yiliu Chen, Ziqi Wu, Lichuan Wang, Bin Zou, Yafeng Yang, Zongyang Cui, Yu Cai, Tianhang Yu, Chengfei Lyu, Zhihua Wu: MNN: A Universal and Efficient Inference Engine.
- Javier Fernández-Marqués, Paul N. Whatmough, Andrew Mundy, Matthew Mattina: Searching for Winograd-aware Quantized Networks.
- Yu Wang, Gu-Yeon Wei, David Brooks: A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms.
- Byung Hoon Ahn, Jinwon Lee, Jamie Menjay Lin, Hsin-Pai Cheng, Jilei Hou, Hadi Esmaeilzadeh: Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices.
- Mike Innes: Sense & Sensitivities: The Path to General-Purpose Algorithmic Differentiation.
- Ameer Haj-Ali, Qijing (Jenny) Huang, William S. Moses, John Xiang, Krste Asanovic, John Wawrzynek, Ion Stoica: AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep Reinforcement Learning.
- Liang Luo, Peter West, Arvind Krishnamurthy, Luis Ceze, Jacob Nelson: PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the public Cloud.
- Peifeng Yu, Mosharaf Chowdhury: Fine-Grained GPU Sharing Primitives for Deep Learning Applications.
- Sambhav R. Jain, Albert Gural, Michael Wu, Chris Dick: Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks.
- Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John V. Guttag: What is the State of Neural Network Pruning?
- Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia: Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference.
- Sivakumar Chidambaram, Pierre Langlois, Jean-Pierre David: PoET-BiN: Power Efficient Tiny Binary Neurons.
- Guanhua Wang, Shivaram Venkataraman, Amar Phanishayee, Jorgen Thelin, Nikhil R. Devanur, Ion Stoica: Blink: Fast and Generic Collectives for Distributed ML.
- Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, Alex Aiken: Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc.
- Abdul Wasay, Brian Hentschel, Yuze Liao, Sanyuan Chen, Stratos Idreos: MotherNets: Rapid Deep Ensemble Learning.
- Xiaofan Zhang, Haoming Lu, Cong Hao, Jiachen Li, Bowen Cheng, Yuhong Li, Kyle Rupnow, Jinjun Xiong, Thomas S. Huang, Honghui Shi, Wen-Mei Hwu, Deming Chen: SkyNet: a Hardware-Efficient Method for Object Detection and Tracking on Embedded Systems.
- Liam Li, Kevin G. Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Jonathan Ben-tzur, Moritz Hardt, Benjamin Recht, Ameet Talwalkar: A System for Massively Parallel Hyperparameter Tuning.
- Hui Guan, Laxmikant Kishor Mokadam, Xipeng Shen, Seung-Hwan Lim, Robert M. Patton: FLEET: Flexible Efficient Ensemble Training for Heterogeneous Deep Neural Networks.
- Megan Leszczynski, Avner May, Jian Zhang, Sen Wu, Christopher R. Aberger, Christopher Ré: Understanding the Downstream Instability of Word Embeddings.
- Beidi Chen, Tharun Medini, James Farwell, Sameh Gobriel, Tsung-Yuan Charlie Tai, Anshumali Shrivastava: SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems.
- Richard Wu, Aoqian Zhang, Ihab F. Ilyas, Theodoros Rekatsinas: Attention-based Learning for Missing Data Imputation in HoloClean.
- Manuele Rusci, Alessandro Capotondi, Luca Benini: Memory-Driven Mixed Low Precision Quantization for Enabling Deep Network Inference on Microcontrollers.
- Peter Mattson, Christine Cheng, Gregory F. Diamos, Cody Coleman, Paulius Micikevicius, David A. Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debo Dutta, Udit Gupta, Kim M. Hazelwood, Andy Hock, Xinyuan Huang, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Carole-Jean Wu, Lingjie Xu, Cliff Young, Matei Zaharia: MLPerf Training Benchmark.
- Mohammad Malekzadeh, Dimitrios Athanasakis, Hamed Haddadi, Benjamin Livshits: Privacy-Preserving Bandits.
- Junki Park, Hyunsung Yoon, Daehyun Ahn, Jungwook Choi, Jae-Joon Kim: OPTIMUS: OPTImized matrix MUltiplication Structure for Transformer neural network accelerator.
- Joshua Fromm, Meghan Cowan, Matthai Philipose, Luis Ceze, Shwetak N. Patel: Riptide: Fast End-to-End Binarized Neural Networks.
- Alexey Radul, Brian Patton, Dougal Maclaurin, Matthew D. Hoffman, Rif A. Saurous: Automatically batching control-intensive programs for modern accelerators.
- Andrew Or, Haoyu Zhang, Michael J. Freedman: Resource Elasticity in Distributed Deep Learning.
- Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Ruiquan Ding, Mingming Sun, Ping Li: Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems.
- Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith: Federated Optimization in Heterogeneous Networks.
- Shang Wang, Yifan Bai, Gennady Pekhimenko: BPPSA: Scaling Back-propagation by Parallel Scan Algorithm.
- Hanson Wang, Zehui Wang, Yuanyuan Ma: Predictive Precompute with Recurrent Neural Networks.
- Daniel Kang, Deepti Raghavan, Peter Bailis, Matei Zaharia: Model Assertions for Monitoring and Improving ML Models.
- Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph Gonzalez: Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization.