"Towards Efficient Post-training Quantization of Pre-trained Language Models."

Haoli Bai et al. (2022)


DOI:

access: open

type: Conference or Workshop Paper

metadata version: 2024-01-08
