Daniel Jurafsky (Dan Jurafsky)

Person information
- affiliation: Stanford University, USA
2020 – today

2024
- [j31] Eva Portelance, Michael C. Frank, Dan Jurafsky: Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model. Cogn. Sci. 48(5) (2024)
- [j30] Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, Sharese King: AI generates covertly racist decisions about people based on their dialect. Nat. 633(8028): 147-154 (2024)
- [j29] Daniel A. McFarland, David Broska, Vinodkumar Prabhakaran, Dan Jurafsky: Coming into relations: How communication reveals and persuades relational decisions. Soc. Networks 79: 57-75 (2024)
- [c205] Aryaman Arora, Dan Jurafsky, Christopher Potts: CausalGym: Benchmarking causal interpretability methods on linguistic tasks. ACL (1) 2024: 14638-14663
- [c204] Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky: AnthroScore: A Computational Linguistic Measure of Anthropomorphism. EACL (1) 2024: 807-825
- [c203] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, James Zou: Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions. ICLR 2024
- [c202] Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke Melas-Kyriazi: A Benchmark for Learning to Translate a New Language from One Grammar Book. ICLR 2024
- [c201] Federico Bianchi, Patrick John Chia, Mert Yüksekgönül, Jacopo Tagliabue, Dan Jurafsky, James Zou: How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis. ICML 2024
- [c200] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela: Model Alignment as Prospect Theoretic Optimization. ICML 2024
- [c199] Yiwei Luo, Kristina Gligoric, Dan Jurafsky: Othering and Low Status Framing of Immigrant Cuisines in US Restaurant Reviews and Large Language Models. ICWSM 2024: 985-998
- [c198] Kristina Gligoric, Myra Cheng, Lucia Zheng, Esin Durmus, Dan Jurafsky: NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps. NAACL-HLT 2024: 5942-5959
- [c197] Omar Shaikh, Kristina Gligoric, Ashna Khetan, Matthias Gerstgrasser, Diyi Yang, Dan Jurafsky: Grounding Gaps in Language Model Generations. NAACL-HLT 2024: 6279-6296
- [c196] Nay San, Georgios Paraskevopoulos, Aryaman Arora, Xiluo He, Prabhjot Kaur, Oliver Adams, Dan Jurafsky: Predicting positive transfer for improved low-resource speech recognition using acoustic pseudo-tokens. SIGTYPE 2024: 100-112
- [i107] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela: KTO: Model Alignment as Prospect Theoretic Optimization. CoRR abs/2402.01306 (2024)
- [i106] Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky: AnthroScore: A Computational Linguistic Measure of Anthropomorphism. CoRR abs/2402.02056 (2024)
- [i105] Nay San, Georgios Paraskevopoulos, Aryaman Arora, Xiluo He, Prabhjot Kaur, Oliver Adams, Dan Jurafsky: Predicting positive transfer for improved low-resource speech recognition using acoustic pseudo-tokens. CoRR abs/2402.02302 (2024)
- [i104] Federico Bianchi, Patrick John Chia, Mert Yüksekgönül, Jacopo Tagliabue, Dan Jurafsky, James Zou: How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis. CoRR abs/2402.05863 (2024)
- [i103] Aryaman Arora, Dan Jurafsky, Christopher Potts: CausalGym: Benchmarking causal interpretability methods on linguistic tasks. CoRR abs/2402.12560 (2024)
- [i102] Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, Sharese King: Dialect prejudice predicts AI decisions about people's character, employability, and criminality. CoRR abs/2403.00742 (2024)
- [i101] Kristina Gligoric, Myra Cheng, Lucia Zheng, Esin Durmus, Dan Jurafsky: NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps. CoRR abs/2404.01651 (2024)
- [i100] Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, Christopher Potts: ReFT: Representation Finetuning for Language Models. CoRR abs/2404.03592 (2024)
- [i99] Jiatong Shi, Shih-Heng Wang, William Chen, Martijn Bartelds, Vanya Bannihatti Kumar, Jinchuan Tian, Xuankai Chang, Dan Jurafsky, Karen Livescu, Hung-yi Lee, Shinji Watanabe: ML-SUPERB 2.0: Benchmarking Multilingual Speech Models Across Modeling Constraints, Languages, and Datasets. CoRR abs/2406.08641 (2024)
- [i98] Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Nouha Dziri, Dan Jurafsky, Maarten Sap: Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance. CoRR abs/2407.07950 (2024)
- [i97] Heidi C. Zhang, Shabnam Behzad, Kawin Ethayarajh, Dan Jurafsky: Data Checklist: On Unit-Testing Datasets with Usable Information. CoRR abs/2408.02919 (2024)
- [i96] Moussa Koulako Bala Doumbouya, Ananjan Nandi, Gabriel Poesia, Davide Ghilardi, Anna Goldie, Federico Bianchi, Dan Jurafsky, Christopher D. Manning: h4rm3l: A Dynamic Benchmark of Composable Jailbreak Attacks for LLM Safety Assessment. CoRR abs/2408.04811 (2024)
- [i95] Antón de la Fuente, Dan Jurafsky: A layer-wise analysis of Mandarin and English suprasegmentals in SSL speech models. CoRR abs/2408.13678 (2024)
- [i94] Kristina Gligoric, Tijana Zrnic, Cinoo Lee, Emmanuel J. Candès, Dan Jurafsky: Can Unconfident LLM Annotations Be Used for Confident Conclusions? CoRR abs/2408.15204 (2024)

2023
- [j28] Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang: Foundation Models and Fair Use. J. Mach. Learn. Res. 24: 400:1-400:79 (2023)
- [c195] Martijn Bartelds, Nay San, Bradley McDonnell, Dan Jurafsky, Martijn Wieling: Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation. ACL (1) 2023: 715-729
- [c194] Myra Cheng, Esin Durmus, Dan Jurafsky: Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models. ACL (1) 2023: 1504-1532
- [c193] Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky: Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding. ACL (Findings) 2023: 4265-4293
- [c192] Peter Henderson, Eric Mitchell, Christopher D. Manning, Dan Jurafsky, Chelsea Finn: Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models. AIES 2023: 287-296
- [c191] Isabel Papadimitriou, Kezia Lopez, Dan Jurafsky: Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models. EACL (Findings) 2023: 1164-1170
- [c190] Tolúlopé Ògúnrèmí, Dan Jurafsky, Christopher D. Manning: Mini But Mighty: Efficient Multilingual Pretraining with Linguistically-Informed Data Selection. EACL (Findings) 2023: 1221-1236
- [c189] Faisal Ladhak, Esin Durmus, Mirac Suzgun, Tianyi Zhang, Dan Jurafsky, Kathleen R. McKeown, Tatsunori Hashimoto: When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization. EACL 2023: 3198-3211
- [c188] Kaitlyn Zhou, Dan Jurafsky, Tatsunori Hashimoto: Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models. EMNLP 2023: 5506-5524
- [c187] Isabel Papadimitriou, Dan Jurafsky: Injecting structural hints: Using language models to study inductive biases in language learning. EMNLP (Findings) 2023: 8402-8413
- [c186] Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan: Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. FAccT 2023: 1493-1504
- [c185] Mert Yüksekgönül, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou: When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It? ICLR 2023
- [c184] Anjalie Field, Prateek Verma, Nay San, Jennifer L. Eberhardt, Dan Jurafsky: Developing Speech Processing Pipelines for Police Accountability. INTERSPEECH 2023: 1229-1233
- [c183] Connor Toups, Rishi Bommasani, Kathleen Creel, Sarah H. Bana, Dan Jurafsky, Percy Liang: Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes. NeurIPS 2023
- [i93] Nay San, Martijn Bartelds, Blaine Billings, Ella de Falco, Hendi Feriza, Johan Safri, Wawan Sahrozi, Ben Foley, Bradley McDonnell, Dan Jurafsky: Leveraging supplementary text data to kick-start automatic speech recognition system development with limited transcriptions. CoRR abs/2302.04975 (2023)
- [i92] Kaitlyn Zhou, Dan Jurafsky, Tatsunori Hashimoto: Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models. CoRR abs/2302.13439 (2023)
- [i91] Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang: Foundation Models and Fair Use. CoRR abs/2303.15715 (2023)
- [i90] Isabel Papadimitriou, Dan Jurafsky: Pretrain on just structure: Understanding linguistic inductive biases using transfer learning. CoRR abs/2304.13060 (2023)
- [i89] Mirac Suzgun, Stuart M. Shieber, Dan Jurafsky: string2string: A Modern Python Library for String-to-String Algorithms. CoRR abs/2304.14395 (2023)
- [i88] Martijn Bartelds, Nay San, Bradley McDonnell, Dan Jurafsky, Martijn Wieling: Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation. CoRR abs/2305.10951 (2023)
- [i87] Myra Cheng, Esin Durmus, Dan Jurafsky: Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models. CoRR abs/2305.18189 (2023)
- [i86] Anjalie Field, Prateek Verma, Nay San, Jennifer L. Eberhardt, Dan Jurafsky: Developing Speech Processing Pipelines for Police Accountability. CoRR abs/2306.06086 (2023)
- [i85] Connor Toups, Rishi Bommasani, Kathleen A. Creel, Sarah H. Bana, Dan Jurafsky, Percy Liang: Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes. CoRR abs/2307.05862 (2023)
- [i84] Yiwei Luo, Kristina Gligoric, Dan Jurafsky: Othering and low prestige framing of immigrant cuisines in US restaurant reviews and large language models. CoRR abs/2307.07645 (2023)
- [i83] Eva Portelance, Michael C. Frank, Dan Jurafsky: Learning the meanings of function words from grounded language using a visual question answering model. CoRR abs/2308.08628 (2023)
- [i82] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, James Zou: Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions. CoRR abs/2309.07875 (2023)
- [i81] Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke Melas-Kyriazi: A Benchmark for Learning to Translate a New Language from One Grammar Book. CoRR abs/2309.16575 (2023)
- [i80] Omar Shaikh, Kristina Gligoric, Ashna Khetan, Matthias Gerstgrasser, Diyi Yang, Dan Jurafsky: Grounding or Guesswork? Large Language Models are Presumptive Grounders. CoRR abs/2311.09144 (2023)
- [i79] Tolúlopé Ògúnrèmí, Christopher D. Manning, Dan Jurafsky: Multilingual self-supervised speech representations improve the speech recognition of low-resource African languages with codeswitching. CoRR abs/2311.15077 (2023)
- [i78] Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon M. Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa: Use large language models to promote equity. CoRR abs/2312.14804 (2023)

2022
- [c182] Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, Dan Jurafsky: Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words. ACL (2) 2022: 401-423
- [c181] Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky: Richer Countries and Richer Representations. ACL (Findings) 2022: 2074-2085
- [c180] Junshen K. Chen, Dallas Card, Dan Jurafsky: Modular Domain Adaptation. ACL (Findings) 2022: 3633-3655
- [c179] Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky: Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models. EMNLP 2022: 2195-2222
- [c178] Kawin Ethayarajh, Dan Jurafsky: The Authenticity Gap in Human Evaluation. EMNLP 2022: 6056-6070
- [c177] Peter Henderson, Mark S. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho: Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. NeurIPS 2022
- [c176] Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang: Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? NeurIPS 2022
- [i77] Nay San, Martijn Bartelds, Tolúlopé Ògúnrèmí, Alison Mount, Ruben Thompson, Michael Higgins, Roy Barker, Jane Simpson, Dan Jurafsky: Automated speech tools for helping communities process restricted-access corpora for language revival efforts. CoRR abs/2204.07272 (2022)
- [i76] Junshen K. Chen, Dallas Card, Dan Jurafsky: Modular Domain Adaptation. CoRR abs/2204.14213 (2022)
- [i75] Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, Dan Jurafsky: Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words. CoRR abs/2205.05092 (2022)
- [i74] Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky: Richer Countries and Richer Representations. CoRR abs/2205.05093 (2022)
- [i73] Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky: Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models. CoRR abs/2205.11503 (2022)
- [i72] Kawin Ethayarajh, Dan Jurafsky: How Human is Human Evaluation? Improving the Gold Standard for NLG with Utility Theory. CoRR abs/2205.11930 (2022)
- [i71] Peter Henderson, Mark S. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho: Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. CoRR abs/2207.00220 (2022)
- [i70] Sterling Alic, Dorottya Demszky, Zid Mancenido, Jing Liu, Heather Hill, Dan Jurafsky: Computationally Identifying Funneling and Focusing Questions in Classroom Discourse. CoRR abs/2208.04715 (2022)
- [i69] Mert Yüksekgönül, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou: When and why vision-language models behave like bags-of-words, and what to do about it? CoRR abs/2210.01936 (2022)
- [i68] Isabel Papadimitriou, Kezia Lopez, Dan Jurafsky: Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models. CoRR abs/2210.05619 (2022)
- [i67] Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan: Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. CoRR abs/2211.03759 (2022)
- [i66] Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky: Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding. CoRR abs/2211.07634 (2022)
- [i65] Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang: Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? CoRR abs/2211.13972 (2022)
- [i64] Eric Mitchell, Peter Henderson, Christopher D. Manning, Dan Jurafsky, Chelsea Finn: Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models. CoRR abs/2211.14946 (2022)

2021
- [j27] Michael Hahn, Dan Jurafsky, Richard Futrell: Sensitivity as a Complexity Measure for Sequence Classification Tasks. Trans. Assoc. Comput. Linguistics 9: 891-908 (2021)
- [c175] Kawin Ethayarajh, Dan Jurafsky: Attention Flows are Shapley Value Explanations. ACL/IJCNLP (2) 2021: 49-54
- [c174] Dorottya Demszky, Jing Liu, Zid Mancenido, Julie Cohen, Heather Hill, Dan Jurafsky, Tatsunori Hashimoto: Measuring Conversational Uptake: A Case Study on Student-Teacher Interactions. ACL/IJCNLP (1) 2021: 1638-1653
- [c173] Nay San, Martijn Bartelds, Mitchell Browne, Lily Clifford, Fiona Gibson, John Mansfield, David Nash, Jane Simpson, Myfany Turpin, Maria Vollmer, Sasha Wilmoth, Dan Jurafsky: Leveraging Pre-Trained Representations to Improve Access to Untranscribed Speech from Endangered Languages. ASRU 2021: 1094-1101
- [c172] Matthew Louis Mauriello, Thierry Lincoln, Grace Hon, Dorien Simon, Dan Jurafsky, Pablo Paredes: SAD: A Stress Annotated Dataset for Recognizing Everyday Stressors in SMS-like Conversational Systems. CHI Extended Abstracts 2021: 399:1-399:7
- [c171] Eva Portelance, Michael C. Frank, Dan Jurafsky, Alessandro Sordoni, Romain Laroche: The Emergence of the Shape Bias Results from Communicative Efficiency. CoNLL 2021: 607-623
- [c170] William Held, Dan Iter, Dan Jurafsky: Focus on what matters: Applying Discourse Coherence Theory to Cross Document Coreference. EMNLP (1) 2021: 1406-1417
- [c169] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c168] Reid Pryzant, Dallas Card, Dan Jurafsky, Victor Veitch, Dhanya Sridhar: Causal Effects of Linguistic Properties. NAACL-HLT 2021: 4095-4109
- [c167] Yasuhide Miura, Yuhao Zhang, Emily Bao Tsai, Curtis P. Langlotz, Dan Jurafsky: Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation. NAACL-HLT 2021: 5288-5304
- [i63] Nay San, Martijn Bartelds, Mitchell Browne, Lily Clifford, Fiona Gibson, John Mansfield, David Nash, Jane Simpson, Myfany Turpin, Maria Vollmer, Sasha Wilmoth, Dan Jurafsky: Leveraging neural representations for facilitating access to untranscribed speech from endangered languages. CoRR abs/2103.14583 (2021)
- [i62] Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky: Frequency-based Distortions in Contextualized Word Embeddings. CoRR abs/2104.08465 (2021)
- [i61] Michael Hahn, Dan Jurafsky, Richard Futrell: Sensitivity as a Complexity Measure for Sequence Classification Tasks. CoRR abs/2104.10343 (2021)
- [i60] Kawin Ethayarajh, Dan Jurafsky: Attention Flows are Shapley Value Explanations. CoRR abs/2105.14652 (2021)
- [i59] Dorottya Demszky, Jing Liu, Zid Mancenido, Julie Cohen, Heather Hill, Dan Jurafsky, Tatsunori Hashimoto: Measuring Conversational Uptake: A Case Study on Student-Teacher Interactions. CoRR abs/2106.03873 (2021)
- [i58] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, et al.: On the Opportunities and Risks of Foundation Models. CoRR abs/2108.07258 (2021)
- [i57] Eva Portelance, Michael C. Frank, Dan Jurafsky, Alessandro Sordoni, Romain Laroche: The Emergence of the Shape Bias Results from Communicative Efficiency. CoRR abs/2109.06232 (2021)
- [i56] William Held, Dan Iter, Dan Jurafsky: Focus on what matters: Applying Discourse Coherence Theory to Cross Document Coreference. CoRR abs/2110.05362 (2021)

2020
- [j26] Julia Mendelsohn, Yulia Tsvetkov, Dan Jurafsky: A Framework for the Computational Linguistic Analysis of Dehumanization. Frontiers Artif. Intell. 3: 55 (2020)
- [j25] Adam S. Miner, Albert Haque, Jason A. Fries, Scott L. Fleming, Denise E. Wilfley, G. Terence Wilson, Arnold Milstein, Dan Jurafsky, Bruce A. Arnow, W. Stewart Agras, Li Fei-Fei, Nigam H. Shah: Assessing the accuracy of automatic speech recognition for psychotherapy. npj Digit. Medicine 3 (2020)
- [j24] Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, Sharad Goel: Racial disparities in automated speech recognition. Proc. Natl. Acad. Sci. USA 117(14): 7684-7689 (2020)
- [c166] Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, Diyi Yang: Automatically Neutralizing Subjective Bias in Text. AAAI 2020: 480-489
- [c165] Dan Iter, Kelvin Guu, Larry Lansing, Dan Jurafsky: Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models. ACL 2020: 4859-4870
- [c164] Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi: Social Bias Frames: Reasoning about Social and Power Implications of Language. ACL 2020: 5477-5490
- [c163] Yiwei Luo, Dallas Card, Dan Jurafsky: DeSMOG: Detecting Stance in Media On Global Warming. EMNLP (Findings) 2020: 3296-3315
- [c162] Kawin Ethayarajh, Dan Jurafsky: Utility is in the Eye of the User: A Critique of NLP Leaderboards. EMNLP (1) 2020: 4846-4853
- [c161] Isabel Papadimitriou, Dan Jurafsky: Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models. EMNLP (1) 2020: 6829-6839
- [c160] Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, Dan Jurafsky: With Little Power Comes Great Responsibility. EMNLP (1) 2020: 9263-9274
- [c159] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
- [c158] Alex Tamkin, Dan Jurafsky, Noah D. Goodman: Language Through a Prism: A Spectral Approach for Multiscale Language Representations. NeurIPS 2020
- [e4] Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel R. Tetreault: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020. Association for Computational Linguistics 2020, ISBN 978-1-952148-25-5 [contents]
- [i55] Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, Joelle Pineau: Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. CoRR abs/2002.05651 (2020)
- [i54] Julia Mendelsohn, Yulia Tsvetkov, Dan Jurafsky: A Framework for the Computational Linguistic Analysis of Dehumanization. CoRR