EAAMO 2024: San Luis Potosi, Mexico
- Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO 2024, San Luis Potosi, Mexico, October 29-31, 2024. ACM 2024, ISBN 979-8-4007-1222-7
- Emma Kallina, Jatinder Singh:
Stakeholder Involvement for Responsible AI Development: A Process Framework. 1:1-1:14
- Lena Armstrong, Abbey Liu, Stephen MacNeil, Danaë Metaxa:
The Silicon Ceiling: Auditing GPT's Race and Gender Biases in Hiring. 2:1-2:18
- Soroush Ebadian, Rupert Freeman, Nisarg Shah:
Harm Ratio: A Novel and Versatile Fairness Criterion. 3:1-3:14
- Rachel Hong, William Agnew, Tadayoshi Kohno, Jamie Morgenstern:
Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp. 4:1-4:17
- Richa Rastogi, Thorsten Joachims:
Fairness in Ranking under Disparate Uncertainty. 5:1-5:31
- Rebecca Dorn, Lee Kezar, Fred Morstatter, Kristina Lerman:
Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias. 6:1-6:12
- Mayra Russo, Mackenzie Jorgensen, Kristen M. Scott, Wendy Xu, Di H. Nguyen, Jessie Finocchiaro, Matthew Olckers:
Bridging Research and Practice Through Conversation: Reflecting on Our Experience. 7:1-7:11
- Jinsook Lee, Emma Harvey, Joyce Zhou, Nikhil Garg, Thorsten Joachims, René F. Kizilcec:
Ending Affirmative Action Harms Diversity Without Improving Academic Merit. 8:1-8:17
- Chinasa T. Okolo, Hongjin Lin:
Explainable AI in Practice: Practitioner Perspectives on AI for Social Good and User Engagement in the Global South. 9:1-9:16
- Mercy Nyamewaa Asiedu, Awa Dieng, Iskandar Haykel, Negar Rostamzadeh, Stephen Pfohl, Chirag Nagpal, Maria Nagawa, Abigail Oppong, Sanmi Koyejo, Katherine A. Heller:
The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa. 10:1-10:24
- Eric Justin Liu, Wonyoung So, Peko Hosoi, Catherine D'Ignazio:
Racial Steering by Large Language Models: A Prospective Audit of GPT-4 on Housing Recommendations. 11:1-11:13
- Yanzhe Zhang, Lu Jiang, Greg Turk, Diyi Yang:
Auditing Gender Presentation Differences in Text-to-Image Models. 12:1-12:10
- Sarah H. Cen, Rohan Alur:
From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing. 13:1-13:14