Yonatan Bitton
2024
- [i17] Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan Herzig, Or Honovich, Michael Tseng, Michael Collins, Roee Aharoni, Mor Geva: A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains. CoRR abs/2402.00559 (2024)
- [i16] Oren Sultan, Yonatan Bitton, Ron Yosef, Dafna Shahaf: ParallelPARC: A Scalable Pipeline for Generating Natural-Language Analogies. CoRR abs/2403.01139 (2024)

2023
- [c10] Yonatan Bitton, Ron Yosef, Eliyahu Strugo, Dafna Shahaf, Roy Schwartz, Gabriel Stanovsky: VASR: Visual Analogies of Situation Recognition. AAAI 2023: 241-249
- [c9] Ron Yosef, Yonatan Bitton, Dafna Shahaf: IRFL: Image Recognition of Figurative Language. EMNLP (Findings) 2023: 1044-1058
- [c8] Yonatan Bitton, Shlomi Cohen-Ganor, Ido Hakimi, Yoad Lewenberg, Roee Aharoni, Enav Weinreb: q2d: Turning Questions into Dialogs to Teach Models How to Search. EMNLP 2023: 13661-13676
- [c7] Nitzan Bitton Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, Roy Schwartz: Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images. ICCV 2023: 2616-2627
- [c6] Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gardner, Rohan Taori, Ludwig Schmidt: VisIT-Bench: A Dynamic Benchmark for Evaluating Instruction-Following Vision-and-Language Models. NeurIPS 2023
- [c5] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah M. Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander J. Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt: DataComp: In search of the next generation of multimodal datasets. NeurIPS 2023
- [c4] Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, Idan Szpektor: What You See is What You Read? Improving Text-Image Alignment Evaluation. NeurIPS 2023
- [i15] Nitzan Bitton Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, Roy Schwartz: Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images. CoRR abs/2303.07274 (2023)
- [i14] Ron Yosef, Yonatan Bitton, Dafna Shahaf: IRFL: Image Recognition of Figurative Language. CoRR abs/2303.15445 (2023)
- [i13] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah M. Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt: DataComp: In search of the next generation of multimodal datasets. CoRR abs/2304.14108 (2023)
- [i12] Yonatan Bitton, Shlomi Cohen-Ganor, Ido Hakimi, Yoad Lewenberg, Roee Aharoni, Enav Weinreb: q2d: Turning Questions into Dialogs to Teach Models How to Search. CoRR abs/2304.14318 (2023)
- [i11] Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, Idan Szpektor: What You See is What You Read? Improving Text-Image Alignment Evaluation. CoRR abs/2305.10400 (2023)
- [i10] Rodrigo Valerio, João Bordalo, Michal Yarom, Yonatan Bitton, Idan Szpektor, João Magalhães: Transferring Visual Attributes from Natural Language to Verified Image Generation. CoRR abs/2305.15026 (2023)
- [i9] Netta Madvil, Yonatan Bitton, Roy Schwartz: Read, Look or Listen? What's Needed for Solving a Multimodal Dataset. CoRR abs/2307.04532 (2023)
- [i8] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Yitzhak Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt: OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models. CoRR abs/2308.01390 (2023)
- [i7] Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gardner, Rohan Taori, Ludwig Schmidt: VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use. CoRR abs/2308.06595 (2023)
- [i6] Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover: VideoCon: Robust Video-Language Alignment via Contrast Captions. CoRR abs/2311.10111 (2023)
- [i5] Brian Gordon, Yonatan Bitton, Yonatan Shafir, Roopal Garg, Xi Chen, Dani Lischinski, Daniel Cohen-Or, Idan Szpektor: Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment. CoRR abs/2312.03766 (2023)

2022
- [c3] Yonatan Bitton, Nitzan Bitton Guetta, Ron Yosef, Yuval Elovici, Mohit Bansal, Gabriel Stanovsky, Roy Schwartz: WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models. NeurIPS 2022
- [i4] Yonatan Bitton, Nitzan Bitton Guetta, Ron Yosef, Yuval Elovici, Mohit Bansal, Gabriel Stanovsky, Roy Schwartz: WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models. CoRR abs/2207.12576 (2022)
- [i3] Yonatan Bitton, Ron Yosef, Eli Strugo, Dafna Shahaf, Roy Schwartz, Gabriel Stanovsky: VASR: Visual Analogies of Situation Recognition. CoRR abs/2212.04542 (2022)

2021
- [c2] Yonatan Bitton, Michael Elhadad, Gabriel Stanovsky, Roy Schwartz: Data Efficient Masked Language Modeling for Vision and Language. EMNLP (Findings) 2021: 3013-3028
- [c1] Yonatan Bitton, Gabriel Stanovsky, Roy Schwartz, Michael Elhadad: Automatic Generation of Contrast Sets from Scene Graphs: Probing the Compositional Consistency of GQA. NAACL-HLT 2021: 94-105
- [i2] Yonatan Bitton, Gabriel Stanovsky, Roy Schwartz, Michael Elhadad: Automatic Generation of Contrast Sets from Scene Graphs: Probing the Compositional Consistency of GQA. CoRR abs/2103.09591 (2021)
- [i1] Yonatan Bitton, Gabriel Stanovsky, Michael Elhadad, Roy Schwartz: Data Efficient Masked Language Modeling for Vision and Language. CoRR abs/2109.02040 (2021)

2020
- [j1] Yonatan Bitton, Raphael Cohen, Tamar Schifter, Eitan Bachmat, Michael Elhadad, Noémie Elhadad: Cross-lingual Unified Medical Language System entity linking in online health communities. J. Am. Medical Informatics Assoc. 27(10): 1585-1592 (2020)
last updated on 2024-04-25 02:02 CEST by the dblp team
all metadata released as open data under CC0 1.0 license