Contact
Allen Institute for Artificial Intelligence
2157 N Northlake Way Suite 110
Seattle, WA 98103
Email: marky at allenai dot org

Looking for a postdoc position for 2019-2020
CV | Research Statement
Mark Yatskar

I was a PhD student at the University of Washington, co-advised by Luke Zettlemoyer and Ali Farhadi, and have since moved to AI2 in the Young Investigator Program. Prior to UW, I worked with Lillian Lee at Cornell on language simplification. My interests are in the intersection of natural language processing and computer vision, as well as fairness in computing. I use the structure of language to help design computer vision systems, for example, Situation Recognition (demo), which models how objects interact in events using semantic roles. Sometimes I like to work on pure NLP, and I recently released QuAC, a new dataset for multi-turn information-seeking dialogs grounded in documents. With AI systems getting better and being more broadly applied, it's important to think about how models might treat people in similar circumstances differently. Unfortunately, I've found that some systems I built (and systems many others have built) behave differently for men and women (Machines Taught By Photos Learn a Sexist View of Women), but I've been actively researching how to do better without sacrificing accuracy.

News

  • 6/1/19 - Accepted a tenure-track faculty position @ UPenn, starting Fall 2020
  • 11/19/18 - Giving a talk at Naver on Language and Vision
  • 9/18/18 - Giving a talk at OSU CSE on Language As a Scaffold for Grounded Intelligence
  • 6/18/18 - Giving a talk at CVPR in the Language and Vision Workshop.
  • 6/5/18 - Yonatan Bisk, Omer Levy and I are organizing a NAACL workshop Generalization in the Age of Deep Learning.
  • 6/3/18 - Going to be on a panel @ NAACL about Ethics in NLP
  • 4/20/18 - At ISI and UCLA giving talks about imSitu and Neural Motifs.
  • 9/21/17 - "Men Also Like Shopping" featured in a Wired article Machines Taught By Photos Learn a Sexist View of Women
  • 9/17/17 - "Men Also Like Shopping" got the Best Long Paper Award at EMNLP.
  • 7/7/17 - I am graduating this summer and will be joining the AI2 Young Investigator program as a post-doc.
  • 6/15/17 - A port of the Caffe imSitu code to PyTorch has been released on GitHub.
  • 5/11/17 - Received a Young Investigator award from the Allen Institute for Artificial Intelligence.
  • 9/20/16 - imSitu was profiled by The New York Times in the article Computer Vision: On the Way to Seeing More.

Publications

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
[Paper] [Bib]
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordóñez
In International Conference on Computer Vision (ICCV), 2019

A Qualitative Comparison of CoQA, SQuAD 2.0, and QuAC
[Paper] [Bib] [Website]
Mark Yatskar
In North American Chapter of the Association for Computational Linguistics (NAACL), 2019

QuAC: Question Answering in Context
[Paper] [Bib] [Website]
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer
In Empirical Methods in Natural Language Processing (EMNLP), 2018

Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
[Paper] [Bib]
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang
In North American Chapter of the Association for Computational Linguistics (NAACL), 2018

Neural Motifs: Scene Graph Parsing with Global Context
[Paper] [Project Page ] [Code] [Bib]
Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi
In Computer Vision and Pattern Recognition (CVPR), 2018

Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
( Best Long Paper Award ) [Paper] [Bib] [Talk] [Code]
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang
In Empirical Methods in Natural Language Processing (EMNLP), 2017
Press: Wired: Machines Taught By Photos Learn a Sexist View of Women

Commonly Uncommon: Semantic Sparsity in Situation Recognition
[Paper] [Bib] [Demo]
Mark Yatskar, Vicente Ordóñez, Luke Zettlemoyer, Ali Farhadi
In Computer Vision and Pattern Recognition (CVPR), 2017

Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
[Paper] [Bib] [Demo] [Code]
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer
In Association for Computational Linguistics (ACL), 2017

Situation Recognition: Visual Semantic Role Labeling for Image Understanding
[Paper] [Bib] [Supplemental Material] [Slides] [Data] [Browse] [Demo] [Code]
Mark Yatskar, Luke Zettlemoyer, Ali Farhadi
In Computer Vision and Pattern Recognition (CVPR), 2016 (Oral)
Press: New York Times: Computer Vision: On the Way to Seeing More

Stating the Obvious: Extracting Visual Common Sense Knowledge
[Paper] [Bib]
Mark Yatskar, Vicente Ordóñez, Ali Farhadi
In North American Chapter of the Association for Computational Linguistics (NAACL), 2016

See No Evil, Say No Evil: Description Generation from Densely Labeled Images
[Paper] [Bib] [Data] [Captions] [Output]
Mark Yatskar, Michel Galley, Lucy Vanderwende, and Luke Zettlemoyer
In Third Joint Conference on Lexical and Computational Semantics (*SEM), 2014

Learning to Relate Literal and Sentimental Descriptions of Visual Properties
[Paper] [Bib] [Data]
Mark Yatskar, Svitlana Volkova, Asli Celikyilmaz, Bill Dolan, and Luke Zettlemoyer
In North American Chapter of the Association for Computational Linguistics (NAACL), 2013

For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia
[Paper] [Bib] [Data]
Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, Lillian Lee
In North American Chapter of the Association for Computational Linguistics (NAACL), 2010