- Mubashara Akhtar
EMNLP 2020: Mentoring sessions, keynotes, new datasets, and take-aways for online conferences
Updated: May 21, 2022
Just like many other conferences this year, EMNLP 2020 was also moved online. One of the top NLP conferences, it received 3,559 submissions in total – compared to 2,876 (2019), 2,231 (2018), and 1,418 (2017) in the three previous years. 754 submissions were accepted to EMNLP and 520 to Findings of EMNLP (the newly introduced sister proceedings), which corresponds to an acceptance rate of 22.4% for EMNLP and 15.5% for Findings.
Here you can check out the EMNLP proceedings and Findings of EMNLP.
The following graphics give an overview of submissions and acceptances per track (presented by Yulan He, one of the program chairs):
With several keynotes, panels, workshops, and tutorials, the conference offered a wide variety of sessions and topics. In this blog post I will present highlights from selected sessions and my key takeaways.
The post concludes with some tips on how to get the most out of an online conference (especially for students!), which you can find at the bottom.
Newly introduced Datasets/Tasks
The EMNLP submissions introduced a number of new datasets and tasks across various NLP topics; here are three that I found especially interesting:
[Parikh et al., 2020] introduce ToTTo, an open-domain table-to-text generation dataset. Consisting of more than 120,000 training examples, the dataset is based on Wikipedia tables and uses the table, its metadata, and a set of selected table cells to produce a one-sentence textual description. The authors also discuss the limitations of previous approaches, which are restricted to certain domains and table schemas.
[Ladhak et al., 2020] introduce WikiLingua, a multilingual dataset for evaluating cross-lingual abstractive summarization across 18 languages. The dataset was created from WikiHow – a diverse collection of how-to guides on various topics. As can be seen in the graphic below, the authors extracted the article and summary texts from the structured guides. WikiLingua contains approx. 141k English article-summary pairs, followed by Spanish and Portuguese, and covers various other languages such as Indonesian, Korean, Hindi, and Turkish.
[Ning et al., 2020] discuss challenges related to temporal relationships in texts and point out the lack of temporal relationships in reading comprehension benchmarks. The authors introduce “TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions” to address this shortcoming. Based on text snippets extracted from the TempEval3 dataset (UzZaman et al., 2013), 25,000 events were annotated and 21,000 user-generated question-answer pairs were created.
The diversity and inclusion chairs organized a variety of initiatives for social interaction beyond the Q&A and gather.town sessions. These included:
Group Mentoring Sessions (“closed” as well as open mentoring sessions were offered)
Birds of a Feather Sessions (discussions on specific research areas)
Undergraduate Student Panel (undergraduate attendees interested in pursuing graduate degrees)
Affinity Group Socials (formed around non-research focused interests with the aim of promoting socio-cultural inclusion)
I will describe the key points from the mentoring sessions, as I was able to attend three of the four.
Students and early-career researchers had the opportunity to register beforehand for closed mentoring sessions, where a small group of attendees was matched with a senior researcher based on shared interests. I had the opportunity to attend a session with Tal Linzen (and only two fellow attendees – so we could discuss our questions in depth).
I also attended two open mentoring sessions – one with Michael Roth, Yevgeni Berzak, and Marzieh Fadaee, and another with Noah Smith, Mohit Bansal, Julia Hockenmaier, and Hannaneh Hajishirzi.
All mentoring sessions were very insightful and provided lots of useful advice. Below I briefly outline my main takeaways from these sessions:
Work-life balance. Choose wisely where you invest your time. Before accepting every request, pause and think about how it will be valuable for your career. BUT once you agree to something (e.g. a project), be active about it. On this topic, I found this presentation on balancing different tasks by Emily Bender (recommended by Tal Linzen) very useful: http://faculty.washington.edu/ebender/papers/BalancingTeachingResearch2015.pdf
Picking your research problem. Here I’d like to cite Noah Smith: “Do good work and focus on good questions and the rest will follow (e.g. papers). Pick quality over quantity”. Furthermore, the mentors emphasized the importance of finding an area which is not too crowded. Hannaneh Hajishirzi mentioned an interesting method for prioritizing different research problems/directions: start an Excel sheet and fill in the following ratings - (i) the impact your problem could have, (ii) how excited you are about it, (iii) how excited your supervisor is about the given problem and then prioritize based on these ratings together with your supervisor(s).
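The rating sheet Hannaneh Hajishirzi described is easy to mock up in code. Below is a minimal sketch of the idea; the example problems, the 1–5 scale, and the equal weighting of the three ratings are my own assumptions, not part of the method as she stated it:

```python
# Sketch of a rating sheet for prioritizing research directions.
# Each entry: (problem, impact, my excitement, supervisor's excitement),
# all rated on a hypothetical 1-5 scale.
problems = [
    ("table-to-text generation", 5, 4, 5),
    ("cross-lingual summarization", 4, 5, 3),
    ("temporal reading comprehension", 3, 3, 4),
]

# Rank by the (equally weighted) sum of the three ratings, highest first.
ranked = sorted(problems, key=lambda p: sum(p[1:]), reverse=True)

for name, impact, mine, supervisor in ranked:
    print(f"{name}: total score {impact + mine + supervisor}")
```

In practice you would of course discuss the resulting ranking with your supervisor(s) rather than follow the scores mechanically.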
Read, read, read. Advice by Julia Hockenmaier: if you are starting out in a new research direction, read all the papers in that direction, write a literature review of the area, and read some PhD theses that have been written on it. Mohit Bansal additionally suggested reading broadly across more diverse topics such as cognitive science, computer vision, core ML, robotics, etc.
Final advice by Noah Smith: “Be patient with yourself, there is a reason a PhD takes a lot of time - it is a marathon and not a sprint”.
The keynote I enjoyed the most was the first one by Claire Cardie. She gave some very interesting historical insights into Information Extraction (IE), how it evolved over the years and its impact on other (sub-)areas of NLP research.
She started her talk with the MUC-3 task, the first community-wide IE evaluation, introduced in 1991. At that time, statistical part-of-speech taggers and syntactic parsers weren’t widely available, and grammars were constructed by hand. She then covered the major progress in IE over the following 15 years, highlighting ML pipelines, joint inference models, joint learning models, and (finally) neural network approaches.
She concluded the talk with her thoughts on future directions for IE. Claire emphasized the limitations of current solutions for event understanding (at the entity level as well as for the task of event co-reference resolution) and noted that neural methods lack good document-level representations. Moreover, she stressed the importance of language/genre/domain adaptation and underlined this point with the performance losses observed in state-of-the-art NER systems when one of these settings changes.
Finally, she highlighted a way of finding new research directions: look back at old research, which can surface old but hard and insufficiently explored tasks – just like the MUC-3 task.
Another interesting keynote was given by Rich Caruana from Microsoft, titled “Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning”. He talked about the current trend of interpretable ML models and explanation methods for DNNs, and referenced some interesting papers on this topic, e.g. “Do People and Neural Nets Pay Attention to the Same Words: Studying Eye-tracking Data for Non-factoid QA Evaluation” (Bolotova et al., 2020) and “Beyond Accuracy: Behavioral Testing of NLP models with CheckList” (Ribeiro et al., 2020).
Tips for future online conferences
EMNLP 2020 was the first virtual conference I attended and with COLING and NeurIPS just around the corner, these are my takeaways for future online venues:
Preparation. With so many sessions taking place in parallel, almost around the clock, it was very helpful to invest some time beforehand to find out which sessions matched my interests and took place at a reasonable time in my time zone. As the talks are pre-recorded, I’d suggest selecting a small number of papers you are most interested in and watching their recordings (or reading the papers) in advance, so that you can participate well in the Q&A sessions. Creating my own schedule of sessions before the start of the conference helped me make the most of it.
Keynotes. Just as at in-person conferences, the keynote talks are usually very insightful and provide inspiration, novel ideas, critical thoughts, etc. They were not available as pre-recordings but were provided after the conference. Still, I suggest watching them live – postponing the keynotes until after the conference can mean missing them entirely.
Networking. In my opinion, networking at a virtual conference is not too difficult – instead of walking straight up to an experienced researcher and talking to them, you can take your time to look at their current research and simply ask questions in the chat, for example about a recent publication. I had very good experiences doing so – the researchers replied to my questions and were happy that I had reached out to them.
Mentoring Sessions. Finally, I strongly recommend attending the mentoring sessions for students, as well as the other social events on offer. I benefitted from these sessions and came away with a lot of advice and some new contacts.
In conclusion, EMNLP 2020 was a great virtual conference experience!