q_{i−n} a_{i−n} … q_{i−1} a_{i−1} q_i, and feed it into a bidirectional LSTM encoder, where n is the size of the history to be used. We generate the answer using an LSTM decoder which attends to the encoder states. Additionally, as the answer words are likely to appear in the original passage, we employ a copy mechanism in the decoder which allows it to (optionally) copy a word from the passage (See et al., 2017). We call this pointer-generator network PGNet.
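To make the input construction concrete, the following is a minimal sketch of how the history-augmented encoder input could be assembled. The <q>/<a> delimiter tokens and the function name are illustrative assumptions, not the paper's exact preprocessing.

```python
def build_seq2seq_input(history, current_question, n):
    """Prepend the last n (question, answer) turns to the current question.

    `history` is a list of (question, answer) string pairs; the <q>/<a>
    delimiter tokens are an assumption made for illustration.
    """
    turns = history[-n:] if n > 0 else []
    pieces = []
    for q, a in turns:
        pieces += ["<q>", q, "<a>", a]
    pieces += ["<q>", current_question]
    return " ".join(pieces)

# Example using turns from Figure 5: one turn of history plus the current question.
print(build_seq2seq_input([("Who is he?", "A piece of spaghetti")], "Who named him?", n=1))
```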
5.2 Reading Comprehension Model
The state-of-the-art reading comprehension models for extractive question answering focus on finding the passage span that best matches the question (Seo et al., 2016; Chen et al., 2017; Yu et al., 2018). Since their answers are limited to spans, they cannot handle questions whose answers do not overlap with the passage, e.g., Q3, Q4 and Q5 in Figure 1. However, this limitation makes them more effective learners than conversational models, which have to generate an answer from an infinite space of possibilities. We use the Document Reader (DrQA) model of Chen et al. (2017), which has demonstrated strong performance on multiple datasets (Rajpurkar et al., 2016; Labutov et al., 2018). Since DrQA requires text spans as answers during training, we select the span with the highest lexical overlap (F1 score) with the original answer as the gold answer. If the answer appears multiple times in the story, we use the rationale to find the correct occurrence. If any answer word does not appear in the story, we fall back to an additional unknown token as the answer (about 17% of answers). We prepend each question with its past questions and answers to account for conversation history, as in the conversational models.
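As an illustration of this gold-span selection, the sketch below scans candidate passage spans and keeps the one with the highest word-level F1 against the annotated answer. The span-length cap is our own assumption for efficiency, not a detail from the paper, and answer normalization is omitted.

```python
from collections import Counter

def f1(pred_tokens, gold_tokens):
    """Word-level F1 between two token lists (article/punctuation removal omitted)."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def best_span(passage_tokens, answer_tokens, max_len=20):
    """Return the (start, end) passage span with the highest F1 against the answer.

    `max_len` bounds the candidate span length; it is an assumed heuristic.
    """
    best, best_score = None, 0.0
    for i in range(len(passage_tokens)):
        for j in range(i, min(i + max_len, len(passage_tokens))):
            score = f1(passage_tokens[i:j + 1], answer_tokens)
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```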
5.3 A Combined Model
As discussed above, DrQA cannot produce answers that do not overlap with the passage. We combine DrQA with PGNet to address this problem, since PGNet can generate free-form answers. In this combined model, DrQA first points to the answer evidence in the text, and PGNet naturalizes the evidence into an answer. For example, for Q5 in Figure 1, we expect DrQA to first predict the rationale R5, and PGNet to then generate A5 from R5. We make a few changes to DrQA and PGNet based on empirical performance. For DrQA, we use rationales as answers only for questions with non-extractive answers. For PGNet, we provide only the current question and DrQA's span predictions as input to the encoder. (During training, we feed DrQA's oracle spans into PGNet.)
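A schematic sketch of this two-stage pipeline is shown below. The `predict_span` and `generate` methods are assumed interfaces for illustration only; they are not the released APIs of either model.

```python
def answer_question(passage, history, question, drqa_model, pgnet_model):
    """Two-stage pipeline sketch: DrQA extracts evidence, PGNet rewrites it.

    `drqa_model.predict_span` and `pgnet_model.generate` are hypothetical
    interfaces used to show the flow of information, not actual library calls.
    """
    # Stage 1: the extractive reader points at a span of evidence in the passage,
    # conditioning on the current question prepended with conversation history.
    evidence_span = drqa_model.predict_span(passage, history, question)

    # Stage 2: the pointer-generator naturalizes the evidence into a free-form
    # answer, seeing only the current question and the predicted span.
    return pgnet_model.generate(question=question, evidence=evidence_span)
```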
6 Evaluation
We evaluate the above models on CoQA.
6.1 Evaluation Metric
Following SQuAD, we use the macro-average F1 score of word overlap as our main evaluation metric. (SQuAD also uses an exact-match metric; however, we think F1 is more appropriate for our dataset because of its abstractive answers.) In SQuAD, when computing a model's performance, each individual prediction is compared against n human answers, resulting in n F1 scores, the maximum of which is taken as that prediction's F1. However, when computing human performance, a human prediction is compared against only n − 1 human answers, which underestimates human performance. We fix this bias by partitioning the n human answers into n different sets, each containing n − 1 answers, similar to Choi et al. (2018). For each question, we average the F1 across these n sets, for both humans and models. In our final evaluation, we use n = 4 human answers for every question (the original answer and 3 additionally collected answers). The articles a, an and the, as well as punctuation, are excluded from evaluation.
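The metric described above can be summarized in a short sketch. Here `f1_fn` stands for a word-overlap F1 such as the one sketched earlier, and answer normalization (lower-casing, removing articles and punctuation) is assumed to happen before this function is called.

```python
def coqa_f1(prediction, human_answers, f1_fn):
    """Macro-average F1 with the leave-one-out correction described above.

    For each of the n reference sets (each omitting one human answer), score the
    prediction against the remaining n-1 answers and take the maximum; then
    average over the n sets. For human performance, the left-out answer itself
    plays the role of the prediction for that set.
    """
    n = len(human_answers)
    scores = []
    for leave_out in range(n):
        refs = [a for k, a in enumerate(human_answers) if k != leave_out]
        scores.append(max(f1_fn(prediction, ref) for ref in refs))
    return sum(scores) / n
```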
6.2 Experimental Setup
For the seq2seq and PGNet experiments, we use the OpenNMT toolkit (Klein et al., 2017). For the DrQA experiments, we use the implementation from the original paper (Chen et al., 2017). We tune the following hyperparameters on the development data: the number of turns to use from the conversation history, the number of layers, the number of hidden units per layer, and the dropout rate. We initialize the word projection matrix with GloVe (Pennington et al., 2014) for the conversational models and fastText (Bojanowski et al., 2017) for the reading comprehension models, based on empirical performance. We update the projection matrix during training in order to learn embeddings for the delimiter tokens. More details of the hyperparameters can be found in Appendix 2.
Development data
| Model | Child. | Liter. | Mid-High. | News | Wiki. | Reddit | Science | In-domain Overall | Out-of-domain Overall | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Seq2seq | 30.6 | 26.7 | 28.3 | 26.3 | 26.1 | N/A | N/A | 27.5 | N/A | 27.5 |
| PGNet | 49.7 | 42.4 | 44.8 | 45.5 | 45.0 | N/A | N/A | 45.4 | N/A | 45.4 |
| DrQA | 52.4 | 52.6 | 51.4 | 56.8 | 60.3 | N/A | N/A | 54.7 | N/A | 54.7 |
| DrQA+PGNet | 64.5 | 62.0 | 63.8 | 68.0 | 72.6 | N/A | N/A | 66.2 | N/A | 66.2 |
| Human | 90.7 | 88.3 | 89.1 | 89.9 | 90.9 | N/A | N/A | 89.8 | N/A | 89.8 |

Test data
| Model | Child. | Liter. | Mid-High. | News | Wiki. | Reddit | Science | In-domain Overall | Out-of-domain Overall | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Seq2seq | 32.8 | 25.6 | 28.0 | 27.0 | 25.3 | 25.6 | 20.1 | 27.7 | 23.0 | 26.3 |
| PGNet | 49.0 | 43.3 | 47.5 | 47.5 | 45.1 | 38.6 | 38.1 | 46.4 | 38.3 | 44.1 |
| DrQA | 46.7 | 53.9 | 54.1 | 57.8 | 59.4 | 45.0 | 51.0 | 54.5 | 47.9 | 52.6 |
| DrQA+PGNet | 64.2 | 63.7 | 67.1 | 68.3 | 71.4 | 57.8 | 63.1 | 67.0 | 60.4 | 65.1 |
| Human | 90.2 | 88.4 | 89.8 | 88.6 | 89.9 | 86.7 | 88.1 | 89.4 | 87.4 | 88.8 |

Table 6: Models and human performance (F1 score) on the development and the test data. Child., Liter., Mid-High., News and Wiki. are the in-domain domains; Reddit and Science are out-of-domain. Out-of-domain results are available only on the test set (N/A on development).
6.3 Results and Discussion
Table 6 presents the results of the models on the development and the test data. On the test set, the seq2seq model performs the worst, generating frequently occurring answers irrespective of whether they appear in the passage, a well-known behavior of conversational models (Li et al., 2016). PGNet alleviates this frequent-response problem by focusing on the vocabulary of the passage, and it outperforms seq2seq by 17.8 points. However, it still lags behind DrQA by 8.5 points. One reason could be that PGNet has to memorize the whole passage before answering a question, a large overhead which DrQA avoids. DrQA, in turn, fails badly on questions with free-form answers (see the Abstractive row in Table 7). When DrQA is fed into PGNet, we empower both models: DrQA can contribute to free-form answers, and PGNet can focus on the rationale instead of the whole passage. This combination outperforms the vanilla PGNet and DrQA models by 21.0 and 12.5 points respectively. Models vs. Humans The human performance on the test data is 88.8 F1, a strong agreement indicating that CoQA's questions have concrete
answers. Our best model is 23.7 points behind humans, suggesting that the task is difficult for current models. We anticipate that recent innovations such as self-attention (Clark and Gardner, 2018) and contextualized word embeddings (Peters et al., 2018) may improve the results by a few points. In-domain vs. Out-of-domain All models perform worse on the out-of-domain datasets than on the in-domain datasets; the best model drops by 6.6 points. For the in-domain results, both the best model and humans find the literature domain harder than the others, since its vocabulary demands greater proficiency in English. For the out-of-domain results, the Reddit domain is apparently harder, perhaps because Reddit requires reasoning over longer passages (see Table 2). While humans achieve high performance on children's stories, models perform poorly, probably because this domain has fewer training examples than the others (we collect children's stories from MCTest, which contains only 660 passages in total, of which we use 200 stories for the development and test sets). Both humans and models find Wikipedia easy. Error Analysis Table 7 presents fine-grained results of models and humans on the development set. We observe that humans have the highest disagreement on the unanswerable questions. Sometimes people guess an answer even when it is not present in the passage, e.g., one can guess the age of Annie in Figure 1 based on her grandmother's age. Human agreement on abstractive answers is lower than on extractive answers. This is expected because our evaluation metric is based on word overlap
rather than on the meaning of words. For the question did Jenny like her new room?, the human answers she loved it and yes are both accepted. Finding the perfect evaluation metric for abstractive responses remains a challenging problem (Liu et al., 2016; Chaganty et al., 2018) and is beyond the scope of our work. Regarding our models' performance, seq2seq and PGNet perform well on questions with abstractive answers, and DrQA performs well on questions with extractive answers, owing to their respective designs. The combined model improves on both categories. Among the different question types, humans find questions with lexical matches the easiest, followed by paraphrasing, and questions requiring pragmatics the hardest; this is expected since questions with lexical matches and paraphrasing share some similarity with the passage, making them relatively easier to answer than pragmatic questions. The best model follows the same trend. While humans find questions without coreferences easier than those with coreferences (explicit or implicit), the models behave sporadically. It is not clear why humans find implicit coreferences easier than explicit coreferences. A conjecture is that implicit coreferences depend directly on the previous turn, whereas explicit coreferences may have long-distance dependencies on the conversation. Importance of conversation history Finally, we examine how important the conversation history is for the dataset. Table 8 presents results with a varying number of previous turns used as conversation history. All models succeed at leveraging history, but only up to a history of one previous turn (except PGNet). It is surprising that using more turns can decrease performance. We also perform an experiment on humans to measure the trade-off between their performance and the number of previous turns shown. Based on the heuristic that short questions likely depend on the conversation history, we sample 300 one- or two-word questions and collect answers to them while varying the number of previous turns shown. When we do not show any history, human performance drops to 19.9 F1, as opposed to 86.4 F1 when the full history is shown. When the previous question and answer are shown, performance jumps to 79.8 F1, suggesting that the previous turn plays an important role in understanding the current question. When the last two questions and answers are shown, humans reach 85.3 F1, close to the performance when the full history is shown. This suggests that most questions in a conversation have a limited dependency, within a bound of two turns.
| Type | Seq2seq | PGNet | DrQA | DrQA+PGNet | Human |
|---|---|---|---|---|---|
| Answer Type | | | | | |
| Answerable | 27.5 | 45.4 | 54.7 | 66.3 | 89.9 |
| Unanswerable | 33.9 | 38.2 | 55.0 | 51.2 | 72.3 |
| Extractive | 20.2 | 43.6 | 69.8 | 70.5 | 91.1 |
| Abstractive | 43.1 | 49.0 | 22.7 | 57.0 | 86.8 |
| Named Ent. | 21.9 | 43.0 | 72.6 | 72.2 | 92.2 |
| Noun Phrase | 17.2 | 37.2 | 64.9 | 64.1 | 88.6 |
| Yes | 69.6 | 69.9 | 7.9 | 72.7 | 95.6 |
| No | 60.2 | 60.3 | 18.4 | 58.7 | 95.7 |
| Number | 15.0 | 48.6 | 66.3 | 71.7 | 91.2 |
| Date/Time | 13.7 | 50.2 | 79.0 | 79.1 | 91.5 |
| Other | 14.1 | 33.7 | 53.5 | 55.2 | 80.8 |
| Question Type | | | | | |
| Lexical Mat. | 20.7 | 40.7 | 57.2 | 65.7 | 91.7 |
| Paraphrasing | 23.7 | 33.9 | 46.9 | 64.4 | 88.8 |
| Pragmatics | 33.9 | 43.1 | 57.4 | 60.6 | 84.2 |
| No coref. | 16.1 | 31.7 | 54.3 | 57.9 | 90.3 |
| Exp. coref. | 30.4 | 42.3 | 49.0 | 66.3 | 87.1 |
| Imp. coref. | 31.4 | 39.0 | 60.1 | 66.4 | 88.7 |
Table 7: Fine-grained results of different question and answer types on the development set. For the question type results, we only analyze 150 questions as described in Section 4.3.

| History size | Seq2seq | PGNet | DrQA | DrQA+PGNet |
|---|---|---|---|---|
| 0 | 24.0 | 41.3 | 50.4 | 61.5 |
| 1 | 27.5 | 43.9 | 54.7 | 66.2 |
| 2 | 21.4 | 44.6 | 54.6 | 66.0 |
| all | 21.0 | 45.4 | 52.3 | 64.3 |

Table 8: Results on the development set with different history sizes. History size indicates the number of previous turns prepended to the current question. Each turn contains a question and its answer.
7 Related work
We organize CoQA's relation to existing work under the following criteria. Knowledge source We answer questions about text passages, which serve as our knowledge source. Another common knowledge source is a machine-friendly database, which organizes world facts in the form of a table or a graph (Berant et al., 2013; Pasupat and Liang, 2015; Bordes et al., 2015; Saha et al., 2018; Talmor and Berant, 2018). However, understanding their structure requires expertise, making it challenging to crowd-source large QA datasets without relying on templates. Like passages, images and videos are also human-friendly sources (Antol et al., 2015; Das et al., 2017; Hori et al., 2018). Naturalness There are various ways to curate questions: removing words from a declarative sentence to create a fill-in-the-blank question (Hermann et al., 2015), using a hand-written grammar
to create artificial questions (Weston et al., 2016; Welbl et al., 2018), paraphrasing artificial questions into natural questions (Saha et al., 2018; Talmor and Berant, 2018) or, as in our case, asking humans to pose questions (Rajpurkar et al., 2016; Nguyen et al., 2016). While the former approaches enable collecting large datasets cheaply, the latter yield natural questions. Recent efforts emphasize collecting questions without showing the knowledge source, in order to encourage the independence of question and source vocabulary (Dunn et al., 2017; Das et al., 2017; Kočiský et al., 2018). Since we allow the questioner to see the passage, we incorporate measures to increase this independence, although complete independence is not attainable in our setup (Section 3.1). However, an advantage of our setup is that the questioner can validate the answerer on the spot, resulting in high-agreement data. Conversational Modeling Our focus is on questions that appear in a conversation. Iyyer et al. (2017) and Talmor and Berant (2018) break down a complex question into a series of simple questions, mimicking conversational QA. Our work is closest to Das et al. (2017) and Saha et al. (2018), who perform conversational QA on images and a knowledge graph respectively, with the latter focusing on questions obtained by paraphrasing templates. In parallel to our work, Choi et al. (2018) also created a dataset of conversations in the form of questions and answers on text passages. However, in our interface, we show the passage to both the questioner and the answerer, whereas their interface shows only a title to the questioner and the full passage to the answerer. As a result, their answers are as long as 15.1 words on average (ours are 2.7 words), because their setup encourages the answerer to reveal more information for the following questions. Human performance on our test set is 88.8 F1 whereas
theirs is 74.6 F1. Moreover, while CoQA's answers can be abstractive, their answers are restricted to extractive text spans. Our dataset contains passages from seven diverse domains, whereas their dataset is built only from Wikipedia articles about people. Owing to the conversational nature of its questions and answers, our work can also be viewed as an instantiation of a dialog problem (Weizenbaum, 1966; Cohen et al., 1982; Young et al., 2013; Serban et al., 2018). A limitation of the dialog in CoQA is that the answerer can only answer questions, whereas in a real dialog the answerer can also ask clarification questions. Reasoning Our dataset is a testbed for various reasoning phenomena occurring in the context of a conversation (Section 4). Our work parallels a growing interest in developing datasets that test specific reasoning abilities: algebraic reasoning (Clark, 2015), logical reasoning (Weston et al., 2016), commonsense reasoning (Ostermann et al., 2018) and multi-fact reasoning (Welbl et al., 2018; Khashabi et al., 2018; Talmor and Berant, 2018).
8 Conclusions
In this paper, we introduced CoQA, a large-scale dataset for building conversational question answering systems. Unlike existing reading comprehension datasets, CoQA contains conversational questions, natural answers accompanied by extractive spans as rationales, and text passages from diverse domains. Our experiments show that existing conversational and reading comprehension models are no match for human competence on CoQA. We hope this work will spur more research on conversational modeling, a key ingredient for enabling natural human-machine communication.
Acknowledgements We would like to thank the MTurk workers, especially the Master Chatters and the MTC forum members, for contributing to the creation of CoQA, for giving feedback on various pilot interfaces, and for enthusiastically promoting our HITs on various forums. CoQA has been made possible with financial support from the SAIL Toyota Grant, the Facebook ParlAI award and the Amazon Research award. Danqi is supported by a Facebook PhD fellowship. We also would like to thank the members of the Stanford NLP group for critical feedback on the interface
and experiments. We especially thank Drew Arad Hudson for participating in initial discussions, and Matt Lamm for proof-reading the paper. We also thank the VisDial team and Spandana Gella for their help in generating Figure 3.
References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR).
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Empirical Methods in Natural Language Processing (EMNLP).
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5:135–146.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale Simple Question Answering with Memory Networks. arXiv preprint arXiv:1506.02075.
Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The Price of Debiasing Automatic Metrics in Natural Language Evaluation. In Association for Computational Linguistics (ACL).
Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. In Association for Computational Linguistics (ACL).
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Association for Computational Linguistics (ACL).
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question Answering in Context. In Empirical Methods in Natural Language Processing (EMNLP).
Christopher Clark and Matt Gardner. 2018. Simple and Effective Multi-Paragraph Reading Comprehension. In Association for Computational Linguistics (ACL).
Peter Clark. 2015. Elementary School Science and Math Tests as a Driver for AI: Take the Aristo Challenge! In Association for the Advancement of Artificial Intelligence (AAAI).
Philip R Cohen, C Raymond Perrault, and James F Allen. 1982. Beyond Question Answering. In Strategies for Natural Language Processing, pages 245–274. New York: Psychology Press.
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Computer Vision and Pattern Recognition (CVPR).
Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine. arXiv preprint arXiv:1704.05179.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical Neural Story Generation. In Association for Computational Linguistics (ACL).
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems (NIPS).
Chiori Hori, Huda Alamri, Jue Wang, Gordon Winchern, Takaaki Hori, Anoop Cherian, Tim K Marks, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, and others. 2018. End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features. arXiv preprint arXiv:1806.08409.
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based Neural Structured Learning for Sequential Question Answering. In Association for Computational Linguistics (ACL).
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Association for Computational Linguistics (ACL).
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL).
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences. In North American Chapter of the Association for Computational Linguistics (NAACL).
Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A Dialog Research Software Platform. In Empirical Methods in Natural Language Processing (EMNLP).
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. In Association for Computational Linguistics (ACL), System Demonstrations, pages 67–72.
Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA Reading Comprehension Challenge. Transactions of the Association for Computational Linguistics, 6:317–328.
Igor Labutov, Bishan Yang, Anusha Prakash, and Amos Azaria. 2018. Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds. In Association for Computational Linguistics (ACL).
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Empirical Methods in Natural Language Processing (EMNLP).
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversation Models. In North American Chapter of the Association for Computational Linguistics (NAACL).
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In Empirical Methods in Natural Language Processing (EMNLP).
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268.
Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge. In International Workshop on Semantic Evaluation (SemEval).
Panupong Pasupat and Percy Liang. 2015. Compositional Semantic Parsing on Semi-Structured Tables. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP).
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP).
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In North American Chapter of the Association for Computational Linguistics (NAACL).
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In Association for Computational Linguistics (ACL).
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Empirical Methods in Natural Language Processing (EMNLP).
Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. In Empirical Methods in Natural Language Processing (EMNLP).
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A Machine Comprehension Dataset. arXiv preprint arXiv:1611.09830.
Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-Driven Response Generation in Social Media. In Empirical Methods in Natural Language Processing (EMNLP).
Oriol Vinyals and Quoc Le. 2015. A Neural Conversational Model. arXiv preprint arXiv:1506.05869.
Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. In Association for the Advancement of Artificial Intelligence (AAAI).
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making Neural QA as Simple as Possible but not Simpler. In Computational Natural Language Learning (CoNLL).
Joseph Weizenbaum. 1966. ELIZA – a Computer Program for the Study of Natural Language Communication Between Man and Machine. Commun. ACM, 9(1):36–45.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Annual Meeting of the Association for Computational Linguistics (ACL).
Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing Multiple Choice Science Questions. In Workshop on Noisy User-generated Text.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. International Conference on Learning Representations (ICLR).
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing Datasets for Multi-hop Reading Comprehension Across Documents. Transactions of the Association for Computational Linguistics, 6:287–302.
Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2016. Generative Deep Neural Networks for Dialogue: A Short Review. arXiv preprint arXiv:1611.06216.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2016. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In International Conference on Learning Representations (ICLR).
Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. 2018. A Survey of Available Corpora For Building Data-Driven Dialogue Systems: The Journal Version. Dialogue & Discourse, 9(1):1–49.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. 2013. POMDP-Based Statistical Spoken Dialog Systems: A Review. Proceedings of the IEEE, 101(5):1160–1179.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In North American Chapter of the Association for Computational Linguistics (NAACL).
Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and Accurate Reading Comprehension by Combining Self-Attention and Convolution. In International Conference on Learning Representations (ICLR).
Alon Talmor and Jonathan Berant. 2018. The Web as a Knowledge-Base for Answering Complex Questions. In North American Chapter of the Association for Computational Linguistics (NAACL).
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing Dialogue Agents: I have a dog, do you have pets too? In Association for Computational Linguistics (ACL).
Supplementary Material
This supplementary material provides additional examples in Figure 5, Figure 6 and Figure 7, and the questioner and answerer annotation interfaces in Figure 8 and Figure 9. We also give the worker selection criteria in Appendix 1 and the hyperparameters used for training the neural models in Appendix 2.
Appendix 1: Worker selection
First, each worker has to pass a qualification test that assesses their understanding of the guidelines for conversational QA. The success rate for the qualification test is 57% over 960 workers who attempted it. The guidelines explain that the task is a conversation about a passage in the form of questions and answers, and include an example conversation and a list of dos and don'ts. Beyond that, we give workers complete freedom to judge what is good and bad during the actual conversation. This helped us curate diverse categories of questions that were not present in the guidelines (e.g., true-or-false, fill-in-the-blank and time-series questions). We pay workers an hourly wage of around 8–15 USD.
Appendix 2: Hyperparameters for neural models
For all seq2seq and PGNet experiments, we use the default settings of OpenNMT: 2-layer LSTMs with 500 hidden units for both the encoder and the decoder. The models are optimized using SGD, with an initial learning rate of 1.0 and a decay rate of 0.5. A dropout rate of 0.3 is applied to all layers. For all DrQA experiments, the best configuration we find is 3 layers of LSTMs with 300 hidden units per layer. A dropout rate of 0.4 is applied to all LSTM layers and a dropout rate of 0.5 is applied to word embeddings. We use Adam to optimize the DrQA models.
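For reference, the settings above can be summarized as the following configuration sketch. The dictionary keys are our own naming and do not correspond to actual OpenNMT or DrQA command-line options.

```python
# Hyperparameters reported in Appendix 2, collected here for convenience.
# Key names are illustrative only; they are not real OpenNMT or DrQA flags.
SEQ2SEQ_PGNET_CONFIG = {
    "encoder_layers": 2,
    "decoder_layers": 2,
    "hidden_units": 500,
    "optimizer": "sgd",
    "learning_rate": 1.0,
    "learning_rate_decay": 0.5,
    "dropout": 0.3,
}

DRQA_CONFIG = {
    "lstm_layers": 3,
    "hidden_units": 300,
    "lstm_dropout": 0.4,
    "embedding_dropout": 0.5,
    "optimizer": "adam",
}
```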
Marsha loves playing with her noodle friend. She had it for a long time so it is now a dark brown color. When her mom first made it, it was white. The night she met her noodle friend was spaghetti night. Marsha’s favorite dinner was spaghetti, which happened to be every Tuesday night. On one Tuesday, a piece of spaghetti fell on the kitchen floor. To Marsha, it looked like a stick man so she kept him. She named her new noodle friend Joey and took him everywhere she went. Sometimes Joey gets a little dried out so Marsha’s mom told her to soak him in water every few days. There were a couple times that the family dog, Mika, has tried to take Joey from Marsha and eat him! So from now on, Marsha takes extra special care to make sure Joey is safe and sound at all times. During the day she keeps him in a plastic bag in her pocket. At night, she puts him under her pillow. She loves Joey and wants to always be friends with him.
Q1: Is Joey a male or female?
A1: Male
R1: Joey and took him everywhere she went
Q2: Who is he?
A2: A piece of spaghetti
R2: a piece of spaghetti fell on the kitchen floor. To Marsha, it looked like a stick man so she kept him. She named her new noodle friend Joey
Q3: Who named him?
A3: Marsha
R3: To Marsha, it looked like a stick man so she kept him. She named her new noodle friend Joey
Q4: Who made him?
A4: Her mom
R4: When her mom first made it, it was white
Q5: Why did she make him?
A5: For spaghetti night
R5: The night she met her noodle friend was spaghetti night
Q6: When was spaghetti night?
A6: Tuesday night
R6: spaghetti, which happened to be every Tuesday night.
Q7: Where does he live?
A7: A plastic bag
R7: During the day she keeps him in a plastic bag in her pocket.
Q8: Does he stay fresh?
A8: Yes
R8: Sometimes Joey gets a little dried out so Marsha’s mom told her to soak him in water every few days.
Q9: How?
A9: She soaks him in water
R9: Sometimes Joey gets a little dried out so Marsha’s mom told her to soak him in water
Q10: Does she carry him?
A10: Yes
R10: She named her new noodle friend Joey and took him everywhere she went.
...
Figure 5: In this example, the conversation flows backward in the first few turns. Colors indicate coreference chains.
Anthropology is the study of humans and their societies in the past and present. Its main subdivisions are social anthropology and cultural anthropology, which describes the workings of societies around the world, ... Similar organizations in other countries followed: The American Anthropological Association in 1902, the Anthropological Society of Madrid (1865), the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871), and many others subsequently. The majority of these were evolutionist. One notable exception was the Berlin Society of Anthropology (1869) founded by Rudolph Virchow, known for his vituperative attacks on the evolutionists. Not religious himself, he insisted that Darwin’s conclusions lacked empirical foundation.
Q: Who disagreed with Darwin?
A: Rudolph Virchow
R: Rudolph Virchow, known for his vituperative attacks on the evolutionists. Not religious himself, he insisted that Darwin’s conclusions lacked empirical foundation.
Q: What did he found?
A: the Berlin Society of Anthropology
R: the Berlin Society of Anthropology (1869) founded by Rudolph Virchow
Q: In what year?
A: 1869
R: the Berlin Society of Anthropology (1869)
Q: What was founded in 1865?
A: the Anthropological Society of Madrid
R: the Anthropological Society of Madrid (1865)
Q: And in 1870?
A: the Anthropological Society of Vienna
R: the Anthropological Society of Vienna (1870)
Q: How much later was the Italian Society of Anthropology and Ethnology founded?
A: One year
R: the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871)
Q: Was the American Anthropological Association founded before or after that?
A: after
R: The American Anthropological Association in 1902
Q: In what year?
A: 1902
R: The American Anthropological Association in 1902
Q: Was it an evolutionist organization?
A: Yes
R: The majority of these were evolutionist
...
Figure 6: In this example, the questioner explores questions related to time.
New Jersey is a state in the Northeastern and mid-Atlantic regions of the United States. It is a peninsula, bordered on the north and east by the state of New York; on the east, southeast, and south by the Atlantic Ocean; on the west by the Delaware River and Pennsylvania; and on the southwest by the Delaware Bay and Delaware. New Jersey is the fourth-smallest state by area but the 11th-most populous and the most densely populated of the 50 U.S. states. New Jersey lies entirely within the combined statistical areas of New York City and Philadelphia and is the third-wealthiest state by median household income as of 2016.
Q: Where is New jersey located?
A: In the Northeastern and mid-Atlantic regions of the US.
R: New Jersey is a state in the Northeastern and mid-Atlantic regions of the United States
Q: What borders it to the North and East?
A: New York
R: bordered on the north and east by the state of New York;
Q: Is it an Island?
A: No.
R: It is a peninsula
Q: What borders to the south?
A: Atlantic Ocean
R: bordered on the north and east by the state of New York; on the east, southeast, and south by the Atlantic Ocean
Q: to the west?
A: Delaware River and Pennsylvania.
R: on the west by the Delaware River and Pennsylvania;
Q: is it a small state?
A: Yes.
R: New Jersey is the fourth-smallest state by area
Q: How many people live there?
A: unknown
R: N/A
Q: Do a lot of people live there for its small size?
A: Yes.
R: the most densely populated of the 50 U.S. states.
Q: Is it a poor state?
A: No.
R: Philadelphia and is the third-wealthiest state by median household income as of 2016.
Q: What country is the state apart of?
A: United States
R: New Jersey is a state in the Northeastern and mid-Atlantic regions of the United States
...
Figure 7: A conversation containing No and unknown as answers.
Figure 8: Questioner Interface
Figure 9: Answerer Interface