NewFashionQA

Youth Science Fund survey

Multi-turn QA (document-based dialogue)

DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension https://dataset.org/dream/

Challenging reading comprehension on daily conversation: Passage completion on multiparty dialog.

QuAC: Question answering in context

A Dataset for Document Grounded Conversations

WIZARD OF WIKIPEDIA: KNOWLEDGE-POWERED CONVERSATIONAL AGENTS

Towards Exploiting Background Knowledge for Building Conversation Systems

Surveys

A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC

Emergent Logical Structure in Vector Representations of Neural Readers[https://arxiv.org/pdf/1611.07954v1.pdf]

A survey of machine reading comprehension tasks, 林鸿宇, 韩先培 [https://mp.weixin.qq.com/s?__biz=MzIxNzE2MTM4OA==&mid=2665643130&idx=1&sn=5f75f0d4978289caea6c4cb37b0b74c4]

A survey of automatic question answering, 郑实福, 刘挺, 秦兵, 李生 - 中文信息学报

A survey of answer summarization methods for community question answering, 刘秉权, 徐振, 刘峰, 刘铭, 孙承杰, 王晓龙 - 中文信息学报

A survey of question answering systems, 李沛晏, 朱露, 吴多胜 - 数字技术与应用

Non-factoid questions based on semantic retrieval in domain-specific question answering systems, 仇瑜, 程力 - 北京大学学报 (自然科学版), 2019

Reading-comprehension question answering based on Chinese discourse frame-semantic parsing, 王智强, 李茹, 梁吉业, 张旭华, 武娟, 苏娜 - 计算机学报

A survey of Web-based question answering systems, 李舟军, 李水华 - 计算机科学, 2017

Applications of intelligent question answering systems in medicine, 贺佳, 杜建强, 聂斌, 熊旺平, 罗计根 - 医学信息, 2018

Open-domain Chinese question-answer retrieval based on rough-set knowledge discovery, 韩朝, 苗夺谦, 任福继, 张红云 - 计算机研究与发展, 201

Datasets

MCTest
http://research.microsoft.com/en-us/um/redmond/projects/mctest/data.html

bAbI
https://research.fb.com/projects/babi/

WikiQA
https://www.microsoft.com/en-us/download/details.aspx?id=52419

SNLI
http://nlp.stanford.edu/projects/snli/

Children’s Book Test
https://research.fb.com/projects/babi/

BookTest
https://ibm.ent.box.com/v/booktest-v1

CNN / Daily Mail
http://cs.nyu.edu/~kcho/DMQA/

Who Did What
https://tticnlp.github.io/who_did_what/download.html

NewsQA
http://datasets.maluuba.com/NewsQA

SQuAD
https://rajpurkar.github.io/SQuAD-explorer/

LAMBADA
http://clic.cimec.unitn.it/lambada/

MS MARCO
http://www.msmarco.org/dataset.aspx

WikiMovies
https://research.fb.com/projects/babi/

WikiReading
https://github.com/dmorr-google/wiki-reading

CoQA: A conversational question answering challenge.

SemEval-2018 Task 11: Machine comprehension using commonsense knowledge.

ODSQA: Open-domain spoken question answering dataset.

Looking beyond the surface: A challenge set for reading comprehension over multiple sentences

SearchQA: A new Q&A dataset augmented with context from a search engine

Papers

Teaching Machines to Read and Comprehend
[https://arxiv.org/abs/1506.03340]

Learning to Ask: Neural Question Generation for Reading Comprehension
https://arxiv.org/pdf/1705.00106.pdf

Attention-over-Attention Neural Networks for Reading Comprehension
https://arxiv.org/pdf/1607.04423.pdf

R-NET: MACHINE READING COMPREHENSION WITH SELF-MATCHING NETWORKS
https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf

Mnemonic Reader for Machine Comprehension
https://arxiv.org/pdf/1705.02798.pdf

TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
https://arxiv.org/pdf/1705.03551.pdf

S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
https://arxiv.org/pdf/1706.04815.pdf

RACE: Large-scale ReAding Comprehension Dataset From Examinations
https://arxiv.org/pdf/1704.04683.pdf

Adversarial Examples for Evaluating Reading Comprehension Systems
https://arxiv.org/pdf/1707.07328.pdf

Machine comprehension using match-lstm and answer pointer
[https://arxiv.org/pdf/1608.07905]

Multi-perspective context matching for machine comprehension
[https://arxiv.org/abs/1612.04211]

Reasonet: Learning to stop reading in machine comprehension
[http://dl.acm.org/citation.cfm?id=3098177]

Learning recurrent span representations for extractive question answering
[https://arxiv.org/abs/1611.01436]

End-to-end answer chunk extraction and ranking for reading comprehension
[https://arxiv.org/abs/1610.09996]

Words or characters? fine-grained gating for reading comprehension
[https://arxiv.org/abs/1611.01724]

Reading Wikipedia to Answer Open-Domain Questions
[https://arxiv.org/abs/1704.00051]

An analysis of prerequisite skills for reading comprehension
[http://www.aclweb.org/anthology/W/W16/W16-60.pdf#page=13]

A Comparative Study of Word Embeddings for Reading Comprehension
https://arxiv.org/pdf/1703.00993.pdf

Chen et al., 2017; Reading Wikipedia to answer open-domain questions.

Huang et al., 2018; FlowQA: Grasping flow in history for conversational machine comprehension

Yu et al., 2018; QANet: Combining local convolution with global self-attention for reading comprehension.

Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, and Guoping Hu. 2019. Convolutional spatial attention model for reading comprehension with multiple-choice questions.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-over-attention neural networks for reading comprehension.

Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension.

Some models:

1 Memory Networks
1.1 Memory Network
1.2 End-To-End Memory Networks
1.3 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
1.4 Key-Value Memory Networks for Directly Reading Documents
1.5 The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations
1.6 Can Active Memory Replace Attention?
2 DeepMind Attentive Reader
2.1 Teaching Machines to Read and Comprehend
3 Danqi’s Stanford Reader
3.1 A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
4 Attention Sum Reader
5 Gated Attention Sum Reader
6 Attention Over Attention Reader
6.1 Attention-over-Attention Neural Networks for Reading Comprehension
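As a quick reminder of what the Attention-over-Attention Reader (item 6) computes, here is a minimal NumPy sketch of its scoring step. This is an illustrative reconstruction from the paper's description, not the authors' code: shapes, the `softmax` helper, and the function name are assumptions, and the contextual embeddings `D` and `Q` would normally come from bidirectional RNN encoders.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aoa_scores(D, Q):
    """Attention-over-attention score for each document word.

    D: (doc_len, hidden) contextual document embeddings.
    Q: (query_len, hidden) contextual query embeddings.
    Returns a (doc_len,) vector of non-negative scores summing to 1.
    """
    M = D @ Q.T                  # pairwise matching matrix (doc_len, query_len)
    alpha = softmax(M, axis=0)   # query-to-document attention (per query word)
    beta = softmax(M, axis=1)    # document-to-query attention (per doc word)
    b = beta.mean(axis=0)        # "attention over attention": averaged query-level weights
    return alpha @ b             # weighted sum of document-side attention columns
```

In the cloze setting, the per-word scores are then summed over all positions where a candidate answer word occurs (the attention-sum step), and the highest-scoring candidate is predicted.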