Challenge Cup: Intelligent News

2018, third semester

Headline Generation

"Automatic headline generation is an important research area within text summarization and sentence compression."

A Neural Attention Model for Abstractive Sentence Summarization 2015

This paper maximizes P(y_{i+1} | x, y_c; θ), where x is the input sentence and y_c is the window of previously generated words. It tries three encoders: bag-of-words, convolutional (CNN), and attention-based. The decoder is a feed-forward neural network language model (NNLM).

code for ABS
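A minimal numpy sketch of the attention-based encoder plus one NNLM decoder step. Sizes, random initialization, and the omission of the paper's local smoothing of the source embeddings are my simplifications:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes (not from the paper): vocab V, embed dim d, context window C.
rng = np.random.default_rng(0)
V, d, C, H = 1000, 64, 5, 128
E = rng.normal(scale=0.1, size=(V, d))      # word embeddings
P = rng.normal(scale=0.1, size=(d, d))      # bilinear map for attention
U = rng.normal(scale=0.1, size=(H, C * d))  # NNLM hidden weights
Wv = rng.normal(scale=0.1, size=(V, H))     # output weights from hidden state
Wc = rng.normal(scale=0.1, size=(V, d))     # output weights from encoder context

def attention_encoder(x_ids, yc_ids):
    """Attention-based encoder: attend over source words given the decoder context.
    (ABS also smooths X over a local window; omitted here.)"""
    X = E[x_ids]                     # source embeddings, (n, d)
    yc = E[yc_ids].mean(axis=0)      # pooled context embedding, (d,)
    p = softmax(X @ P @ yc)          # attention weights over source positions
    return p @ X                     # weighted source representation, (d,)

def nnlm_step(x_ids, yc_ids):
    """One decoding step: p(y_{i+1} | x, y_c) from a feed-forward NNLM plus context."""
    h = np.tanh(U @ E[yc_ids].reshape(-1))   # hidden state from the C-word context
    return softmax(Wv @ h + Wc @ attention_encoder(x_ids, yc_ids))

x_ids = rng.integers(0, V, size=20)          # a toy source sentence
yc_ids = rng.integers(0, V, size=C)          # last C generated words
probs = nnlm_step(x_ids, yc_ids)
print(probs.shape, probs.sum())              # (1000,) ~1.0
```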

Controlling Output Length in Neural Encoder-Decoders EMNLP2016

Neural encoder-decoder models have shown great promise in many sequence generation tasks. More importantly, this model can control the length of the generated summary by feeding the seq2seq base model a label that indicates the intended output length, in addition to the source input.

The idea is to add a label: the model controls the summary length by taking a length embedding vector as an extra input.
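A minimal sketch of that length-embedding idea: at every decoder step, the embedding of the remaining target length is concatenated with the previous word embedding, so the model can learn to plan its ending. The simple RNN cell and all sizes are my assumptions (the paper uses an LSTM encoder-decoder):

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, H, MAX_LEN = 1000, 32, 64, 50
E_word = rng.normal(scale=0.1, size=(V, d))           # word embeddings
E_len = rng.normal(scale=0.1, size=(MAX_LEN + 1, d))  # one embedding per remaining length
Wx = rng.normal(scale=0.1, size=(H, 2 * d))           # input->hidden (word ++ length)
Wh = rng.normal(scale=0.1, size=(H, H))               # hidden->hidden
Wo = rng.normal(scale=0.1, size=(V, H))               # hidden->vocab

def decode(desired_len, bos_id=0):
    """Greedy decoding; the length signal is re-fed at each step as `remaining`."""
    h = np.zeros(H)
    y, out = bos_id, []
    for t in range(desired_len):
        remaining = desired_len - t                   # length budget left
        inp = np.concatenate([E_word[y], E_len[remaining]])
        h = np.tanh(Wx @ inp + Wh @ h)
        logits = Wo @ h
        y = int(np.argmax(logits))
        out.append(y)
    return out

print(len(decode(12)))  # 12; training teaches the model where to place EOS
```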

Neural Headline Generation on Abstract Meaning Representation EMNLP2016

This paper utilizes encoder-decoder neural networks to generate abstractive summaries. Abstract meaning representation (AMR) is used to incorporate syntactic and semantic information from the input sentence into the headline generation model.

Neural Headline Generation with Minimum Risk Training 2016

This paper proposes a minimum risk training (MRT) method to directly optimize the evaluation metrics; the results show that optimizing for ROUGE improves test performance.

Neural Headline Generation with Sentence-wise Optimization 2016

Since traditional neural models use maximum likelihood estimation for parameter optimization, the training objective is essentially constrained to the word level rather than the sentence level. To overcome this drawback, the paper employs a minimum risk training strategy, which directly optimizes model parameters at the sentence level with respect to the evaluation metrics and leads to significant improvements for headline generation.
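A minimal sketch of the MRT objective behind the two papers above: the expected risk over a sampled candidate set, with the model distribution renormalized over the sample. The toy ROUGE-1 and the sharpness factor alpha are illustrative stand-ins:

```python
import numpy as np

def rouge1_f(candidate, reference):
    """Tiny unigram-overlap ROUGE-1 F1, enough for illustration."""
    c, r = candidate.split(), reference.split()
    overlap = sum(min(c.count(w), r.count(w)) for w in set(c))
    if overlap == 0:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)

def mrt_risk(logprobs, candidates, reference, alpha=5e-3):
    """Expected risk E_{y'~Q}[Delta(y', y)] over the sampled candidate set."""
    scores = np.asarray(logprobs) * alpha       # sharpened model scores
    q = np.exp(scores - scores.max())
    q /= q.sum()                                # Q(y'|x): renormalized over samples
    delta = np.array([1.0 - rouge1_f(c, reference) for c in candidates])
    return float(q @ delta)                     # minimize this w.r.t. theta

cands = ["police arrest suspect", "a man was seen",
         "police arrest a suspect downtown"]
print(mrt_risk([-2.0, -7.5, -3.1], cands, "police arrest suspect downtown"))
```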

Abstractive Sentence Summarization with Attentive Recurrent Neural Networks 2016

A seq2seq model with a CNN encoder and an attention mechanism achieves good results on abstractive summarization of a single sentence; moreover, the model is extended to use a recurrent neural network as the decoder. The paper reports a ROUGE-1 score of 35.51 on the Gigaword data.
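A minimal sketch of the shift from ABS's feed-forward NNLM decoder to a recurrent one: the attention context is recomputed from the RNN state at every step, so the full generation history, not a fixed window, conditions the attention. The Elman-style cell and all sizes are my assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)
V, d, H = 1000, 32, 64
E = rng.normal(scale=0.1, size=(V, d))
Wx = rng.normal(scale=0.1, size=(H, d))
Wh = rng.normal(scale=0.1, size=(H, H))
Wa = rng.normal(scale=0.1, size=(H, d))   # maps state into source-embedding space
Wo = rng.normal(scale=0.1, size=(V, H + d))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decoder_step(h, y_prev, X):
    """One attentive RNN step. X: (n, d) encoder outputs for the source words."""
    h = np.tanh(Wx @ E[y_prev] + Wh @ h)   # recurrence over the whole history
    a = softmax(X @ (Wa.T @ h))            # attention weights from the current state
    c = a @ X                              # context vector over source positions
    p = softmax(Wo @ np.concatenate([h, c]))
    return h, p

X = rng.normal(scale=0.1, size=(15, d))    # stand-in for CNN encoder outputs
h, y = np.zeros(H), 0
for _ in range(5):
    h, p = decoder_step(h, y, X)
    y = int(np.argmax(p))
```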

Selective Encoding for Abstractive Sentence Summarization ACL2017

This paper selectively encodes words as a process of distilling salient information (it proposes a selective gate to improve attention in abstractive summarization), using a BiGRU encoder and a GRU decoder on top of the selective encoding. The paper points out real problems with the plain attention mechanism: there is no obvious alignment relationship between the source text and the target summary, and the encoder outputs contain noise for the attention.
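A minimal numpy sketch of the selective gate: each BiGRU output h_i is filtered element-wise by a gate computed from h_i and a whole-sentence vector s, so noisy source states are attenuated before attention sees them. The BiGRU itself is stubbed with random states, and the sizes are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, H = 12, 64                               # source length, hidden size per direction
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

Hs = rng.normal(size=(n, 2 * H))            # stand-in for BiGRU outputs [fwd; back]
s = np.concatenate([Hs[-1, :H], Hs[0, H:]]) # sentence vector: last fwd + first back state

Ws = rng.normal(scale=0.1, size=(2 * H, 2 * H))
Us = rng.normal(scale=0.1, size=(2 * H, 2 * H))
b = np.zeros(2 * H)

gate = sigmoid(Hs @ Ws.T + s @ Us.T + b)    # (n, 2H): one gate value per dimension
Hs_selected = Hs * gate                     # h'_i = h_i * sGate_i, fed to attention
print(Hs_selected.shape)                    # (12, 128)
```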

From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach IJCAI2017

This paper proposes a coarse-to-fine approach, which first identifies the important sentences of a document using document summarization techniques, and then exploits a multi-sentence summarization model with hierarchical attention to leverage the important sentences for headline generation.
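A minimal sketch of hierarchical attention over a few extracted sentences: the final weight on each word is its word-level attention rescaled by its sentence's sentence-level attention, then renormalized. Random vectors stand in for encoder states and the decoder query, and this combination rule is a common formulation, not necessarily the paper's exact one:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 32
sent_lens = [6, 9, 4]                        # three "important" sentences (coarse step)
words = [rng.normal(size=(n, d)) for n in sent_lens]  # word-level encoder states
sents = np.stack([w.mean(axis=0) for w in words])     # sentence-level states
query = rng.normal(size=d)                   # decoder state at the current step

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

beta = softmax(sents @ query)                # sentence-level attention
weights = np.concatenate(
    [softmax(w @ query) * beta[j] for j, w in enumerate(words)])
weights /= weights.sum()                     # renormalize word weights across sentences
context = weights @ np.concatenate(words, axis=0)     # (d,) context for the decoder
print(context.shape)
```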

Learning to Explain Ambiguous Headlines of Online News IJCAI2018

table2docs: News Generation

Content Selection for Real-time Sports News Construction from Commentary Texts INLG 2017

Rather than receiving every piece of text of a sports match before constructing the news, as in previous related work, this paper verifies the feasibility of a more challenging setting: generating the news report on the fly by treating the live text input as a stream. The paper designs scoring functions to address the different requirements of the task and uses stream substitution for sentence selection.
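A minimal sketch of stream-substitution selection under a length budget: each arriving commentary sentence gets a score, and once the budget is full a new sentence can only enter by displacing the current lowest-scoring selection. The scoring function here (length as a salience stand-in) is purely illustrative; the paper designs task-specific scoring functions:

```python
import heapq

def select_stream(stream, budget=3, score=lambda s: len(s.split())):
    selected = []                       # min-heap of (score, sentence)
    for sentence in stream:
        sc = score(sentence)
        if len(selected) < budget:
            heapq.heappush(selected, (sc, sentence))
        elif sc > selected[0][0]:       # substitute the weakest selection
            heapq.heapreplace(selected, (sc, sentence))
    return [s for _, s in sorted(selected, reverse=True)]

live = ["Kickoff.",
        "Smith scores a stunning long-range goal in the 12th minute.",
        "Yellow card.",
        "Jones equalizes with a header just before half time."]
print(select_stream(live, budget=2))
```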

Towards Automatic Construction of News Overview Articles by News Synthesis 2017

This paper investigates a new task: automatically constructing an overview article from a given set of news articles about a news event. They propose a news synthesis approach based on passage segmentation, ranking, selection, and merging. In essence, this is still multi-document summarization.
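One way to read the ranking/selection/merging steps is as a greedy redundancy-aware loop. This MMR-style sketch with word-overlap similarity is my stand-in, not the paper's actual method; segmentation is assumed done upstream:

```python
def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def synthesize(passages, relevance, k=2, lam=0.7):
    """Greedily pick k passages, trading relevance against redundancy."""
    chosen, candidates = [], list(passages)
    while candidates and len(chosen) < k:
        best = max(candidates,
                   key=lambda p: lam * relevance[p]
                   - (1 - lam) * max((jaccard(p, c) for c in chosen), default=0.0))
        chosen.append(best)
        candidates.remove(best)
    return " ".join(chosen)             # "merging" reduced to concatenation here

passages = ["the quake struck at dawn", "rescue teams arrived at dawn",
            "officials confirmed the quake magnitude"]
rel = {p: len(p) / 40 for p in passages}   # toy relevance scores
print(synthesize(passages, rel))
```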

Table-to-text Generation by Structure-aware Seq2seq Learning AAAI2018

To encode both the content and the structure of a table, this paper proposes a novel structure-aware seq2seq architecture consisting of a field-gating encoder and a description generator with dual attention. A modified LSTM adds a field gate to incorporate the structured data, and a dual attention mechanism combines attention over both the slot names and the actual slot content.

code
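Separately from the authors' released code above, here is a minimal numpy sketch of the two ideas as I read them: (1) a field gate added to the LSTM cell update so the field (slot-name) embedding z_t modulates the cell state alongside the word input, and (2) dual attention, where content-level and field-level attention weights are multiplied and renormalized. The gate wiring and sizes are my assumptions, not the authors' exact equations:

```python
import numpy as np

rng = np.random.default_rng(5)
d, H = 32, 64
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W = rng.normal(scale=0.1, size=(4 * H, d + H))   # standard LSTM gates (i, f, o, g)
Wl = rng.normal(scale=0.1, size=(2 * H, d))      # field gate + field proposal from z_t

def field_gated_lstm_step(x, z, h, c):
    """One encoder step; x: word embedding, z: field embedding of the same slot."""
    i, f, o, g = np.split(W @ np.concatenate([x, h]), 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    l, gz = np.split(Wl @ z, 2)
    c = f * c + i * g + sigmoid(l) * np.tanh(gz)  # extra field-gated term in the cell
    h = o * np.tanh(c)
    return h, c

x, z = rng.normal(size=d), rng.normal(size=d)
h, c = np.zeros(H), np.zeros(H)
h, c = field_gated_lstm_step(x, z, h, c)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Dual attention over a toy table: content states Hs, field representations Z, query q.
n = 8
Hs, Z = rng.normal(size=(n, H)), rng.normal(size=(n, H))
q = rng.normal(size=H)
a = softmax(Hs @ q) * softmax(Z @ q)             # combine content and field attention
a /= a.sum()
context = a @ Hs                                 # (H,) context for the generator
```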