Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders


Automatic chat summarization helps people quickly absorb the crucial information in a large volume of chat messages. Unlike conventional documents, chat logs usually feature fragmented and shifting topics. Moreover, because these logs contain many elliptical and interrogative sentences, summarizing a conversation is highly context dependent.

In this paper, the authors propose RankAE, a novel unsupervised framework that performs conversation summarization without any explicitly labelled data. RankAE consists of a topic-oriented ranking strategy that selects topic utterances according to centrality and diversity simultaneously, and a carefully designed denoising auto-encoder that generates succinct but context-informative summaries based on the selected utterances. By drawing on both the extractive and abstractive paradigms, RankAE addresses the topic-shift problem in chat logs as well as the information-integrity problem of individual utterances.

Fig 1 : RankAE Framework

RankAE has two components: a topic utterance extractor and a denoising auto-encoder (DAE) (Vincent et al. 2008). Each utterance, together with its surrounding utterances, forms a chat segment. During training, the extractor learns to predict a relevance score for each utterance pair. Meanwhile, noisy content is injected into the chat segments, which the DAE generator is trained to recover. At inference time, topic utterances are chosen by a diversity-enhanced ranking algorithm based on the relevance scores. The auto-encoder then compresses every topic segment (the chat segment of a topic utterance) by filtering out unnecessary information.
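The selection step can be sketched as a greedy procedure that balances centrality against redundancy, in the spirit of maximal marginal relevance. Everything below is illustrative: the bag-of-words cosine similarity and the `lam` trade-off are stand-ins for RankAE's learned relevance scores, which the paper computes with a trained extractor.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def select_topic_utterances(utterances, k, lam=0.7):
    """Greedily pick k utterances, trading centrality against redundancy."""
    bows = [Counter(u.split()) for u in utterances]
    # Centrality: mean similarity to every utterance in the chat.
    centrality = [sum(cosine(b, o) for o in bows) / len(bows) for b in bows]
    selected = []
    while len(selected) < min(k, len(utterances)):
        best, best_score = None, float("-inf")
        for i in range(len(utterances)):
            if i in selected:
                continue
            # Diversity penalty: similarity to the closest already-picked utterance.
            redundancy = max((cosine(bows[i], bows[j]) for j in selected),
                             default=0.0)
            score = lam * centrality[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [utterances[i] for i in sorted(selected)]
```

Raising `lam` favours central utterances; lowering it pushes the selection towards covering more distinct topics, mirroring the centrality/diversity trade-off described above.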

To evaluate the proposed approach, the authors collect a large-scale dataset of chat logs from a customer-service setting and build an annotated set specifically for model evaluation. Experiments show that RankAE outperforms other unsupervised methods in terms of relevance and topic coverage and can produce high-quality summaries.

For model evaluation, the authors randomly sampled 1,000 chat logs for summary annotation, split into 500 validation and 500 test examples; the remaining chat logs were left unlabelled and used for training. Three skilled, impartial professionals annotated all gold summaries under the same criterion. The annotators first extracted topic points from each conversation, such as price and logistics. Each topic point was then expanded into a brief, complete sentence describing the essential idea conveyed in the original dialogue, yielding a sub-summary. The final summary is the concatenation of these sub-summaries.

RankAE's ability to extract topic utterances

To evaluate the approaches, the authors employ ROUGE (Lin 2004) and BLEU (Papineni et al. 2002). ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L) F-scores measure the overlap in unigrams, bigrams, and longest common subsequences between the reference and predicted summaries. BLEU assesses n-gram precision; the authors report results averaged over a maximum of 4-grams.
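As a rough illustration of what ROUGE-1 measures, the unigram-overlap F-score can be computed as follows. This is a simplified sketch: the official ROUGE toolkit additionally handles stemming, tokenization, and multiple references.

```python
from collections import Counter

def rouge_1_f(reference, prediction):
    """Unigram-overlap F-score between a reference and a predicted summary."""
    ref = Counter(reference.split())
    pred = Counter(prediction.split())
    overlap = sum((ref & pred).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(pred.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 is the same computation over bigrams, and ROUGE-L replaces the overlap count with the length of the longest common subsequence.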

The results reveal that RankAE(Ext.) outperforms other extractive approaches on all measures, demonstrating the usefulness of the topic-oriented ranking strategy for conversation summarization. The full framework with the DAE generator improves on RankAE(Ext.) by a substantial margin (+2.53, +1.72, +3.22 on ROUGE-1/2/L). This shows that, beyond the extractive paradigm, the approach can incorporate context information and generate summaries more relevant to the original chat logs.

When combined with BERT, RankAE gains a further advantage. Under a Wilcoxon signed-rank test with p < 0.05, RankAE's results differ significantly from all other approaches (except RankAE-BERT), confirming the usefulness of a framework that benefits from both extractive and abstractive paradigms.

Table 4 presents an example (translated from Chinese) that tests RankAE's ability to extract topic utterances and generate short, context-informative summaries.

The chat is divided into two sections, one dealing with pricing and the other with delivery concerns.

RankAE(Ext.) correctly identifies the two topic-relevant utterances, but crucial information such as 'price' and 'tomorrow' is omitted. Collecting the contexts in each chat segment restores the necessary information, but it also pulls in nonessential phrases and repeated utterances, which are highlighted in red in Table 4. With the DAE, RankAE filters out the irrelevant items and generates a concise yet comprehensive summary.

In summary, the three main contributions are:

1) The authors present a novel neural framework for fully unsupervised chat log summarization.

2) The framework takes advantage of both extractive and abstractive paradigms, capturing critical, topic-diverse content while producing concise, context-aware summaries.

3) Extensive experiments on a large real-world chat log dataset demonstrate the efficacy of the approach from multiple perspectives.

For conversation summarization, the authors compare against a variety of baselines, all operating in unsupervised settings. Lead (Nallapati et al. 2017) simply extracts the first few sentences of a document as the summary and can be regarded as a lower bound for extractive approaches. Oracle (Nallapati et al. 2017) greedily selects the best-performing sentences against the gold summary and denotes the upper bound of extractive techniques. TextRank (Mihalcea and Tarau 2004) converts documents into graphs and selects sentences with a graph-based ranking algorithm.
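A minimal sketch of the TextRank baseline follows, assuming the classic overlap-based sentence similarity and a damped PageRank iteration (the standard damping factor is d = 0.85); sentences with the highest scores would be extracted as the summary.

```python
import math

def similarity(s1, s2):
    """TextRank sentence similarity: shared words, normalised by sentence lengths."""
    w1, w2 = set(s1.split()), set(s2.split())
    if len(w1) < 2 or len(w2) < 2:  # avoid log(1) = 0 in the denominator
        return 0.0
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

def textrank(sentences, d=0.85, iters=50):
    """Damped PageRank over the sentence-similarity graph; one score per sentence."""
    n = len(sentences)
    sim = [[similarity(sentences[i], sentences[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    out_sum = [sum(row) for row in sim]  # total outgoing edge weight per node
    scores = [1.0] * n
    for _ in range(iters):
        scores = [(1 - d) + d * sum(sim[j][i] / out_sum[j] * scores[j]
                                    for j in range(n) if sim[j][i] > 0)
                  for i in range(n)]
    return scores
```

Sentences sharing many words with the rest of the document accumulate high scores, while isolated sentences settle near the baseline value 1 - d.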
