Dataset Card for GEM/SciDuet
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
This dataset supports the document-to-slide generation task where a model has to generate presentation slide content from the text of a document.
You can load the dataset via:
import datasets
data = datasets.load_dataset('GEM/SciDuet')
The data loader can be found here.
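To make the schema concrete, here is a sketch of a single loaded example. The field names follow the dataset's schema; the values below are abridged placeholders, not real data.

```python
# Illustrative sketch of one SciDuet example (placeholder values).
example = {
    "gem_id": "GEM-SciDuet-train-1#paper-954#slide-0",
    "paper_id": "954",
    "paper_title": "Incremental Syntactic Language Models for Phrase-based Translation",
    "paper_abstract": "This paper describes ...",
    "paper_content": {"paper_content_id": [0, 1, 2]},
    "paper_headers": {
        "paper_header_number": ["1", "2"],
        "paper_header_content": ["Introduction", "Related Work"],
    },
    "slide_id": "GEM-SciDuet-train-1#paper-954#slide-0",
    "slide_title": "Syntax in Statistical Machine Translation",
    "slide_content_text": "Translation Model vs Language Model ...",
    "target": "Translation Model vs Language Model ...",
    "references": [],
}

# The task: generate `target` (the reference slide text) from the
# paper fields, conditioned on `slide_title`.
print(example["slide_title"])
```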
website
paper
authors
Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
Dataset Overview
Where to find the Data and its Documentation
Webpage
Download
Paper
BibTex
@inproceedings{sun-etal-2021-d2s,
title = "{D}2{S}: Document-to-Slide Generation Via Query-Based Text Summarization",
author = "Sun, Edward and
Hou, Yufang and
Wang, Dakuo and
Zhang, Yunfeng and
Wang, Nancy X. R.",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.111",
doi = "10.18653/v1/2021.naacl-main.111",
pages = "1405--1418",
abstract = "Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming. There has been limited research aiming to automate the document-to-slides generation process and all face a critical challenge: no publicly available dataset for training and benchmarking. In this work, we first contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slides decks from recent years{'} NLP and ML conferences (e.g., ACL). Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach: 1) Use slide titles to retrieve relevant and engaging text, figures, and tables; 2) Summarize the retrieved context into bullet points with long-form question answering. Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.",
}
Has a Leaderboard?
no
Languages and Intended Use
Multilingual?
no
Covered Languages
English
License
apache-2.0: Apache License 2.0
Intended Use
Promote research on the task of document-to-slides generation
Primary Task
Text-to-Slide
Credit
Curation Organization Type(s)
industry
Curation Organization(s)
IBM Research
Dataset Creators
Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
Funding
IBM Research
Who added the Dataset to GEM?
Yufang Hou (IBM Research), Dakuo Wang (IBM Research)
Dataset Structure
How were labels chosen?
The original papers and slides (both in PDF format) are carefully processed with a combination of PDF and image processing toolkits. The text contents from multiple slides that correspond to the same slide title are merged.
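The merging step described above can be sketched as follows. This is a hypothetical helper for illustration, not the actual pipeline code: it groups slide text by title in deck order and concatenates it.

```python
from collections import defaultdict

def merge_slides(slides):
    """slides: list of (title, text) pairs in deck order.
    Returns one merged text string per slide title."""
    merged = defaultdict(list)
    for title, text in slides:
        merged[title].append(text)
    return {title: "\n".join(texts) for title, texts in merged.items()}

deck = [
    ("Results", "BLEU improves"),
    ("Results", "Perplexity drops"),
    ("Questions?", "Thank you"),
]
print(merge_slides(deck)["Results"])
```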
Data Splits
Training, validation and testing data contain 136, 55, and 81 papers from ACL Anthology and their corresponding slides, respectively.
Splitting Criteria
The dataset integrated into GEM is the ACL portion of the whole dataset described in the paper. It contains the full Dev and Test sets and a portion of the Train set. Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code to generate the training dataset from the online ICML/NeurIPS anthologies.
Dataset in GEM
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
SciDuet is the first publicly available dataset for the challenging task of document-to-slides generation, which requires a model to "understand" long-form text, select appropriate content, and generate key points.
Similar Datasets
no
Ability that the Dataset measures
content selection, long-form text understanding and generation
GEM-Specific Curation
Modificatied for GEM?
no
Additional Splits?
no
Getting Started with the Task
Previous Results
Previous Results
Measured Model Abilities
content selection, long-form text understanding, and key-point generation
Metrics
ROUGE
Proposed Evaluation
Automatic evaluation metric: ROUGE. Human evaluation: Readability, Informativeness, Consistency.
- Readability: The generated slide content is coherent, concise, and grammatically correct;
- Informativeness: The generated slide provides sufficient and necessary information that corresponds to the given slide title, regardless of its similarity to the original slide;
- Consistency: The generated slide content is similar to the original author’s reference slide.
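As a rough illustration of the automatic metric, a minimal ROUGE-1 F1 (unigram overlap) computation might look like the sketch below. Real evaluations use a full library implementation with stemming, proper tokenization, and multi-reference handling.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Minimal ROUGE-1 F1: harmonic mean of unigram precision/recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the model generates slide content",
                 "the model generates slides")
print(round(score, 4))  # 0.6667
```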
Previous results available?
yes
Other Evaluation Approaches
ROUGE + Human Evaluation
Relevant Previous Results
Paper "D2S: Document-to-Slide Generation Via Query-Based Text Summarization" reports 20.47, 5.26 and 19.08 for ROUGE-1, ROUGE-2 and ROUGE-L (f-score).
Dataset Curation
Original Curation
Original Curation Rationale
Provide a benchmark dataset for the document-to-slides task.
Sourced from Different Sources
no
Language Data
How was Language Data Obtained?
Other
Data Validation
not validated
Data Preprocessing
Text in papers was extracted with Grobid. Figures and captions were extracted with pdffigures. Text on slides was extracted with the IBM Watson Discovery package and OCR via pytesseract. Figures and tables that appear on both slides and papers were linked through multiscale template matching with OpenCV. Further dataset cleaning was performed with standard string-based heuristics for sentence building, equation and floating-caption removal, and duplicate-line deletion.
Was Data Filtered?
algorithmically
Filter Criteria
The slide content text should not contain additional formatting information such as "*** University".
Structured Annotations
Additional Annotations?
none
Annotation Service?
no
Consent
Any Consent Policy?
yes
Consent Policy Details
The original dataset was open-sourced under Apache-2.0.
Some of the original dataset creators are part of the GEM v2 dataset infrastructure team and take care of integrating this dataset into GEM.
Private Identifying Information (PII)
Contains PII?
yes/very likely
Categories of PII
generic PII
Any PII Identification?
no identification
Maintenance
Any Maintenance Plan?
no
Broader Social Context
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
no
Impact on Under-Served Communities
Addresses needs of underserved Communities?
no
Discussion of Biases
Any Documented Social Biases?
unsure
Considerations for Using the Data
PII Risks and Liability
Licenses
Copyright Restrictions on the Dataset
non-commercial use only
Copyright Restrictions on the Language Data
research use only
Known Technical Limitations