system HF staff committed on
Commit
4788d75
1 Parent(s): 4c1f667

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +184 -0
README.md ADDED
---
---

# Dataset Card for "newsroom"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://summari.es](https://summari.es)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5057.49 MB
- **Total amount of disk used:** 5057.49 MB

### [Dataset Summary](#dataset-summary)

NEWSROOM is a large dataset for training and evaluating summarization systems.
It contains 1.3 million articles and summaries written by authors and
editors in the newsrooms of 38 major publications.

Dataset features include:
- text: Input news text.
- summary: Summary of the news article.

And additional features:
- title: news title.
- url: url of the news.
- date: date of the article.
- density: extractive density (see the sketch below).
- coverage: extractive coverage.
- compression: compression ratio.
- density_bin: low, medium, high.
- coverage_bin: extractive, abstractive.
- compression_bin: low, medium, high.

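The `density`, `coverage`, and `compression` fields are the extractive-fragment statistics introduced in the NEWSROOM paper: coverage is the fraction of summary words that fall inside fragments shared verbatim with the article, density is the average squared fragment length per summary word, and compression is the article-to-summary length ratio. The sketch below is a rough, unoptimized illustration assuming naive whitespace tokenization; it is not the authors' reference implementation.

```python
# Rough sketch of the NEWSROOM extractive statistics (Grusky et al., 2018).
# Assumes naive whitespace tokenization; not the reference implementation.

def extractive_fragments(article_tokens, summary_tokens):
    """Greedily collect the longest token spans shared with the article."""
    fragments, i = [], 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(article_tokens)):
            k = 0
            while (i + k < len(summary_tokens)
                   and j + k < len(article_tokens)
                   and summary_tokens[i + k] == article_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best:  # consume the matched span as one fragment
            fragments.append(summary_tokens[i:i + best])
            i += best
        else:     # summary token never appears in the article
            i += 1
    return fragments

def extractive_stats(article, summary):
    a, s = article.split(), summary.split()
    frags = extractive_fragments(a, s)
    coverage = sum(len(f) for f in frags) / len(s)
    density = sum(len(f) ** 2 for f in frags) / len(s)
    compression = len(a) / len(s)
    return coverage, density, compression
```
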
This dataset can be downloaded upon request. Unzip all the contents
(`train.jsonl`, `dev.jsonl`, `test.jsonl`) into a local folder.

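Since the files are obtained manually, one way to load them with the Hugging Face `datasets` library is to point `data_dir` at the folder holding the three `.jsonl` files; the path below is a placeholder, not part of the dataset.

```python
from datasets import load_dataset

# data_dir should contain train.jsonl, dev.jsonl, and test.jsonl
# (the path below is a placeholder).
dataset = load_dataset("newsroom", data_dir="/path/to/newsroom")

print(dataset["train"][0]["summary"])
```
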
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for the `default` configuration of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5057.49 MB
- **Total amount of disk used:** 5057.49 MB

An example of 'train' looks as follows.
```
{
    "compression": 33.880001068115234,
    "compression_bin": "medium",
    "coverage": 1.0,
    "coverage_bin": "high",
    "date": "200600000",
    "density": 11.720000267028809,
    "density_bin": "extractive",
    "summary": "some summary 1",
    "text": "some text 1",
    "title": "news title 1",
    "url": "url.html"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `date`: a `string` feature.
- `density_bin`: a `string` feature.
- `coverage_bin`: a `string` feature.
- `compression_bin`: a `string` feature.
- `density`: a `float32` feature.
- `coverage`: a `float32` feature.
- `compression`: a `float32` feature.

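Assuming the `dataset` object from the loading sketch above, the declared feature types can be checked at runtime:

```python
# Assumes `dataset` from the loading sketch above.
print(dataset["train"].features)
# The string fields appear as Value(dtype='string') and density,
# coverage, and compression as Value(dtype='float32').
```
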
### [Data Splits Sample Size](#data-splits-sample-size)

| name    |  train | validation |   test |
|---------|-------:|-----------:|-------:|
| default | 995041 |     108837 | 108862 |

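These counts can be sanity-checked against a loaded copy (again assuming the `dataset` object from the loading sketch above):

```python
# Assumes `dataset` from the loading sketch above.
for split, ds in dataset.items():
    print(split, ds.num_rows)
# expected: train 995041, validation 108837, test 108862
```
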
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{N18-1065,
  author    = {Grusky, Max and Naaman, Mor and Artzi, Yoav},
  title     = {NEWSROOM: A Dataset of 1.3 Million Summaries
               with Diverse Extractive Strategies},
  booktitle = {Proceedings of the 2018 Conference of the
               North American Chapter of the Association for
               Computational Linguistics: Human Language Technologies},
  year      = {2018},
}
```

### [Contributions](#contributions)

Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset.