---
task_categories:
- text-generation
- fill-mask
language:
- cs
pretty_name: BUT-LCC
size_categories:
- 10B<n<100B
extra_gated_prompt: "By completing the form below, you acknowledge that the provided data is offered as is. Although we anticipate no problems, you accept full responsibility for any repercussions resulting from the use of this data. Furthermore, you agree that the data must not be utilized for malicious or harmful purposes towards humanity."
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
Usecase: text
I have explicitly checked with my jurisdiction and I confirm that downloading BUT-LCC is legal in the country/region where I am located right now, and for the use case that I have described above: checkbox
You agree to not attempt to determine the identity of individuals in this dataset: checkbox
---
# BUT-LCC Corpus
BUT-LCC (Brno University of Technology Large Czech Collection) is a corpus of Czech texts. It was cleaned using exact deduplication, fuzzy deduplication (via MinHashLSH), an n-gram language model, and an SVM classifier trained on manually labelled data to filter out inappropriate content; a minimal sketch of the fuzzy-deduplication step is given below.
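The fuzzy-deduplication step can be illustrated with a minimal sketch using the `datasketch` library. The shingle size and the 0.8 Jaccard threshold below are illustrative assumptions, not the exact parameters used when building BUT-LCC.

```python
# Minimal fuzzy-deduplication sketch with MinHashLSH (datasketch library).
# Shingle size and similarity threshold are illustrative assumptions, not
# the exact parameters used for BUT-LCC.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from whitespace-token 5-gram shingles."""
    tokens = text.split()
    sig = MinHash(num_perm=num_perm)
    for i in range(max(len(tokens) - 4, 1)):
        sig.update(" ".join(tokens[i:i + 5]).encode("utf8"))
    return sig

corpus = [
    "první ukázkový dokument o korpusu",
    "druhý ukázkový dokument o korpusu",
]

lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for doc_id, text in enumerate(corpus):
    sig = minhash_of(text)
    if lsh.query(sig):  # near-duplicate of an already kept document
        continue
    lsh.insert(str(doc_id), sig)
    kept.append(doc_id)
```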
# <span style="color:blue">Latest Updates</span>
- 06/05/2024 We released a small, manually annotated [dataset of adult content](https://huggingface.co/datasets/BUT-FIT/adult_content_classifier_dataset). A classifier trained on this dataset was used to filter our corpus; see the sketch after this list.
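A minimal sketch of such an SVM filter in scikit-learn is shown below. The TF-IDF feature setup and the use of `decision_function` as an `ugly_score`-style value are our illustrative assumptions, not a description of the exact BUT-LCC pipeline.

```python
# Minimal sketch of an SVM content filter (scikit-learn). The feature setup
# is an illustrative assumption; the actual classifier was trained on the
# manually labelled adult-content dataset linked above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["neškodný ukázkový text", "nevhodný ukázkový text"]  # training docs
labels = [0, 1]  # 1 = inappropriate ("ugly")

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

# Signed margin usable as an "ugly_score"-style value:
score = clf.decision_function(["nějaký nový dokument"])[0]
is_ugly = score > 0.0
```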
## Data Sources
<table>
<thead>
<tr>
<th>Part</th>
<th>GB of text</th>
<th>GB of titles</th>
<th>% of total</th>
</tr>
</thead>
<tbody>
<tr>
<td>CulturaX</td>
<td>157.79</td>
<td>3.85</td>
<td>49</td>
</tr>
<tr>
<td>TenTen-cs-2017</td>
<td>48.97</td>
<td>0.95</td>
<td>15</td>
</tr>
<tr>
<td>BUT_Crawl</td>
<td>25.15</td>
<td>0.8</td>
<td>8</td>
</tr>
<tr>
<td>cswiki-20230101</td>
<td>1.05</td>
<td>0.01</td>
<td>0</td>
</tr>
<tr>
<td>historical</td>
<td>13.47</td>
<td>0.00</td>
<td>4</td>
</tr>
<tr>
<td>hplt</td>
<td>65.55</td>
<td>3.20</td>
<td>21</td>
</tr>
<tr>
<td>idnes_comments</td>
<td>7.38</td>
<td>0.03</td>
<td>2</td>
</tr>
</tbody>
<tfoot>
<tr>
<td><b>Sum</b></td>
<td><b>319.36</b></td>
<td><b>8.84</b></td>
<td></td>
</tr>
</tfoot>
</table>
## Format
The corpus consists of train and test splits, distributed in the JSON Lines (jsonl) format: every sample is a JSON object on its own line.
### Sample Format
```json
{
  "id": "unique identifier",
  "part": "original source",
  "title": "source document title",
  "text": "the document text",
  "ugly": "inappropriate-content flag (type: bool)",
  "ugly_score": "score from the SVM classifier used to filter inappropriate content (type: float)"
}
```
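A minimal loading sketch with the `datasets` library is shown below. Streaming avoids materializing the full corpus; note that the repository ID `BUT-FIT/BUT-LCC` is an assumption based on the other BUT-FIT links on this page, and the dataset is gated, so you must accept the terms and log in first.

```python
# Minimal sketch: stream the corpus and skip samples flagged as inappropriate.
# The repo ID "BUT-FIT/BUT-LCC" is an assumption; after accepting the gating
# terms, log in (e.g. via `huggingface-cli login`) before running this.
from datasets import load_dataset

ds = load_dataset("BUT-FIT/BUT-LCC", split="train", streaming=True)

for sample in ds:
    if sample["ugly"]:  # drop content flagged by the SVM classifier
        continue
    print(sample["id"], sample["part"], sample["text"][:80])
    break
```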
# License Information
- We do not own any of the text from which this data was extracted.
- We license the actual packaging of this data under the Creative Commons CC0 license ("no rights reserved").

Detailed licensing information for the contained corpora (those not crawled by us) is given below.
| Corpus | Licensing Information|
|-----------------|----------------|
| CulturaX | [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX#license-information) |
| TenTen-cs-2017 | [NLP Centre Web Corpus License Agreement](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4835) |
| Czech Wikipedia | [CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en) |
| Historical | Documents from 1850 onward, publicly available from the [Czech Digital Library](https://www.digitalniknihovna.cz/) and OCR'd with our PeroOCR |
| HPLT | [https://hplt-project.org/datasets/v1.2](https://hplt-project.org/datasets/v1.2) |
## Our Models Linked to This Dataset
- [BUT-FIT/CSMPT7B](https://huggingface.co/BUT-FIT/csmpt7b)
- [BUT-FIT/CSTinyLlama-1.2B](https://huggingface.co/BUT-FIT/CSTinyLlama-1.2B)
- [BUT-FIT/Czech-GPT-2-XL-133k](https://huggingface.co/BUT-FIT/Czech-GPT-2-XL-133k)
## Statistics
<table>
<thead>
<tr>
<th>Split</th>
<th>Samples</th>
</tr>
</thead>
<tbody>
<tr>
<td>Train</td>
<td>176 780 582</td>
</tr>
<tr>
<td>Test</td>
<td>20 000</td>
</tr>
</tbody>
</table>
## ID-to-URL Mapping
If you need to recover the original web pages, we provide an ID-to-source-URL mapping, where available, in the `id2url.csv` file.
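A minimal lookup sketch is shown below. It assumes `id2url.csv` has a header with `id` and `url` columns; verify the actual column names in the file before use.

```python
# Minimal sketch: map a sample id back to its source URL via id2url.csv.
# The column names ("id", "url") are assumptions; check the CSV header.
import csv

with open("id2url.csv", newline="", encoding="utf8") as f:
    id2url = {row["id"]: row["url"] for row in csv.DictReader(f)}

url = id2url.get("some-sample-id")  # hypothetical sample identifier
if url is not None:
    print("source:", url)
```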
# Acknowledgement
This work was supported by the NAKI III program of the Ministry of Culture of the Czech Republic, project semANT
"Sémantický průzkumník textového kulturního dědictví" ("Semantic Explorer of Textual Cultural Heritage"), grant no. `DH23P03OVV060`, and
by the Ministry of Education, Youth and Sports of the Czech Republic through e-INFRA CZ (ID: `90254`).
# Contributors
- [Jan Doležal](https://www.fit.vut.cz/person/idolezal/.en) developed the cleaning pipeline for text processing, collected data for cleaning, and analyzed the cutoff threshold for pruning.
- [Martin Dočkal](https://www.fit.vut.cz/person/idocekal/.en) uploaded the data to Hugging Face and helped with the cutoff analysis.
- [Martin Fajčík](https://mfajcik.github.io/) reviewed existing corpora, planned the pipeline steps, processed the data for LM training, and verified its usefulness.
- [Martin Kišš](https://www.fit.vut.cz/person/ikiss/.en) downloaded the historical documents and ran our PeroOCR on the collection.
- [Karel Beneš](https://www.fit.vut.cz/person/ibenes/.en) cleaned the historical documents and created the n-gram language model for document filtering.
- [Karel Ondřej](https://www.fit.vut.cz/person/ondrej/.en) wrote the crawler for collecting BUT_Crawl and prepared a preliminary clean version of the corpus.
- [Michal Hradiš](https://www.fit.vut.cz/person/ihradis/.en) managed the work and pushed the members when necessary.