Updated readme

README.md CHANGED

This is the processed version of [Google's C4 dataset](https://www.tensorflow.org/datasets/catalog/c4).

We prepared five variants of the data: `en`, `en.noclean`, `en.noblocklist`, `realnewslike`, and `multilingual`.

For reference, these are the sizes of the variants:

- `en`: 305GB
- `en.noclean`: 2.3TB
- `en.noblocklist`: 380GB
- `realnewslike`: 15GB
- `multilingual`: 9.7TB

The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
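
If you want a quick look at what the individual documents contain once you have downloaded a variant (see the download instructions below), you can peek at a single record from the command line. This is only a sketch: the shard filename below is illustrative (list the directory after downloading to see the real names), and it assumes each shard is a gzipped file with one JSON document per line.

```bash
# Sketch only: the shard name is illustrative; check the directory listing
# after `git lfs pull` for the actual filenames.
zcat en/c4-train.00000-of-01024.json.gz | head -n 1 | python3 -m json.tool
```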
# How do I download this?

Unfortunately we ran out of time making this into a proper Huggingface dataset, so for now you have to download it with git:

```bash
git clone https://huggingface.co/datasets/allenai/c4
```
This will download 13TB to your local drive. If you want to be more precise with what you are downloading, follow these commands instead:
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs pull --include "en/*"
```
The `git clone` command in this variant will download a bunch of stub files that Git LFS uses, so you can see all the filenames that exist that way. You can then convert the stubs into their real files with `git lfs pull --include "..."`. For example, if you wanted all the Dutch documents from the multilingual set, you would run
```bash
git lfs pull --include "multilingual/c4-nl.*.json.gz"
```
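
The same pattern works for grabbing an entire variant at once. For example, to fetch just the comparatively small `realnewslike` split, assuming its shards sit under a `realnewslike/` directory that mirrors the `multilingual/` layout above, you could run

```bash
# Pull every realnewslike shard (about 15GB) after the sparse clone above.
git lfs pull --include "realnewslike/*"
```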
# Acknowledgements
Big ups to the good folks at [Common Crawl](https://commoncrawl.org) whose data made this possible ([consider donating](http://commoncrawl.org/donate/)!), to Google for creating the code that curates and filters the data, and to Huggingface, who had no issue with hosting these 13TB of data for public download!
### License
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.