update readme
README.md CHANGED
@@ -8,7 +8,32 @@ A large-scale dataset for cross-document event coreference **search**</br>
 English
 
 ## Load Dataset
-You can read/download the dataset files following Huggingface Hub instructions
+You can read/download the dataset files following Huggingface Hub instructions.<br>
+For example, the code below will load the CoreSearch DPR folder:
+
+```python
+from huggingface_hub import hf_hub_url, cached_download
+import json
+REPO_ID = "datasets/Intel/CoreSearch"
+DPR_FILES = "/dpr/"
+
+dpr_files = ["dpr/Dev.json", "dpr/Train.json", "dpr/Test.json"]
+
+dpr_jsons = list()
+for _file in dpr_files:
+    dpr_jsons.append(json.load(open(cached_download(
+        hf_hub_url(REPO_ID, _file)), "r")))
+```
+
+### Data Splits
+- **Final version of the CD event coreference search dataset**<br>
+| | Train | Valid | Test | Total |
+| ----- | ------ | ----- | ---- | ---- |
+| WEC-Eng Validated Data | | | | |
+| # Clusters | 237 | 49 | 236 | 522 |
+| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |
+| # Added Distractor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |
+| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |
 
 ## Citation
 ```
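
A note on the loading snippet in this commit: newer releases of `huggingface_hub` deprecate `cached_download` in favor of `hf_hub_download`. Below is a minimal, hypothetical equivalent using `hf_hub_download` with `repo_type="dataset"`; the repo and file names are taken from the snippet above, everything else is an editor's assumption rather than part of the commit.

```python
# Editor's sketch, not part of the commit: fetch the same files with
# hf_hub_download, which supersedes the deprecated cached_download.
import json
from huggingface_hub import hf_hub_download

REPO_ID = "Intel/CoreSearch"  # repo_type="dataset" replaces the "datasets/" URL prefix
dpr_files = ["dpr/Dev.json", "dpr/Train.json", "dpr/Test.json"]

dpr_jsons = []
for _file in dpr_files:
    local_path = hf_hub_download(repo_id=REPO_ID, filename=_file, repo_type="dataset")
    with open(local_path, "r", encoding="utf-8") as f:
        dpr_jsons.append(json.load(f))
```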
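
To relate the loaded files to the split table in the diff, a quick follow-up to the sketch above could count records per split. This assumes each DPR file parses to a top-level JSON list of query records, which is an assumption about the schema rather than something stated in the README.

```python
# Hypothetical follow-up to the sketch above (dpr_jsons defined there).
# Assumes each split file is a top-level JSON list; adjust if the schema differs.
for name, split in zip(["Dev", "Train", "Test"], dpr_jsons):
    count = len(split) if isinstance(split, (list, dict)) else "?"
    print(f"{name}: {count} records")
```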