Each article is naturally isolated and has its own `cid` value. But if an article is too long, it has to be split into several chunks.

LangChain's `RecursiveCharacterTextSplitter` was used to create these chunks, which correspond to the `text` value. The parameters used are:
- `chunk_size` = 1024
- `chunk_overlap` = 0
- `length_function` = bge_m3_tokenizer
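
To make these settings concrete, here is a pure-Python sketch of no-overlap recursive splitting. It is an illustration, not the actual pipeline: the real chunking used LangChain's `RecursiveCharacterTextSplitter` with lengths measured by the bge-m3 tokenizer, whereas this sketch measures length in characters (`len`) and assumes a hard-coded separator list.

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ", "")):
    """Sketch of no-overlap recursive splitting: try the coarsest separator
    first, and recurse with finer separators on pieces that are still too long.
    Length is measured in characters here; the dataset used token counts."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep = separators[0]
    finer = separators[1:] if len(separators) > 1 else separators
    parts = list(text) if sep == "" else text.split(sep)
    chunks, current = [], ""
    for part in parts:
        candidate = part if not current else current + sep + part
        if len(candidate) <= chunk_size:
            # The running chunk can absorb this piece.
            current = candidate
        elif len(part) > chunk_size:
            # A single piece is itself too long: flush and recurse deeper.
            if current:
                chunks.append(current)
            current = ""
            chunks.extend(recursive_split(part, chunk_size, finer))
        else:
            # Start a fresh chunk with this piece (no overlap is carried over).
            if current:
                chunks.append(current)
            current = part
    if current:
        chunks.append(current)
    return chunks
```

For example, `recursive_split("aaa bbb ccc", 7)` yields `["aaa bbb", "ccc"]`: the space separator is used once the whole string no longer fits, and no text is repeated between chunks.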

For each chunk (`text`), a `chunk_text` is constructed as follows:

## 🔄 The chunking doesn't fit your use case?
If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).

⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
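
Because the chunks carry no overlap, the core idea can be sketched as grouping rows by `cid` and concatenating their `text` fields in order. This is only an illustration under that assumption — the row dicts below are hypothetical stand-ins for real dataset rows, and the linked notebook remains the reference procedure (for instance, separators dropped between chunks by the splitter are not restored here).

```python
from collections import defaultdict

def rebuild_articles(rows):
    """Rebuild each article by concatenating its chunks in row order.
    Assumes no-overlap chunking and rows already sorted by chunk position;
    `cid` and `text` are the column names described above."""
    articles = defaultdict(list)
    for row in rows:
        articles[row["cid"]].append(row["text"])
    return {cid: "".join(chunks) for cid, chunks in articles.items()}
```

A quick (hypothetical) usage example: two chunks sharing `cid="a"` are joined back into one article, while `cid="b"` passes through unchanged.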