Neal Caren committed
Commit ab6c0ca · 0 Parent(s):

Add OCR scripts collection with fixed deepseek-ocr-vllm dependencies

Files changed (12)
  1. README.md +391 -0
  2. deepseek-ocr-vllm.py +692 -0
  3. deepseek-ocr.py +604 -0
  4. dots-ocr.py +553 -0
  5. lighton-ocr.py +639 -0
  6. nanonets-ocr.py +507 -0
  7. nanonets-ocr2.py +514 -0
  8. numarkdown-ocr.py +683 -0
  9. olmocr2-vllm.py +636 -0
  10. paddleocr-vl.py +699 -0
  11. rolm-ocr.py +517 -0
  12. smoldocling-ocr.py +580 -0
README.md ADDED
@@ -0,0 +1,391 @@
1
+ ---
2
+ viewer: false
3
+ tags: [uv-script, ocr, vision-language-model, document-processing]
4
+ ---
5
+
6
+ # OCR UV Scripts
7
+
8
+ > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV
9
+
10
+ Ready-to-run OCR scripts that work with `uv run` - no setup required!
11
+
12
+ ## 🚀 Quick Start with HuggingFace Jobs
13
+
14
+ Run OCR on any dataset without needing your own GPU:
15
+
16
+ ```bash
17
+ # Quick test with 10 samples
18
+ hf jobs uv run --flavor l4x1 \
19
+ --secrets HF_TOKEN \
20
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
21
+ your-input-dataset your-output-dataset \
22
+ --max-samples 10
23
+ ```
24
+
25
+ That's it! The script will:
26
+
27
+ - ✅ Process first 10 images from your dataset
28
+ - ✅ Add OCR results as a new `markdown` column
29
+ - ✅ Push the results to a new dataset
30
+ - 📊 View results at: `https://huggingface.co/datasets/[your-output-dataset]`
31
+
32
+ ## 📋 Available Scripts
33
+
34
+ ### LightOnOCR (`lighton-ocr.py`) ⚡ Good one to test first since it's small and fast!
35
+
36
+ Fast and compact OCR using [lightonai/LightOnOCR-1B-1025](https://huggingface.co/lightonai/LightOnOCR-1B-1025):
37
+
38
+ - ⚡ **Fastest**: 5.71 pages/sec on H100, ~6.25 images/sec on A100 with batch_size=4096
39
+ - 🎯 **Compact**: Only 1B parameters - quick to download and initialize
40
+ - 🌍 **Multilingual**: 3 vocabulary sizes for different use cases
41
+ - 📐 **LaTeX formulas**: Mathematical notation in LaTeX format
42
+ - 📊 **Table extraction**: Markdown table format
43
+ - 📝 **Document structure**: Preserves hierarchy and layout
44
+ - 🚀 **Production-ready**: 76.1% benchmark score, used in production
45
+
46
+ **Vocabulary sizes:**
47
+ - `151k`: Full vocabulary, all languages (default)
48
+ - `32k`: European languages, ~12% faster decoding
49
+ - `16k`: European languages, ~12% faster decoding
50
+
51
+ **Quick start:**
52
+ ```bash
53
+ # Test on 100 samples with English text (32k vocab is fastest for European languages)
54
+ hf jobs uv run --flavor l4x1 \
55
+ -s HF_TOKEN \
56
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
57
+ your-input-dataset your-output-dataset \
58
+ --vocab-size 32k \
59
+ --batch-size 32 \
60
+ --max-samples 100
61
+
62
+ # Full production run on A100 (can handle huge batches!)
63
+ hf jobs uv run --flavor a100-large \
64
+ -s HF_TOKEN \
65
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
66
+ your-input-dataset your-output-dataset \
67
+ --vocab-size 32k \
68
+ --batch-size 4096 \
69
+ --temperature 0.0
70
+ ```
71
+
72
+ ### DeepSeek-OCR (`deepseek-ocr-vllm.py`) ⭐ NEW
73
+
74
+ Advanced document OCR using [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) with visual-text compression:
75
+
76
+ - 📐 **LaTeX equations** - Mathematical formulas in LaTeX format
77
+ - 📊 **Tables** - Extracted as HTML/markdown
78
+ - 📝 **Document structure** - Headers, lists, formatting preserved
79
+ - 🖼️ **Image grounding** - Spatial layout with bounding boxes
80
+ - 🔍 **Complex layouts** - Multi-column and hierarchical structures
81
+ - 🌍 **Multilingual** - Multiple language support
82
+ - 🎚️ **Resolution modes** - 5 presets for speed/quality trade-offs
83
+ - 💬 **Prompt modes** - 5 presets for different OCR tasks
84
+ - ⚡ **Fast batch processing** - vLLM acceleration
85
+
86
+ **Resolution Modes:**
87
+ - `tiny` (512×512): Fast, 64 vision tokens
88
+ - `small` (640×640): Balanced, 100 vision tokens
89
+ - `base` (1024×1024): High quality, 256 vision tokens
90
+ - `large` (1280×1280): Maximum quality, 400 vision tokens
91
+ - `gundam` (dynamic): Adaptive multi-tile (default)
92
+
93
+ **Prompt Modes:**
94
+ - `document`: Convert to markdown with grounding (default)
95
+ - `image`: OCR any image with grounding
96
+ - `free`: Fast OCR without layout
97
+ - `figure`: Parse figures from documents
98
+ - `describe`: Detailed image descriptions
99
+
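+ For reference, these presets correspond to the prompt strings defined in `deepseek-ocr-vllm.py`:
+
+ ```python
+ # Prompt presets as defined in deepseek-ocr-vllm.py
+ PROMPT_MODES = {
+     "document": "<image>\n<|grounding|>Convert the document to markdown.",
+     "image": "<image>\n<|grounding|>OCR this image.",
+     "free": "<image>\nFree OCR.",
+     "figure": "<image>\nParse the figure.",
+     "describe": "<image>\nDescribe this image in detail.",
+ }
+ ```
+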
100
+ ### RolmOCR (`rolm-ocr.py`)
101
+
102
+ Fast general-purpose OCR using [reducto/RolmOCR](https://huggingface.co/reducto/RolmOCR) based on Qwen2.5-VL-7B:
103
+
104
+ - 🚀 **Fast extraction** - Optimized for speed and efficiency
105
+ - 📄 **Plain text output** - Clean, natural text representation
106
+ - 💪 **General-purpose** - Works well on various document types
107
+ - 🔥 **Large context** - Handles up to 16K tokens
108
+ - ⚡ **Batch optimized** - Efficient processing with vLLM
109
+
110
+ ### Nanonets OCR (`nanonets-ocr.py`)
111
+
112
+ State-of-the-art document OCR using [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) that handles:
113
+
114
+ - 📐 **LaTeX equations** - Mathematical formulas preserved
115
+ - 📊 **Tables** - Extracted as HTML format
116
+ - 📝 **Document structure** - Headers, lists, formatting maintained
117
+ - 🖼️ **Images** - Captions and descriptions included
118
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
119
+
120
+ ### Nanonets OCR2 (`nanonets-ocr2.py`)
121
+
122
+ Next-generation Nanonets OCR using [nanonets/Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-3B) with improved accuracy:
123
+
124
+ - 🎯 **Enhanced quality** - 3.75B parameters for superior OCR accuracy
125
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
126
+ - 📊 **Advanced tables** - Improved HTML table extraction
127
+ - 📝 **Document structure** - Headers, lists, formatting maintained
128
+ - 🖼️ **Smart image captions** - Intelligent descriptions and captions
129
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
130
+ - 🌍 **Multilingual** - Enhanced language support
131
+ - 🔧 **Based on Qwen2.5-VL** - Built on state-of-the-art vision-language model
132
+
133
+ ### SmolDocling (`smoldocling-ocr.py`)
134
+
135
+ Ultra-compact document understanding using [ds4sd/SmolDocling-256M-preview](https://huggingface.co/ds4sd/SmolDocling-256M-preview) with only 256M parameters:
136
+
137
+ - 🏷️ **DocTags format** - Efficient XML-like representation
138
+ - 💻 **Code blocks** - Preserves indentation and syntax
139
+ - 🔢 **Formulas** - Mathematical expressions with layout
140
+ - 📊 **Tables & charts** - Structured data extraction
141
+ - 📐 **Layout preservation** - Bounding boxes and spatial info
142
+ - ⚡ **Ultra-fast** - Tiny model size for quick inference
143
+
144
+ ### NuMarkdown (`numarkdown-ocr.py`)
145
+
146
+ Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggingface.co/numind/NuMarkdown-8B-Thinking) that analyzes documents before converting to markdown:
147
+
148
+ - 🧠 **Reasoning Process** - Thinks through document layout before generation
149
+ - 📊 **Complex Tables** - Superior table extraction and formatting
150
+ - 📐 **Mathematical Formulas** - Accurate LaTeX/math notation preservation
151
+ - 🔍 **Multi-column Layouts** - Handles complex document structures
152
+ - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`
153
+
154
+ ### DoTS.ocr (`dots-ocr.py`)
155
+
156
+ Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:
157
+
158
+ - 🌍 **100+ Languages** - Extensive multilingual support
159
+ - 📝 **Simple OCR** - Clean text extraction (default mode)
160
+ - 📊 **Layout Analysis** - Optional structured output with bboxes and categories
161
+ - 📐 **Formula recognition** - LaTeX format support
162
+ - 🎯 **Compact** - Only 1.7B parameters, efficient on smaller GPUs
163
+ - 🔀 **Flexible prompts** - Switch between OCR, layout-all, and layout-only modes
164
+
165
+ ### olmOCR2 (`olmocr2-vllm.py`)
166
+
167
+ High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
168
+
169
+ - 🎯 **High accuracy** - 82.4 ± 1.1 on olmOCR-Bench (84.9% on math)
170
+ - 📐 **LaTeX equations** - Mathematical formulas in LaTeX format
171
+ - 📊 **Table extraction** - Structured table recognition
172
+ - 📑 **Multi-column layouts** - Complex document structures
173
+ - 🗜️ **FP8 quantized** - Efficient 8B model for faster inference
174
+ - 📜 **Degraded scans** - Works well on old/historical documents
175
+ - 📝 **Long text extraction** - Headers, footers, and full document content
176
+ - 🧩 **YAML metadata** - Structured front matter (language, rotation, content type)
177
+ - 🚀 **Based on Qwen2.5-VL-7B** - Fine-tuned with reinforcement learning
178
+
179
+
180
+ ## 🆕 New Features
181
+
182
+ ### Multi-Model Comparison Support
183
+
184
+ All scripts now include `inference_info` tracking for comparing multiple OCR models:
185
+
186
+ ```bash
187
+ # First model
188
+ uv run rolm-ocr.py my-dataset my-dataset --max-samples 100
189
+
190
+ # Second model (appends to same dataset)
191
+ uv run nanonets-ocr.py my-dataset my-dataset --max-samples 100
192
+
193
+ # View all models used
194
+ python -c "import json; from datasets import load_dataset; ds = load_dataset('my-dataset'); print(json.loads(ds[0]['inference_info']))"
195
+ ```
196
+
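+ The same check, written out as a small script (the dataset id is a placeholder):
+
+ ```python
+ import json
+
+ from datasets import load_dataset
+
+ # Load the dataset that both OCR runs wrote to ("my-dataset" is a placeholder)
+ ds = load_dataset("my-dataset", split="train")
+
+ # inference_info is a JSON list with one entry per OCR run
+ for info in json.loads(ds[0]["inference_info"]):
+     print(f"{info['model_id']} -> column '{info['column_name']}'")
+ ```
+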
197
+ ### Random Sampling
198
+
199
+ Get representative samples with the new `--shuffle` flag:
200
+
201
+ ```bash
202
+ # Random 50 samples instead of first 50
203
+ uv run rolm-ocr.py ordered-dataset output --max-samples 50 --shuffle
204
+
205
+ # Reproducible random sampling
206
+ uv run nanonets-ocr.py dataset output --max-samples 100 --shuffle --seed 42
207
+ ```
208
+
209
+ ### Automatic Dataset Cards
210
+
211
+ Every OCR run now generates comprehensive dataset documentation including:
212
+ - Model configuration and parameters
213
+ - Processing statistics
214
+ - Column descriptions
215
+ - Reproduction instructions
216
+
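+ The generated card can also be read programmatically; a minimal sketch, assuming you replace the dataset id with your own:
+
+ ```python
+ from huggingface_hub import DatasetCard
+
+ # Load the auto-generated card ("your-output-dataset" is a placeholder)
+ card = DatasetCard.load("your-output-dataset")
+ print(card.data.tags)  # tags added by the script, e.g. ['ocr', 'uv-script', 'generated', ...]
+ print(card.text)       # full markdown body: configuration, statistics, reproduction command
+ ```
+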
217
+ ## 💻 Usage Examples
218
+
219
+ ### Run on HuggingFace Jobs (Recommended)
220
+
221
+ No GPU? No problem! Run on HF infrastructure:
222
+
223
+ ```bash
224
+ # DeepSeek-OCR - Real-world example (National Library of Scotland handbooks)
225
+ hf jobs uv run --flavor a100-large \
226
+ -s HF_TOKEN \
227
+ -e UV_TORCH_BACKEND=auto \
228
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
229
+ NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
230
+ davanstrien/handbooks-deep-ocr \
231
+ --max-samples 100 \
232
+ --shuffle \
233
+ --resolution-mode large
234
+
235
+ # DeepSeek-OCR - Fast testing with tiny mode
236
+ hf jobs uv run --flavor l4x1 \
237
+ -s HF_TOKEN \
238
+ -e UV_TORCH_BACKEND=auto \
239
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
240
+ your-input-dataset your-output-dataset \
241
+ --max-samples 10 \
242
+ --resolution-mode tiny
243
+
244
+ # DeepSeek-OCR - Parse figures from scientific papers
245
+ hf jobs uv run --flavor a100-large \
246
+ -s HF_TOKEN \
247
+ -e UV_TORCH_BACKEND=auto \
248
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
249
+ scientific-papers figures-extracted \
250
+ --prompt-mode figure
251
+
252
+ # Basic OCR job with Nanonets
253
+ hf jobs uv run --flavor l4x1 \
254
+ --secrets HF_TOKEN \
255
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
256
+ your-input-dataset your-output-dataset
257
+
258
+ # DoTS.ocr - Multilingual OCR with compact 1.7B model
259
+ hf jobs uv run --flavor a100-large \
260
+ --secrets HF_TOKEN \
261
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
262
+ davanstrien/ufo-ColPali \
263
+ your-username/ufo-ocr \
264
+ --batch-size 256 \
265
+ --max-samples 1000 \
266
+ --shuffle
267
+
268
+ # Real example with UFO dataset 🛸
269
+ hf jobs uv run \
270
+ --flavor a10g-large \
271
+ --secrets HF_TOKEN \
272
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
273
+ davanstrien/ufo-ColPali \
274
+ your-username/ufo-ocr \
275
+ --image-column image \
276
+ --max-model-len 16384 \
277
+ --batch-size 128
278
+
279
+ # Nanonets OCR2 - Next-gen quality with 3B model
280
+ hf jobs uv run \
281
+ --flavor l4x1 \
282
+ --secrets HF_TOKEN \
283
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \
284
+ your-input-dataset \
285
+ your-output-dataset \
286
+ --batch-size 16
287
+
288
+ # NuMarkdown with reasoning traces for complex documents
289
+ hf jobs uv run \
290
+ --flavor l4x4 \
291
+ --secrets HF_TOKEN \
292
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \
293
+ your-input-dataset your-output-dataset \
294
+ --max-samples 50 \
295
+ --include-thinking \
296
+ --shuffle
297
+
298
+ # olmOCR2 - High-quality OCR with YAML metadata
299
+ hf jobs uv run \
300
+ --flavor a100-large \
301
+ --secrets HF_TOKEN \
302
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \
303
+ your-input-dataset your-output-dataset \
304
+ --batch-size 16 \
305
+ --max-samples 100
306
+
307
+ # Private dataset with custom settings
308
+ hf jobs uv run --flavor l40sx1 \
309
+ --secrets HF_TOKEN \
310
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
311
+ private-input private-output \
312
+ --private \
313
+ --batch-size 32
314
+ ```
315
+
316
+ ### Python API
317
+
318
+ ```python
319
+ from huggingface_hub import run_uv_job
320
+
321
+ job = run_uv_job(
322
+ "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
323
+ args=["input-dataset", "output-dataset", "--batch-size", "16"],
324
+ flavor="l4x1"
325
+ )
326
+ ```
327
+
328
+ ### Run Locally (Requires GPU)
329
+
330
+ ```bash
331
+ # Clone and run
332
+ git clone https://huggingface.co/datasets/uv-scripts/ocr
333
+ cd ocr
334
+ uv run nanonets-ocr.py input-dataset output-dataset
335
+
336
+ # Or run directly from URL
337
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
338
+ input-dataset output-dataset
339
+
340
+ # RolmOCR for fast text extraction
341
+ uv run rolm-ocr.py documents extracted-text
342
+ uv run rolm-ocr.py images texts --shuffle --max-samples 100 # Random sample
343
+
344
+ # Nanonets OCR2 for highest quality
345
+ uv run nanonets-ocr2.py documents ocr-results
346
+
347
+ ```
348
+
349
+ ## 📁 Works With
350
+
351
+ Any HuggingFace dataset containing images - documents, forms, receipts, books, handwriting.
352
+
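+ If your images are still in a local folder, one way to get them onto the Hub first is the `datasets` image-folder loader (a minimal sketch; the folder path and repo id are placeholders):
+
+ ```python
+ from datasets import load_dataset
+
+ # Build an image dataset from a local folder of page scans (path is a placeholder)
+ ds = load_dataset("imagefolder", data_dir="./scans", split="train")
+
+ # Push to the Hub, then pass this id as the input dataset for any OCR script
+ ds.push_to_hub("your-username/my-scans")
+ ```
+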
353
+ ## 🎛️ Configuration Options
354
+
355
+ ### Common Options (All Scripts)
356
+
357
+ | Option | Default | Description |
358
+ | -------------------------- | ------- | ----------------------------- |
359
+ | `--image-column` | `image` | Column containing images |
360
+ | `--batch-size` | `32`/`16`* | Images processed together |
361
+ | `--max-model-len` | `8192`/`16384`** | Max context length |
362
+ | `--max-tokens` | `4096`/`8192`** | Max output tokens |
363
+ | `--gpu-memory-utilization` | `0.8` | GPU memory usage (0.0-1.0) |
364
+ | `--split` | `train` | Dataset split to process |
365
+ | `--max-samples` | None | Limit samples (for testing) |
366
+ | `--private` | False | Make output dataset private |
367
+ | `--shuffle` | False | Shuffle dataset before processing |
368
+ | `--seed` | `42` | Random seed for shuffling |
369
+
370
+ *RolmOCR and DoTS use batch size 16
371
+ **RolmOCR uses 16384 (max context) / 8192 (max output tokens)
372
+
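+ These options are ordinary CLI arguments, so a run combining several of them through the Python API shown earlier might look like this (dataset ids and flavor are placeholders):
+
+ ```python
+ from huggingface_hub import run_uv_job
+
+ # Random 200-sample run, larger batch, private output (ids and flavor are placeholders)
+ job = run_uv_job(
+     "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
+     args=[
+         "input-dataset", "output-dataset",
+         "--max-samples", "200", "--shuffle", "--seed", "123",
+         "--batch-size", "64", "--private",
+     ],
+     flavor="l4x1",
+ )
+ ```
+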
373
+ ### Script-Specific Options
374
+
375
+ **DeepSeek-OCR**:
376
+ - `--resolution-mode`: Quality level - `tiny`, `small`, `base`, `large`, or `gundam` (default)
377
+ - `--prompt-mode`: Task type - `document` (default), `image`, `free`, `figure`, or `describe`
378
+ - `--prompt`: Custom OCR prompt (overrides prompt-mode)
379
+ - `--base-size`, `--image-size`, `--crop-mode`: Override resolution mode manually
380
+ - ⚠️ **Important for HF Jobs**: Add `-e UV_TORCH_BACKEND=auto` for proper PyTorch installation
381
+
382
+ **RolmOCR**:
383
+ - Output column is auto-generated from model name (e.g., `rolmocr_text`)
384
+ - Use `--output-column` to override the default name
385
+
386
+ **DoTS.ocr**:
387
+ - `--prompt-mode`: Choose `ocr` (default), `layout-all`, or `layout-only`
388
+ - `--custom-prompt`: Override with custom prompt text
389
+ - `--output-column`: Output column name (default: `markdown`)
390
+
391
+ 💡 **Performance tip**: Increase batch size for faster processing (e.g., `--batch-size 256` on A100)
deepseek-ocr-vllm.py ADDED
@@ -0,0 +1,692 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm>=0.6.0",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # ]
12
+ # ///
13
+
14
+ """
15
+ Convert document images to markdown using DeepSeek-OCR with vLLM.
16
+
17
+ This script processes images through the DeepSeek-OCR model to extract
18
+ text and structure as markdown, using vLLM for efficient batch processing.
19
+
20
+ NOTE: Uses vLLM nightly wheels from main (PR #27247 now merged). First run
21
+ may take a few minutes to download and install dependencies.
22
+
23
+ Features:
24
+ - Multiple resolution modes (Tiny/Small/Base/Large/Gundam)
25
+ - LaTeX equation recognition
26
+ - Table extraction and formatting
27
+ - Document structure preservation
28
+ - Image grounding and descriptions
29
+ - Multilingual support
30
+ - Batch processing with vLLM for better performance
31
+ """
32
+
33
+ import argparse
34
+ import base64
35
+ import io
36
+ import json
37
+ import logging
38
+ import os
39
+ import sys
40
+ from typing import Any, Dict, List, Union
41
+ from datetime import datetime
42
+
43
+ import torch
44
+ from datasets import load_dataset
45
+ from huggingface_hub import DatasetCard, login
46
+ from PIL import Image
47
+ from toolz import partition_all
48
+ from tqdm.auto import tqdm
49
+ from vllm import LLM, SamplingParams
50
+
51
+ logging.basicConfig(level=logging.INFO)
52
+ logger = logging.getLogger(__name__)
53
+
54
+ # Resolution mode presets
55
+ RESOLUTION_MODES = {
56
+ "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
57
+ "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
58
+ "base": {"base_size": 1024, "image_size": 1024, "crop_mode": False},
59
+ "large": {"base_size": 1280, "image_size": 1280, "crop_mode": False},
60
+ "gundam": {
61
+ "base_size": 1024,
62
+ "image_size": 640,
63
+ "crop_mode": True,
64
+ }, # Dynamic resolution
65
+ }
66
+
67
+ # Prompt mode presets (from DeepSeek-OCR GitHub)
68
+ PROMPT_MODES = {
69
+ "document": "<image>\n<|grounding|>Convert the document to markdown.",
70
+ "image": "<image>\n<|grounding|>OCR this image.",
71
+ "free": "<image>\nFree OCR.",
72
+ "figure": "<image>\nParse the figure.",
73
+ "describe": "<image>\nDescribe this image in detail.",
74
+ }
75
+
76
+
77
+ def check_cuda_availability():
78
+ """Check if CUDA is available and exit if not."""
79
+ if not torch.cuda.is_available():
80
+ logger.error("CUDA is not available. This script requires a GPU.")
81
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
82
+ sys.exit(1)
83
+ else:
84
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
85
+
86
+
87
+ def make_ocr_message(
88
+ image: Union[Image.Image, Dict[str, Any], str],
89
+ prompt: str = "<image>\n<|grounding|>Convert the document to markdown. ",
90
+ ) -> List[Dict]:
91
+ """Create chat message for OCR processing."""
92
+ # Convert to PIL Image if needed
93
+ if isinstance(image, Image.Image):
94
+ pil_img = image
95
+ elif isinstance(image, dict) and "bytes" in image:
96
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
97
+ elif isinstance(image, str):
98
+ pil_img = Image.open(image)
99
+ else:
100
+ raise ValueError(f"Unsupported image type: {type(image)}")
101
+
102
+ # Convert to RGB
103
+ pil_img = pil_img.convert("RGB")
104
+
105
+ # Convert to base64 data URI
106
+ buf = io.BytesIO()
107
+ pil_img.save(buf, format="PNG")
108
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
109
+
110
+ # Return message in vLLM format
111
+ return [
112
+ {
113
+ "role": "user",
114
+ "content": [
115
+ {"type": "image_url", "image_url": {"url": data_uri}},
116
+ {"type": "text", "text": prompt},
117
+ ],
118
+ }
119
+ ]
120
+
121
+
122
+ def create_dataset_card(
123
+ source_dataset: str,
124
+ model: str,
125
+ num_samples: int,
126
+ processing_time: str,
127
+ batch_size: int,
128
+ max_model_len: int,
129
+ max_tokens: int,
130
+ gpu_memory_utilization: float,
131
+ resolution_mode: str,
132
+ base_size: int,
133
+ image_size: int,
134
+ crop_mode: bool,
135
+ image_column: str = "image",
136
+ split: str = "train",
137
+ ) -> str:
138
+ """Create a dataset card documenting the OCR process."""
139
+ model_name = model.split("/")[-1]
140
+
141
+ return f"""---
142
+ tags:
143
+ - ocr
144
+ - document-processing
145
+ - deepseek
146
+ - deepseek-ocr
147
+ - markdown
148
+ - uv-script
149
+ - generated
150
+ ---
151
+
152
+ # Document OCR using {model_name}
153
+
154
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DeepSeek-OCR.
155
+
156
+ ## Processing Details
157
+
158
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
159
+ - **Model**: [{model}](https://huggingface.co/{model})
160
+ - **Number of Samples**: {num_samples:,}
161
+ - **Processing Time**: {processing_time}
162
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
163
+
164
+ ### Configuration
165
+
166
+ - **Image Column**: `{image_column}`
167
+ - **Output Column**: `markdown`
168
+ - **Dataset Split**: `{split}`
169
+ - **Batch Size**: {batch_size}
170
+ - **Resolution Mode**: {resolution_mode}
171
+ - **Base Size**: {base_size}
172
+ - **Image Size**: {image_size}
173
+ - **Crop Mode**: {crop_mode}
174
+ - **Max Model Length**: {max_model_len:,} tokens
175
+ - **Max Output Tokens**: {max_tokens:,}
176
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
177
+
178
+ ## Model Information
179
+
180
+ DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
181
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
182
+ - 📊 **Tables** - Extracted and formatted as HTML/markdown
183
+ - 📝 **Document structure** - Headers, lists, and formatting maintained
184
+ - 🖼️ **Image grounding** - Spatial layout and bounding box information
185
+ - 🔍 **Complex layouts** - Multi-column and hierarchical structures
186
+ - 🌍 **Multilingual** - Supports multiple languages
187
+
188
+ ### Resolution Modes
189
+
190
+ - **Tiny** (512×512): Fast processing, 64 vision tokens
191
+ - **Small** (640×640): Balanced speed/quality, 100 vision tokens
192
+ - **Base** (1024×1024): High quality, 256 vision tokens
193
+ - **Large** (1280×1280): Maximum quality, 400 vision tokens
194
+ - **Gundam** (dynamic): Adaptive multi-tile processing for large documents
195
+
196
+ ## Dataset Structure
197
+
198
+ The dataset contains all original columns plus:
199
+ - `markdown`: The extracted text in markdown format with preserved structure
200
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
201
+
202
+ ## Usage
203
+
204
+ ```python
205
+ from datasets import load_dataset
206
+ import json
207
+
208
+ # Load the dataset
209
+ dataset = load_dataset("{{{{output_dataset_id}}}}", split="{split}")
210
+
211
+ # Access the markdown text
212
+ for example in dataset:
213
+ print(example["markdown"])
214
+ break
215
+
216
+ # View all OCR models applied to this dataset
217
+ inference_info = json.loads(dataset[0]["inference_info"])
218
+ for info in inference_info:
219
+ print(f"Column: {{{{info['column_name']}}}} - Model: {{{{info['model_id']}}}}")
220
+ ```
221
+
222
+ ## Reproduction
223
+
224
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek OCR vLLM script:
225
+
226
+ ```bash
227
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \\
228
+ {source_dataset} \\
229
+ <output-dataset> \\
230
+ --resolution-mode {resolution_mode} \\
231
+ --image-column {image_column}
232
+ ```
233
+
234
+ ## Performance
235
+
236
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
237
+ - **Processing Method**: Batch processing with vLLM (2-3x speedup over sequential)
238
+
239
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
240
+ """
241
+
242
+
243
+ def main(
244
+ input_dataset: str,
245
+ output_dataset: str,
246
+ image_column: str = "image",
247
+ batch_size: int = 8, # Smaller batch size to avoid potential memory issues with DeepSeek-OCR
248
+ model: str = "deepseek-ai/DeepSeek-OCR",
249
+ resolution_mode: str = "gundam",
250
+ base_size: int = None,
251
+ image_size: int = None,
252
+ crop_mode: bool = None,
253
+ max_model_len: int = 8192,
254
+ max_tokens: int = 8192,
255
+ gpu_memory_utilization: float = 0.8,
256
+ prompt_mode: str = "document",
257
+ prompt: str = None,
258
+ hf_token: str = None,
259
+ split: str = "train",
260
+ max_samples: int = None,
261
+ private: bool = False,
262
+ shuffle: bool = False,
263
+ seed: int = 42,
264
+ ):
265
+ """Process images from HF dataset through DeepSeek-OCR model with vLLM."""
266
+
267
+ # Check CUDA availability first
268
+ check_cuda_availability()
269
+
270
+ # Track processing start time
271
+ start_time = datetime.now()
272
+
273
+ # Enable HF_TRANSFER for faster downloads
274
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
275
+
276
+ # Login to HF if token provided
277
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
278
+ if HF_TOKEN:
279
+ login(token=HF_TOKEN)
280
+
281
+ # Determine resolution settings
282
+ if resolution_mode in RESOLUTION_MODES:
283
+ mode_config = RESOLUTION_MODES[resolution_mode]
284
+ final_base_size = (
285
+ base_size if base_size is not None else mode_config["base_size"]
286
+ )
287
+ final_image_size = (
288
+ image_size if image_size is not None else mode_config["image_size"]
289
+ )
290
+ final_crop_mode = (
291
+ crop_mode if crop_mode is not None else mode_config["crop_mode"]
292
+ )
293
+ logger.info(f"Using resolution mode: {resolution_mode}")
294
+ else:
295
+ # Custom mode - require all parameters
296
+ if base_size is None or image_size is None or crop_mode is None:
297
+ raise ValueError(
298
+ f"Invalid resolution mode '{resolution_mode}'. "
299
+ f"Use one of {list(RESOLUTION_MODES.keys())} or specify "
300
+ f"--base-size, --image-size, and --crop-mode manually."
301
+ )
302
+ final_base_size = base_size
303
+ final_image_size = image_size
304
+ final_crop_mode = crop_mode
305
+ resolution_mode = "custom"
306
+
307
+ logger.info(
308
+ f"Resolution: base_size={final_base_size}, "
309
+ f"image_size={final_image_size}, crop_mode={final_crop_mode}"
310
+ )
311
+
312
+ # Determine prompt
313
+ if prompt is not None:
314
+ final_prompt = prompt
315
+ logger.info(f"Using custom prompt")
316
+ elif prompt_mode in PROMPT_MODES:
317
+ final_prompt = PROMPT_MODES[prompt_mode]
318
+ logger.info(f"Using prompt mode: {prompt_mode}")
319
+ else:
320
+ raise ValueError(
321
+ f"Invalid prompt mode '{prompt_mode}'. "
322
+ f"Use one of {list(PROMPT_MODES.keys())} or specify --prompt"
323
+ )
324
+
325
+ logger.info(f"Prompt: {final_prompt}")
326
+
327
+ # Load dataset
328
+ logger.info(f"Loading dataset: {input_dataset}")
329
+ dataset = load_dataset(input_dataset, split=split)
330
+
331
+ # Validate image column
332
+ if image_column not in dataset.column_names:
333
+ raise ValueError(
334
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
335
+ )
336
+
337
+ # Shuffle if requested
338
+ if shuffle:
339
+ logger.info(f"Shuffling dataset with seed {seed}")
340
+ dataset = dataset.shuffle(seed=seed)
341
+
342
+ # Limit samples if requested
343
+ if max_samples:
344
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
345
+ logger.info(f"Limited to {len(dataset)} samples")
346
+
347
+ # Initialize vLLM
348
+ logger.info(f"Initializing vLLM with model: {model}")
349
+ logger.info("This may take a few minutes on first run...")
350
+
351
+ # Add specific parameters for DeepSeek-OCR compatibility
352
+ llm = LLM(
353
+ model=model,
354
+ trust_remote_code=True,
355
+ max_model_len=max_model_len,
356
+ gpu_memory_utilization=gpu_memory_utilization,
357
+ limit_mm_per_prompt={"image": 1},
358
+ enforce_eager=False, # Use torch.compile instead of eager execution
359
+ )
360
+
361
+ sampling_params = SamplingParams(
362
+ temperature=0.0, # Deterministic for OCR
363
+ max_tokens=max_tokens,
364
+ )
365
+
366
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
367
+ logger.info(
368
+ "Using vLLM for batch processing - should be faster than sequential processing"
369
+ )
370
+
371
+ # Process images in batches
372
+ all_markdown = []
373
+
374
+ for batch_indices in tqdm(
375
+ partition_all(batch_size, range(len(dataset))),
376
+ total=(len(dataset) + batch_size - 1) // batch_size,
377
+ desc="DeepSeek-OCR vLLM processing",
378
+ ):
379
+ batch_indices = list(batch_indices)
380
+ batch_images = [dataset[i][image_column] for i in batch_indices]
381
+
382
+ try:
383
+ # Create messages for batch
384
+ batch_messages = [make_ocr_message(img, final_prompt) for img in batch_images]
385
+
386
+ # Process with vLLM
387
+ outputs = llm.chat(batch_messages, sampling_params)
388
+
389
+ # Extract outputs
390
+ for output in outputs:
391
+ text = output.outputs[0].text.strip()
392
+ all_markdown.append(text)
393
+
394
+ except Exception as e:
395
+ logger.error(f"Error processing batch: {e}")
396
+ # Add error placeholders for failed batch
397
+ all_markdown.extend(["[OCR FAILED]"] * len(batch_images))
398
+
399
+ # Calculate processing time
400
+ processing_duration = datetime.now() - start_time
401
+ processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
402
+
403
+ # Add markdown column to dataset
404
+ logger.info("Adding markdown column to dataset")
405
+ dataset = dataset.add_column("markdown", all_markdown)
406
+
407
+ # Handle inference_info tracking
408
+ logger.info("Updating inference_info...")
409
+
410
+ # Check for existing inference_info
411
+ if "inference_info" in dataset.column_names:
412
+ # Parse existing info from first row (all rows have same info)
413
+ try:
414
+ existing_info = json.loads(dataset[0]["inference_info"])
415
+ if not isinstance(existing_info, list):
416
+ existing_info = [existing_info] # Convert old format to list
417
+ except (json.JSONDecodeError, TypeError):
418
+ existing_info = []
419
+ # Remove old column to update it
420
+ dataset = dataset.remove_columns(["inference_info"])
421
+ else:
422
+ existing_info = []
423
+
424
+ # Add new inference info
425
+ new_info = {
426
+ "column_name": "markdown",
427
+ "model_id": model,
428
+ "processing_date": datetime.now().isoformat(),
429
+ "resolution_mode": resolution_mode,
430
+ "base_size": final_base_size,
431
+ "image_size": final_image_size,
432
+ "crop_mode": final_crop_mode,
433
+ "prompt": final_prompt,
434
+ "prompt_mode": prompt_mode if prompt is None else "custom",
435
+ "batch_size": batch_size,
436
+ "max_tokens": max_tokens,
437
+ "gpu_memory_utilization": gpu_memory_utilization,
438
+ "max_model_len": max_model_len,
439
+ "script": "deepseek-ocr-vllm.py",
440
+ "script_version": "1.0.0",
441
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py",
442
+ "implementation": "vllm (batch processing)",
443
+ }
444
+ existing_info.append(new_info)
445
+
446
+ # Add updated inference_info column
447
+ info_json = json.dumps(existing_info, ensure_ascii=False)
448
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
449
+
450
+ # Push to hub
451
+ logger.info(f"Pushing to {output_dataset}")
452
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
453
+
454
+ # Create and push dataset card
455
+ logger.info("Creating dataset card...")
456
+ card_content = create_dataset_card(
457
+ source_dataset=input_dataset,
458
+ model=model,
459
+ num_samples=len(dataset),
460
+ processing_time=processing_time_str,
461
+ batch_size=batch_size,
462
+ max_model_len=max_model_len,
463
+ max_tokens=max_tokens,
464
+ gpu_memory_utilization=gpu_memory_utilization,
465
+ resolution_mode=resolution_mode,
466
+ base_size=final_base_size,
467
+ image_size=final_image_size,
468
+ crop_mode=final_crop_mode,
469
+ image_column=image_column,
470
+ split=split,
471
+ )
472
+
473
+ card = DatasetCard(card_content)
474
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
475
+ logger.info("✅ Dataset card created and pushed!")
476
+
477
+ logger.info("✅ OCR conversion complete!")
478
+ logger.info(
479
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
480
+ )
481
+ logger.info(f"Processing time: {processing_time_str}")
482
+
483
+
484
+ if __name__ == "__main__":
485
+ # Show example usage if no arguments
486
+ if len(sys.argv) == 1:
487
+ print("=" * 80)
488
+ print("DeepSeek-OCR to Markdown Converter (vLLM)")
489
+ print("=" * 80)
490
+ print("\nThis script converts document images to markdown using")
491
+ print("DeepSeek-OCR with vLLM for efficient batch processing.")
492
+ print("\nFeatures:")
493
+ print("- Multiple resolution modes (Tiny/Small/Base/Large/Gundam)")
494
+ print("- LaTeX equation recognition")
495
+ print("- Table extraction and formatting")
496
+ print("- Document structure preservation")
497
+ print("- Image grounding and spatial layout")
498
+ print("- Multilingual support")
499
+ print("- ⚡ Fast batch processing with vLLM (2-3x speedup)")
500
+ print("\nExample usage:")
501
+ print("\n1. Basic OCR conversion (Gundam mode - dynamic resolution):")
502
+ print(" uv run deepseek-ocr-vllm.py document-images markdown-docs")
503
+ print("\n2. High quality mode (Large - 1280×1280):")
504
+ print(
505
+ " uv run deepseek-ocr-vllm.py scanned-pdfs extracted-text --resolution-mode large"
506
+ )
507
+ print("\n3. Fast processing (Tiny - 512×512):")
508
+ print(" uv run deepseek-ocr-vllm.py quick-test output --resolution-mode tiny")
509
+ print("\n4. Parse figures from documents:")
510
+ print(" uv run deepseek-ocr-vllm.py scientific-papers figures --prompt-mode figure")
511
+ print("\n5. Free OCR without layout:")
512
+ print(" uv run deepseek-ocr-vllm.py images text --prompt-mode free")
513
+ print("\n6. Process a subset for testing:")
514
+ print(
515
+ " uv run deepseek-ocr-vllm.py large-dataset test-output --max-samples 10"
516
+ )
517
+ print("\n7. Custom resolution:")
518
+ print(" uv run deepseek-ocr-vllm.py dataset output \\")
519
+ print(" --base-size 1024 --image-size 640 --crop-mode")
520
+ print("\n8. Running on HF Jobs:")
521
+ print(" hf jobs uv run --flavor l4x1 \\")
522
+ print(" -s HF_TOKEN \\")
523
+ print(" -e UV_TORCH_BACKEND=auto \\")
524
+ print(
525
+ " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \\"
526
+ )
527
+ print(" your-document-dataset \\")
528
+ print(" your-markdown-output")
529
+ print("\n" + "=" * 80)
530
+ print("\nFor full help, run: uv run deepseek-ocr-vllm.py --help")
531
+ sys.exit(0)
532
+
533
+ parser = argparse.ArgumentParser(
534
+ description="OCR images to markdown using DeepSeek-OCR (vLLM)",
535
+ formatter_class=argparse.RawDescriptionHelpFormatter,
536
+ epilog="""
537
+ Resolution Modes:
538
+ tiny 512×512 pixels, fast processing (64 vision tokens)
539
+ small 640×640 pixels, balanced (100 vision tokens)
540
+ base 1024×1024 pixels, high quality (256 vision tokens)
541
+ large 1280×1280 pixels, maximum quality (400 vision tokens)
542
+ gundam Dynamic multi-tile processing (adaptive)
543
+
544
+ Prompt Modes:
545
+ document Convert document to markdown with grounding (default)
546
+ image OCR any image with grounding
547
+ free Free OCR without layout preservation
548
+ figure Parse figures from documents
549
+ describe Generate detailed image descriptions
550
+
551
+ Examples:
552
+ # Basic usage with default Gundam mode
553
+ uv run deepseek-ocr-vllm.py my-images-dataset ocr-results
554
+
555
+ # High quality processing
556
+ uv run deepseek-ocr-vllm.py documents extracted-text --resolution-mode large
557
+
558
+ # Fast processing for testing
559
+ uv run deepseek-ocr-vllm.py dataset output --resolution-mode tiny --max-samples 100
560
+
561
+ # Parse figures from a document dataset
562
+ uv run deepseek-ocr-vllm.py scientific-papers figures --prompt-mode figure
563
+
564
+ # Free OCR without layout (fastest)
565
+ uv run deepseek-ocr-vllm.py images text --prompt-mode free
566
+
567
+ # Custom prompt for specific task
568
+ uv run deepseek-ocr-vllm.py dataset output --prompt "<image>\nExtract all table data."
569
+
570
+ # Custom resolution settings
571
+ uv run deepseek-ocr-vllm.py dataset output --base-size 1024 --image-size 640 --crop-mode
572
+
573
+ # With custom batch size for performance tuning
574
+ uv run deepseek-ocr-vllm.py dataset output --batch-size 16 --max-model-len 16384
575
+ """,
576
+ )
577
+
578
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
579
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
580
+ parser.add_argument(
581
+ "--image-column",
582
+ default="image",
583
+ help="Column containing images (default: image)",
584
+ )
585
+ parser.add_argument(
586
+ "--batch-size",
587
+ type=int,
588
+ default=8,
589
+ help="Batch size for processing (default: 8, adjust based on GPU memory)",
590
+ )
591
+ parser.add_argument(
592
+ "--model",
593
+ default="deepseek-ai/DeepSeek-OCR",
594
+ help="Model to use (default: deepseek-ai/DeepSeek-OCR)",
595
+ )
596
+ parser.add_argument(
597
+ "--resolution-mode",
598
+ default="gundam",
599
+ choices=list(RESOLUTION_MODES.keys()) + ["custom"],
600
+ help="Resolution mode preset (default: gundam)",
601
+ )
602
+ parser.add_argument(
603
+ "--base-size",
604
+ type=int,
605
+ help="Base resolution size (overrides resolution-mode)",
606
+ )
607
+ parser.add_argument(
608
+ "--image-size",
609
+ type=int,
610
+ help="Image tile size (overrides resolution-mode)",
611
+ )
612
+ parser.add_argument(
613
+ "--crop-mode",
614
+ action="store_true",
615
+ help="Enable dynamic multi-tile cropping (overrides resolution-mode)",
616
+ )
617
+ parser.add_argument(
618
+ "--max-model-len",
619
+ type=int,
620
+ default=8192,
621
+ help="Maximum model context length (default: 8192)",
622
+ )
623
+ parser.add_argument(
624
+ "--max-tokens",
625
+ type=int,
626
+ default=8192,
627
+ help="Maximum tokens to generate (default: 8192)",
628
+ )
629
+ parser.add_argument(
630
+ "--gpu-memory-utilization",
631
+ type=float,
632
+ default=0.8,
633
+ help="GPU memory utilization (default: 0.8)",
634
+ )
635
+ parser.add_argument(
636
+ "--prompt-mode",
637
+ default="document",
638
+ choices=list(PROMPT_MODES.keys()),
639
+ help="Prompt mode preset (default: document). Use --prompt for custom prompts.",
640
+ )
641
+ parser.add_argument(
642
+ "--prompt",
643
+ help="Custom OCR prompt (overrides --prompt-mode)",
644
+ )
645
+ parser.add_argument("--hf-token", help="Hugging Face API token")
646
+ parser.add_argument(
647
+ "--split", default="train", help="Dataset split to use (default: train)"
648
+ )
649
+ parser.add_argument(
650
+ "--max-samples",
651
+ type=int,
652
+ help="Maximum number of samples to process (for testing)",
653
+ )
654
+ parser.add_argument(
655
+ "--private", action="store_true", help="Make output dataset private"
656
+ )
657
+ parser.add_argument(
658
+ "--shuffle",
659
+ action="store_true",
660
+ help="Shuffle the dataset before processing (useful for random sampling)",
661
+ )
662
+ parser.add_argument(
663
+ "--seed",
664
+ type=int,
665
+ default=42,
666
+ help="Random seed for shuffling (default: 42)",
667
+ )
668
+
669
+ args = parser.parse_args()
670
+
671
+ main(
672
+ input_dataset=args.input_dataset,
673
+ output_dataset=args.output_dataset,
674
+ image_column=args.image_column,
675
+ batch_size=args.batch_size,
676
+ model=args.model,
677
+ resolution_mode=args.resolution_mode,
678
+ base_size=args.base_size,
679
+ image_size=args.image_size,
680
+ crop_mode=args.crop_mode if args.crop_mode else None,
681
+ max_model_len=args.max_model_len,
682
+ max_tokens=args.max_tokens,
683
+ gpu_memory_utilization=args.gpu_memory_utilization,
684
+ prompt_mode=args.prompt_mode,
685
+ prompt=args.prompt,
686
+ hf_token=args.hf_token,
687
+ split=args.split,
688
+ max_samples=args.max_samples,
689
+ private=args.private,
690
+ shuffle=args.shuffle,
691
+ seed=args.seed,
692
+ )
deepseek-ocr.py ADDED
@@ -0,0 +1,604 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "torch",
8
+ # "torchvision",
9
+ # "transformers==4.46.3",
10
+ # "tokenizers==0.20.3",
11
+ # "tqdm",
12
+ # "addict",
13
+ # "matplotlib",
14
+ # "einops",
15
+ # "easydict",
16
+ # ]
17
+ #
18
+ # ///
19
+
20
+ """
21
+ Convert document images to markdown using DeepSeek-OCR with Transformers.
22
+
23
+ This script processes images through the DeepSeek-OCR model to extract
24
+ text and structure as markdown, using the official Transformers API.
25
+
26
+ Features:
27
+ - Multiple resolution modes (Tiny/Small/Base/Large/Gundam)
28
+ - LaTeX equation recognition
29
+ - Table extraction and formatting
30
+ - Document structure preservation
31
+ - Image grounding and descriptions
32
+ - Multilingual support
33
+
34
+ Note: This script processes images sequentially (no batching) using the
35
+ official transformers API. It's slower than vLLM-based scripts but uses
36
+ the well-supported official implementation.
37
+ """
38
+
39
+ import argparse
40
+ import json
41
+ import logging
42
+ import os
43
+ import shutil
44
+ import sys
45
+ from datetime import datetime
46
+ from pathlib import Path
47
+ from typing import Optional
48
+
49
+ import torch
50
+ from datasets import load_dataset
51
+ from huggingface_hub import DatasetCard, login
52
+ from PIL import Image
53
+ from tqdm.auto import tqdm
54
+ from transformers import AutoModel, AutoTokenizer
55
+
56
+ logging.basicConfig(level=logging.INFO)
57
+ logger = logging.getLogger(__name__)
58
+
59
+ # Resolution mode presets
60
+ RESOLUTION_MODES = {
61
+ "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
62
+ "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
63
+ "base": {"base_size": 1024, "image_size": 1024, "crop_mode": False},
64
+ "large": {"base_size": 1280, "image_size": 1280, "crop_mode": False},
65
+ "gundam": {"base_size": 1024, "image_size": 640, "crop_mode": True}, # Dynamic resolution
66
+ }
67
+
68
+
69
+ def check_cuda_availability():
70
+ """Check if CUDA is available and exit if not."""
71
+ if not torch.cuda.is_available():
72
+ logger.error("CUDA is not available. This script requires a GPU.")
73
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
74
+ sys.exit(1)
75
+ else:
76
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
77
+
78
+
79
+ def create_dataset_card(
80
+ source_dataset: str,
81
+ model: str,
82
+ num_samples: int,
83
+ processing_time: str,
84
+ resolution_mode: str,
85
+ base_size: int,
86
+ image_size: int,
87
+ crop_mode: bool,
88
+ image_column: str = "image",
89
+ split: str = "train",
90
+ ) -> str:
91
+ """Create a dataset card documenting the OCR process."""
92
+ model_name = model.split("/")[-1]
93
+
94
+ return f"""---
95
+ tags:
96
+ - ocr
97
+ - document-processing
98
+ - deepseek
99
+ - deepseek-ocr
100
+ - markdown
101
+ - uv-script
102
+ - generated
103
+ ---
104
+
105
+ # Document OCR using {model_name}
106
+
107
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DeepSeek-OCR.
108
+
109
+ ## Processing Details
110
+
111
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
112
+ - **Model**: [{model}](https://huggingface.co/{model})
113
+ - **Number of Samples**: {num_samples:,}
114
+ - **Processing Time**: {processing_time}
115
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
116
+
117
+ ### Configuration
118
+
119
+ - **Image Column**: `{image_column}`
120
+ - **Output Column**: `markdown`
121
+ - **Dataset Split**: `{split}`
122
+ - **Resolution Mode**: {resolution_mode}
123
+ - **Base Size**: {base_size}
124
+ - **Image Size**: {image_size}
125
+ - **Crop Mode**: {crop_mode}
126
+
127
+ ## Model Information
128
+
129
+ DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
130
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
131
+ - 📊 **Tables** - Extracted and formatted as HTML/markdown
132
+ - 📝 **Document structure** - Headers, lists, and formatting maintained
133
+ - 🖼️ **Image grounding** - Spatial layout and bounding box information
134
+ - 🔍 **Complex layouts** - Multi-column and hierarchical structures
135
+ - 🌍 **Multilingual** - Supports multiple languages
136
+
137
+ ### Resolution Modes
138
+
139
+ - **Tiny** (512×512): Fast processing, 64 vision tokens
140
+ - **Small** (640×640): Balanced speed/quality, 100 vision tokens
141
+ - **Base** (1024×1024): High quality, 256 vision tokens
142
+ - **Large** (1280×1280): Maximum quality, 400 vision tokens
143
+ - **Gundam** (dynamic): Adaptive multi-tile processing for large documents
144
+
145
+ ## Dataset Structure
146
+
147
+ The dataset contains all original columns plus:
148
+ - `markdown`: The extracted text in markdown format with preserved structure
149
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
150
+
151
+ ## Usage
152
+
153
+ ```python
154
+ from datasets import load_dataset
155
+ import json
156
+
157
+ # Load the dataset
158
+ dataset = load_dataset("{{{{output_dataset_id}}}}", split="{split}")
159
+
160
+ # Access the markdown text
161
+ for example in dataset:
162
+ print(example["markdown"])
163
+ break
164
+
165
+ # View all OCR models applied to this dataset
166
+ inference_info = json.loads(dataset[0]["inference_info"])
167
+ for info in inference_info:
168
+ print(f"Column: {{{{info['column_name']}}}} - Model: {{{{info['model_id']}}}}")
169
+ ```
170
+
171
+ ## Reproduction
172
+
173
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek OCR script:
174
+
175
+ ```bash
176
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \\
177
+ {source_dataset} \\
178
+ <output-dataset> \\
179
+ --resolution-mode {resolution_mode} \\
180
+ --image-column {image_column}
181
+ ```
182
+
183
+ ## Performance
184
+
185
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
186
+ - **Processing Method**: Sequential (Transformers API, no batching)
187
+
188
+ Note: This uses the official Transformers implementation. For faster batch processing,
189
+ consider using the vLLM version once DeepSeek-OCR is officially supported by vLLM.
190
+
191
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
192
+ """
193
+
194
+
195
+ def process_single_image(
196
+ model,
197
+ tokenizer,
198
+ image: Image.Image,
199
+ prompt: str,
200
+ base_size: int,
201
+ image_size: int,
202
+ crop_mode: bool,
203
+ temp_image_path: str,
204
+ temp_output_dir: str,
205
+ ) -> str:
206
+ """Process a single image through DeepSeek-OCR."""
207
+ # Convert to RGB if needed
208
+ if image.mode != "RGB":
209
+ image = image.convert("RGB")
210
+
211
+ # Save to temp file (model.infer expects a file path)
212
+ image.save(temp_image_path, format="PNG")
213
+
214
+ # Run inference
215
+ result = model.infer(
216
+ tokenizer,
217
+ prompt=prompt,
218
+ image_file=temp_image_path,
219
+ output_path=temp_output_dir, # Need real directory path
220
+ base_size=base_size,
221
+ image_size=image_size,
222
+ crop_mode=crop_mode,
223
+ save_results=False,
224
+ test_compress=False,
225
+ )
226
+
227
+ return result if isinstance(result, str) else str(result)
228
+
229
+
230
+ def main(
231
+ input_dataset: str,
232
+ output_dataset: str,
233
+ image_column: str = "image",
234
+ model: str = "deepseek-ai/DeepSeek-OCR",
235
+ resolution_mode: str = "gundam",
236
+ base_size: Optional[int] = None,
237
+ image_size: Optional[int] = None,
238
+ crop_mode: Optional[bool] = None,
239
+ prompt: str = "<image>\n<|grounding|>Convert the document to markdown.",
240
+ hf_token: str = None,
241
+ split: str = "train",
242
+ max_samples: int = None,
243
+ private: bool = False,
244
+ shuffle: bool = False,
245
+ seed: int = 42,
246
+ ):
247
+ """Process images from HF dataset through DeepSeek-OCR model."""
248
+
249
+ # Check CUDA availability first
250
+ check_cuda_availability()
251
+
252
+ # Track processing start time
253
+ start_time = datetime.now()
254
+
255
+ # Enable HF_TRANSFER for faster downloads
256
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
257
+
258
+ # Login to HF if token provided
259
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
260
+ if HF_TOKEN:
261
+ login(token=HF_TOKEN)
262
+
263
+ # Determine resolution settings
264
+ if resolution_mode in RESOLUTION_MODES:
265
+ mode_config = RESOLUTION_MODES[resolution_mode]
266
+ final_base_size = base_size if base_size is not None else mode_config["base_size"]
267
+ final_image_size = image_size if image_size is not None else mode_config["image_size"]
268
+ final_crop_mode = crop_mode if crop_mode is not None else mode_config["crop_mode"]
269
+ logger.info(f"Using resolution mode: {resolution_mode}")
270
+ else:
271
+ # Custom mode - require all parameters
272
+ if base_size is None or image_size is None or crop_mode is None:
273
+ raise ValueError(
274
+ f"Invalid resolution mode '{resolution_mode}'. "
275
+ f"Use one of {list(RESOLUTION_MODES.keys())} or specify "
276
+ f"--base-size, --image-size, and --crop-mode manually."
277
+ )
278
+ final_base_size = base_size
279
+ final_image_size = image_size
280
+ final_crop_mode = crop_mode
281
+ resolution_mode = "custom"
282
+
283
+ logger.info(
284
+ f"Resolution: base_size={final_base_size}, "
285
+ f"image_size={final_image_size}, crop_mode={final_crop_mode}"
286
+ )
287
+
288
+ # Load dataset
289
+ logger.info(f"Loading dataset: {input_dataset}")
290
+ dataset = load_dataset(input_dataset, split=split)
291
+
292
+ # Validate image column
293
+ if image_column not in dataset.column_names:
294
+ raise ValueError(
295
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
296
+ )
297
+
298
+ # Shuffle if requested
299
+ if shuffle:
300
+ logger.info(f"Shuffling dataset with seed {seed}")
301
+ dataset = dataset.shuffle(seed=seed)
302
+
303
+ # Limit samples if requested
304
+ if max_samples:
305
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
306
+ logger.info(f"Limited to {len(dataset)} samples")
307
+
308
+ # Initialize model
309
+ logger.info(f"Loading model: {model}")
310
+ tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
311
+
312
+ try:
313
+ model_obj = AutoModel.from_pretrained(
314
+ model,
315
+ _attn_implementation="flash_attention_2",
316
+ trust_remote_code=True,
317
+ use_safetensors=True,
318
+ )
319
+ except Exception as e:
320
+ logger.warning(f"Failed to load with flash_attention_2: {e}")
321
+ logger.info("Falling back to standard attention...")
322
+ model_obj = AutoModel.from_pretrained(
323
+ model,
324
+ trust_remote_code=True,
325
+ use_safetensors=True,
326
+ )
327
+
328
+ model_obj = model_obj.eval().cuda().to(torch.bfloat16)
329
+ logger.info("Model loaded successfully")
330
+
331
+ # Process images sequentially
332
+ all_markdown = []
333
+
334
+ logger.info(f"Processing {len(dataset)} images (sequential, no batching)")
335
+ logger.info("Note: This may be slower than vLLM-based scripts")
336
+
337
+ # Create temp directories for image files and output (simple local dirs)
338
+ temp_dir = Path("temp_images")
339
+ temp_dir.mkdir(exist_ok=True)
340
+ temp_image_path = str(temp_dir / "temp_image.png")
341
+
342
+ temp_output_dir = Path("temp_output")
343
+ temp_output_dir.mkdir(exist_ok=True)
344
+
345
+ try:
346
+ for i in tqdm(range(len(dataset)), desc="OCR processing"):
347
+ try:
348
+ image = dataset[i][image_column]
349
+
350
+ # Handle different image formats
351
+ if isinstance(image, dict) and "bytes" in image:
352
+ from io import BytesIO
353
+ image = Image.open(BytesIO(image["bytes"]))
354
+ elif isinstance(image, str):
355
+ image = Image.open(image)
356
+ elif not isinstance(image, Image.Image):
357
+ raise ValueError(f"Unsupported image type: {type(image)}")
358
+
359
+ # Process image
360
+ result = process_single_image(
361
+ model_obj,
362
+ tokenizer,
363
+ image,
364
+ prompt,
365
+ final_base_size,
366
+ final_image_size,
367
+ final_crop_mode,
368
+ temp_image_path,
369
+ str(temp_output_dir),
370
+ )
371
+
372
+ all_markdown.append(result)
373
+
374
+ except Exception as e:
375
+ logger.error(f"Error processing image {i}: {e}")
376
+ all_markdown.append("[OCR FAILED]")
377
+
378
+ finally:
379
+ # Clean up temp directories
380
+ try:
381
+ shutil.rmtree(temp_dir)
382
+ shutil.rmtree(temp_output_dir)
383
+ except:
384
+ pass
385
+
386
+ # Add markdown column to dataset
387
+ logger.info("Adding markdown column to dataset")
388
+ dataset = dataset.add_column("markdown", all_markdown)
389
+
390
+ # Handle inference_info tracking
391
+ logger.info("Updating inference_info...")
392
+
393
+ # Check for existing inference_info
394
+ if "inference_info" in dataset.column_names:
395
+ try:
396
+ existing_info = json.loads(dataset[0]["inference_info"])
397
+ if not isinstance(existing_info, list):
398
+ existing_info = [existing_info]
399
+ except (json.JSONDecodeError, TypeError):
400
+ existing_info = []
401
+ dataset = dataset.remove_columns(["inference_info"])
402
+ else:
403
+ existing_info = []
404
+
405
+ # Add new inference info
406
+ new_info = {
407
+ "column_name": "markdown",
408
+ "model_id": model,
409
+ "processing_date": datetime.now().isoformat(),
410
+ "resolution_mode": resolution_mode,
411
+ "base_size": final_base_size,
412
+ "image_size": final_image_size,
413
+ "crop_mode": final_crop_mode,
414
+ "prompt": prompt,
415
+ "script": "deepseek-ocr.py",
416
+ "script_version": "1.0.0",
417
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py",
418
+ "implementation": "transformers (sequential)",
419
+ }
420
+ existing_info.append(new_info)
421
+
422
+ # Add updated inference_info column
423
+ info_json = json.dumps(existing_info, ensure_ascii=False)
424
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
425
+
426
+ # Push to hub
427
+ logger.info(f"Pushing to {output_dataset}")
428
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
429
+
430
+ # Calculate processing time
431
+ end_time = datetime.now()
432
+ processing_duration = end_time - start_time
433
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
434
+
435
+ # Create and push dataset card
436
+ logger.info("Creating dataset card...")
437
+ card_content = create_dataset_card(
438
+ source_dataset=input_dataset,
439
+ model=model,
440
+ num_samples=len(dataset),
441
+ processing_time=processing_time,
442
+ resolution_mode=resolution_mode,
443
+ base_size=final_base_size,
444
+ image_size=final_image_size,
445
+ crop_mode=final_crop_mode,
446
+ image_column=image_column,
447
+ split=split,
448
+ )
449
+
450
+ card = DatasetCard(card_content)
451
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
452
+ logger.info("✅ Dataset card created and pushed!")
453
+
454
+ logger.info("✅ OCR conversion complete!")
455
+ logger.info(
456
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
457
+ )
458
+
459
+
460
+ if __name__ == "__main__":
461
+ # Show example usage if no arguments
462
+ if len(sys.argv) == 1:
463
+ print("=" * 80)
464
+ print("DeepSeek-OCR to Markdown Converter (Transformers)")
465
+ print("=" * 80)
466
+ print("\nThis script converts document images to markdown using")
467
+ print("DeepSeek-OCR with the official Transformers API.")
468
+ print("\nFeatures:")
469
+ print("- Multiple resolution modes (Tiny/Small/Base/Large/Gundam)")
470
+ print("- LaTeX equation recognition")
471
+ print("- Table extraction and formatting")
472
+ print("- Document structure preservation")
473
+ print("- Image grounding and spatial layout")
474
+ print("- Multilingual support")
475
+ print("\nNote: Sequential processing (no batching). Slower than vLLM scripts.")
476
+ print("\nExample usage:")
477
+ print("\n1. Basic OCR conversion (Gundam mode - dynamic resolution):")
478
+ print(" uv run deepseek-ocr.py document-images markdown-docs")
479
+ print("\n2. High quality mode (Large - 1280×1280):")
480
+ print(" uv run deepseek-ocr.py scanned-pdfs extracted-text --resolution-mode large")
481
+ print("\n3. Fast processing (Tiny - 512×512):")
482
+ print(" uv run deepseek-ocr.py quick-test output --resolution-mode tiny")
483
+ print("\n4. Process a subset for testing:")
484
+ print(" uv run deepseek-ocr.py large-dataset test-output --max-samples 10")
485
+ print("\n5. Custom resolution:")
486
+ print(" uv run deepseek-ocr.py dataset output \\")
487
+ print(" --base-size 1024 --image-size 640 --crop-mode")
488
+ print("\n6. Running on HF Jobs:")
489
+ print(" hf jobs uv run --flavor l4x1 \\")
490
+ print(' --secrets HF_TOKEN \\')
491
+ print(" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \\")
492
+ print(" your-document-dataset \\")
493
+ print(" your-markdown-output")
494
+ print("\n" + "=" * 80)
495
+ print("\nFor full help, run: uv run deepseek-ocr.py --help")
496
+ sys.exit(0)
497
+
498
+ parser = argparse.ArgumentParser(
499
+ description="OCR images to markdown using DeepSeek-OCR (Transformers)",
500
+ formatter_class=argparse.RawDescriptionHelpFormatter,
501
+ epilog="""
502
+ Resolution Modes:
503
+ tiny 512×512 pixels, fast processing (64 vision tokens)
504
+ small 640×640 pixels, balanced (100 vision tokens)
505
+ base 1024×1024 pixels, high quality (256 vision tokens)
506
+ large 1280×1280 pixels, maximum quality (400 vision tokens)
507
+ gundam Dynamic multi-tile processing (adaptive)
508
+
509
+ Examples:
510
+ # Basic usage with default Gundam mode
511
+ uv run deepseek-ocr.py my-images-dataset ocr-results
512
+
513
+ # High quality processing
514
+ uv run deepseek-ocr.py documents extracted-text --resolution-mode large
515
+
516
+ # Fast processing for testing
517
+ uv run deepseek-ocr.py dataset output --resolution-mode tiny --max-samples 100
518
+
519
+ # Custom resolution settings
520
+ uv run deepseek-ocr.py dataset output --base-size 1024 --image-size 640 --crop-mode
521
+ """,
522
+ )
523
+
524
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
525
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
526
+ parser.add_argument(
527
+ "--image-column",
528
+ default="image",
529
+ help="Column containing images (default: image)",
530
+ )
531
+ parser.add_argument(
532
+ "--model",
533
+ default="deepseek-ai/DeepSeek-OCR",
534
+ help="Model to use (default: deepseek-ai/DeepSeek-OCR)",
535
+ )
536
+ parser.add_argument(
537
+ "--resolution-mode",
538
+ default="gundam",
539
+ choices=list(RESOLUTION_MODES.keys()) + ["custom"],
540
+ help="Resolution mode preset (default: gundam)",
541
+ )
542
+ parser.add_argument(
543
+ "--base-size",
544
+ type=int,
545
+ help="Base resolution size (overrides resolution-mode)",
546
+ )
547
+ parser.add_argument(
548
+ "--image-size",
549
+ type=int,
550
+ help="Image tile size (overrides resolution-mode)",
551
+ )
552
+ parser.add_argument(
553
+ "--crop-mode",
554
+ action="store_true",
555
+ help="Enable dynamic multi-tile cropping (overrides resolution-mode)",
556
+ )
557
+ parser.add_argument(
558
+ "--prompt",
559
+ default="<image>\n<|grounding|>Convert the document to markdown.",
560
+ help="Prompt for OCR (default: grounding markdown conversion)",
561
+ )
562
+ parser.add_argument("--hf-token", help="Hugging Face API token")
563
+ parser.add_argument(
564
+ "--split", default="train", help="Dataset split to use (default: train)"
565
+ )
566
+ parser.add_argument(
567
+ "--max-samples",
568
+ type=int,
569
+ help="Maximum number of samples to process (for testing)",
570
+ )
571
+ parser.add_argument(
572
+ "--private", action="store_true", help="Make output dataset private"
573
+ )
574
+ parser.add_argument(
575
+ "--shuffle",
576
+ action="store_true",
577
+ help="Shuffle the dataset before processing (useful for random sampling)",
578
+ )
579
+ parser.add_argument(
580
+ "--seed",
581
+ type=int,
582
+ default=42,
583
+ help="Random seed for shuffling (default: 42)",
584
+ )
585
+
586
+ args = parser.parse_args()
587
+
588
+ main(
589
+ input_dataset=args.input_dataset,
590
+ output_dataset=args.output_dataset,
591
+ image_column=args.image_column,
592
+ model=args.model,
593
+ resolution_mode=args.resolution_mode,
594
+ base_size=args.base_size,
595
+ image_size=args.image_size,
596
+ crop_mode=args.crop_mode if args.crop_mode else None,
597
+ prompt=args.prompt,
598
+ hf_token=args.hf_token,
599
+ split=args.split,
600
+ max_samples=args.max_samples,
601
+ private=args.private,
602
+ shuffle=args.shuffle,
603
+ seed=args.seed,
604
+ )
dots-ocr.py ADDED
@@ -0,0 +1,553 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm>=0.9.1",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # ]
12
+ #
13
+ # ///
14
+
15
+ """
16
+ Convert document images to markdown using DoTS.ocr with vLLM.
17
+
18
+ DoTS.ocr is a compact 1.7B multilingual document parsing model with SOTA performance
19
+ on 100+ languages. This script uses vLLM for efficient batch processing.
20
+
21
+ Features:
22
+ - 🌍 Multilingual support (100+ languages)
23
+ - 📊 Table extraction and formatting
24
+ - 📐 Formula recognition
25
+ - 📝 Layout-aware text extraction
26
+ - 🎯 Compact model (1.7B parameters)
27
+
28
+ Model: rednote-hilab/dots.ocr
29
+ vLLM: Officially tested with 0.9.1+ (native support via PR #24645)
30
+ """
31
+
32
+ import argparse
33
+ import base64
34
+ import io
35
+ import json
36
+ import logging
37
+ import os
38
+ import sys
39
+ from typing import Any, Dict, List, Union
40
+ from datetime import datetime
41
+
42
+ import torch
43
+ from datasets import load_dataset
44
+ from huggingface_hub import DatasetCard, login
45
+ from PIL import Image
46
+ from toolz import partition_all
47
+ from tqdm.auto import tqdm
48
+ from vllm import LLM, SamplingParams
49
+
50
+ logging.basicConfig(level=logging.INFO)
51
+ logger = logging.getLogger(__name__)
52
+
53
+
54
+ # ────────────────────────────────────────────────────────────────
55
+ # DoTS OCR Prompt Templates (from official dots.ocr repo)
56
+ # Source: https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py
57
+ # ────────────────────────────────────────────────────────────────
58
+
59
+ PROMPT_TEMPLATES = {
60
+ "ocr": "Extract the text content from this image.",
61
+
62
+ "layout-all": """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.
63
+
64
+ 1. Bbox format: [x1, y1, x2, y2]
65
+
66
+ 2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].
67
+
68
+ 3. Text Extraction & Formatting Rules:
69
+ - Picture: For the 'Picture' category, the text field should be omitted.
70
+ - Formula: Format its text as LaTeX.
71
+ - Table: Format its text as HTML.
72
+ - All Others (Text, Title, etc.): Format their text as Markdown.
73
+
74
+ 4. Constraints:
75
+ - The output text must be the original text from the image, with no translation.
76
+ - All layout elements must be sorted according to human reading order.
77
+
78
+ 5. Final Output: The entire output must be a single JSON object.""",
79
+
80
+ "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
81
+ }
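+ # The templates above are plain strings keyed by mode, so prompt selection is just a
+ # dict lookup with "ocr" as the fallback (the same logic, plus the --custom-prompt
+ # override, runs inside main() below), e.g.:
+ #   prompt = PROMPT_TEMPLATES.get("layout-all", PROMPT_TEMPLATES["ocr"])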
82
+
83
+
84
+ def check_cuda_availability():
85
+ """Check if CUDA is available and exit if not."""
86
+ if not torch.cuda.is_available():
87
+ logger.error("CUDA is not available. This script requires a GPU.")
88
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
89
+ sys.exit(1)
90
+ else:
91
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
92
+
93
+
94
+ def make_ocr_message(
95
+ image: Union[Image.Image, Dict[str, Any], str],
96
+ prompt: str = PROMPT_TEMPLATES["ocr"],
97
+ ) -> List[Dict]:
98
+ """Create chat message for OCR processing."""
99
+ # Convert to PIL Image if needed
100
+ if isinstance(image, Image.Image):
101
+ pil_img = image
102
+ elif isinstance(image, dict) and "bytes" in image:
103
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
104
+ elif isinstance(image, str):
105
+ pil_img = Image.open(image)
106
+ else:
107
+ raise ValueError(f"Unsupported image type: {type(image)}")
108
+
109
+ # Convert to RGB
110
+ pil_img = pil_img.convert("RGB")
111
+
112
+ # Convert to base64 data URI
113
+ buf = io.BytesIO()
114
+ pil_img.save(buf, format="PNG")
115
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
116
+
117
+ # Return message in vLLM format
118
+ return [
119
+ {
120
+ "role": "user",
121
+ "content": [
122
+ {"type": "image_url", "image_url": {"url": data_uri}},
123
+ {"type": "text", "text": prompt},
124
+ ],
125
+ }
126
+ ]
127
+
128
+
129
+ def create_dataset_card(
130
+ source_dataset: str,
131
+ model: str,
132
+ num_samples: int,
133
+ processing_time: str,
134
+ batch_size: int,
135
+ max_model_len: int,
136
+ max_tokens: int,
137
+ gpu_memory_utilization: float,
138
+ image_column: str = "image",
139
+ split: str = "train",
140
+ prompt_mode: str = "general",
141
+ ) -> str:
142
+ """Create a dataset card documenting the OCR process."""
143
+ model_name = model.split("/")[-1]
144
+
145
+ return f"""---
146
+ tags:
147
+ - ocr
148
+ - document-processing
149
+ - dots-ocr
150
+ - multilingual
151
+ - markdown
152
+ - uv-script
153
+ - generated
154
+ ---
155
+
156
+ # Document OCR using {model_name}
157
+
158
+ This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DoTS.ocr, a compact 1.7B multilingual model.
159
+
160
+ ## Processing Details
161
+
162
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
163
+ - **Model**: [{model}](https://huggingface.co/{model})
164
+ - **Number of Samples**: {num_samples:,}
165
+ - **Processing Time**: {processing_time}
166
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
167
+
168
+ ### Configuration
169
+
170
+ - **Image Column**: `{image_column}`
171
+ - **Output Column**: `markdown`
172
+ - **Dataset Split**: `{split}`
173
+ - **Batch Size**: {batch_size}
174
+ - **Prompt Mode**: {prompt_mode}
175
+ - **Max Model Length**: {max_model_len:,} tokens
176
+ - **Max Output Tokens**: {max_tokens:,}
177
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
178
+
179
+ ## Model Information
180
+
181
+ DoTS.ocr is a compact multilingual document parsing model that excels at:
182
+ - 🌍 **100+ Languages** - Multilingual document support
183
+ - 📊 **Table extraction** - Structured data recognition
184
+ - 📐 **Formulas** - Mathematical notation preservation
185
+ - 📝 **Layout-aware** - Reading order and structure preservation
186
+ - 🎯 **Compact** - Only 1.7B parameters
187
+
188
+ ## Dataset Structure
189
+
190
+ The dataset contains all original columns plus:
191
+ - `markdown`: The extracted text in markdown format
192
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
193
+
194
+ ## Usage
195
+
196
+ ```python
197
+ from datasets import load_dataset
198
+ import json
199
+
200
+ # Load the dataset
201
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
202
+
203
+ # Access the markdown text
204
+ for example in dataset:
205
+ print(example["markdown"])
206
+ break
207
+
208
+ # View all OCR models applied to this dataset
209
+ inference_info = json.loads(dataset[0]["inference_info"])
210
+ for info in inference_info:
211
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
212
+ ```
213
+
214
+ ## Reproduction
215
+
216
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR script:
217
+
218
+ ```bash
219
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \\
220
+ {source_dataset} \\
221
+ <output-dataset> \\
222
+ --image-column {image_column} \\
223
+ --batch-size {batch_size} \\
224
+ --prompt-mode {prompt_mode} \\
225
+ --max-model-len {max_model_len} \\
226
+ --max-tokens {max_tokens} \\
227
+ --gpu-memory-utilization {gpu_memory_utilization}
228
+ ```
229
+
230
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
231
+ """
232
+
233
+
234
+ def main(
235
+ input_dataset: str,
236
+ output_dataset: str,
237
+ image_column: str = "image",
238
+ batch_size: int = 16,
239
+ model: str = "rednote-hilab/dots.ocr",
240
+ max_model_len: int = 8192,
241
+ max_tokens: int = 8192,
242
+ gpu_memory_utilization: float = 0.8,
243
+ hf_token: str = None,
244
+ split: str = "train",
245
+ max_samples: int = None,
246
+ private: bool = False,
247
+ shuffle: bool = False,
248
+ seed: int = 42,
249
+ prompt_mode: str = "ocr",
250
+ custom_prompt: str = None,
251
+ output_column: str = "markdown",
252
+ ):
253
+ """Process images from a Hugging Face dataset through the DoTS.ocr model."""
254
+
255
+ # Check CUDA availability first
256
+ check_cuda_availability()
257
+
258
+ # Track processing start time
259
+ start_time = datetime.now()
260
+
261
+ # Enable HF_TRANSFER for faster downloads
262
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
263
+
264
+ # Login to HF if token provided
265
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
266
+ if HF_TOKEN:
267
+ login(token=HF_TOKEN)
268
+
269
+ # Determine prompt to use
270
+ if custom_prompt:
271
+ prompt = custom_prompt
272
+ logger.info(f"Using custom prompt: {prompt[:50]}...")
273
+ else:
274
+ prompt = PROMPT_TEMPLATES.get(prompt_mode, PROMPT_TEMPLATES["ocr"])
275
+ logger.info(f"Using prompt mode: {prompt_mode}")
276
+
277
+ # Load dataset
278
+ logger.info(f"Loading dataset: {input_dataset}")
279
+ dataset = load_dataset(input_dataset, split=split)
280
+
281
+ # Validate image column
282
+ if image_column not in dataset.column_names:
283
+ raise ValueError(
284
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
285
+ )
286
+
287
+ # Shuffle if requested
288
+ if shuffle:
289
+ logger.info(f"Shuffling dataset with seed {seed}")
290
+ dataset = dataset.shuffle(seed=seed)
291
+
292
+ # Limit samples if requested
293
+ if max_samples:
294
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
295
+ logger.info(f"Limited to {len(dataset)} samples")
296
+
297
+ # Initialize vLLM model
298
+ logger.info(f"Initializing vLLM with model: {model}")
299
+ logger.info("This may take a few minutes on first run...")
300
+ llm = LLM(
301
+ model=model,
302
+ trust_remote_code=True,
303
+ max_model_len=max_model_len,
304
+ gpu_memory_utilization=gpu_memory_utilization,
305
+ )
306
+
307
+ sampling_params = SamplingParams(
308
+ temperature=0.0, # Deterministic for OCR
309
+ max_tokens=max_tokens,
310
+ )
311
+
312
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
313
+ logger.info(f"Output will be written to column: {output_column}")
314
+
315
+ # Process images in batches
316
+ all_outputs = []
317
+
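+ # toolz.partition_all yields tuples of at most batch_size indices (the final batch
+ # may be shorter), and the tqdm total below uses ceiling division to match.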
318
+ for batch_indices in tqdm(
319
+ partition_all(batch_size, range(len(dataset))),
320
+ total=(len(dataset) + batch_size - 1) // batch_size,
321
+ desc="DoTS.ocr processing",
322
+ ):
323
+ batch_indices = list(batch_indices)
324
+ batch_images = [dataset[i][image_column] for i in batch_indices]
325
+
326
+ try:
327
+ # Create messages for batch
328
+ batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
329
+
330
+ # Process with vLLM
331
+ outputs = llm.chat(batch_messages, sampling_params)
332
+
333
+ # Extract outputs
334
+ for output in outputs:
335
+ text = output.outputs[0].text.strip()
336
+ all_outputs.append(text)
337
+
338
+ except Exception as e:
339
+ logger.error(f"Error processing batch: {e}")
340
+ # Add error placeholders for failed batch
341
+ all_outputs.extend(["[OCR ERROR]"] * len(batch_images))
342
+
343
+ # Calculate processing time
344
+ processing_duration = datetime.now() - start_time
345
+ processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
346
+
347
+ # Add output column to dataset
348
+ logger.info(f"Adding '{output_column}' column to dataset")
349
+ dataset = dataset.add_column(output_column, all_outputs)
350
+
351
+ # Handle inference_info tracking (for multi-model comparisons)
352
+ inference_entry = {
353
+ "model_id": model,
354
+ "column_name": output_column,
355
+ "timestamp": datetime.now().isoformat(),
356
+ "prompt_mode": prompt_mode if not custom_prompt else "custom",
357
+ }
358
+
359
+ if "inference_info" in dataset.column_names:
360
+ # Append to existing inference info
361
+ logger.info("Updating existing inference_info column")
362
+
363
+ def update_inference_info(example):
364
+ try:
365
+ existing_info = json.loads(example["inference_info"]) if example["inference_info"] else []
366
+ except (json.JSONDecodeError, TypeError):
367
+ existing_info = []
368
+
369
+ existing_info.append(inference_entry)
370
+ return {"inference_info": json.dumps(existing_info)}
371
+
372
+ dataset = dataset.map(update_inference_info)
373
+ else:
374
+ # Create new inference_info column
375
+ logger.info("Creating new inference_info column")
376
+ inference_list = [json.dumps([inference_entry])] * len(dataset)
377
+ dataset = dataset.add_column("inference_info", inference_list)
378
+
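+ # Illustrative shape of the stored value (example values only): each row ends up with
+ # a JSON list such as
+ #   [{"model_id": "rednote-hilab/dots.ocr", "column_name": "markdown",
+ #     "timestamp": "2025-01-01T00:00:00", "prompt_mode": "ocr"}]
+ # so a later OCR run on the same dataset can append its own entry.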
379
+ # Push to hub
380
+ logger.info(f"Pushing to {output_dataset}")
381
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
382
+
383
+ # Create and push dataset card
384
+ logger.info("Creating dataset card")
385
+ card_content = create_dataset_card(
386
+ source_dataset=input_dataset,
387
+ model=model,
388
+ num_samples=len(dataset),
389
+ processing_time=processing_time_str,
390
+ batch_size=batch_size,
391
+ max_model_len=max_model_len,
392
+ max_tokens=max_tokens,
393
+ gpu_memory_utilization=gpu_memory_utilization,
394
+ image_column=image_column,
395
+ split=split,
396
+ prompt_mode=prompt_mode if not custom_prompt else "custom",
397
+ )
398
+
399
+ card = DatasetCard(card_content)
400
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
401
+
402
+ logger.info("✅ DoTS.ocr processing complete!")
403
+ logger.info(f"Dataset available at: https://huggingface.co/datasets/{output_dataset}")
404
+ logger.info(f"Processing time: {processing_time_str}")
405
+
406
+
407
+ if __name__ == "__main__":
408
+ # Show example usage if no arguments
409
+ if len(sys.argv) == 1:
410
+ print("=" * 80)
411
+ print("DoTS.ocr Document Processing")
412
+ print("=" * 80)
413
+ print("\nCompact 1.7B multilingual OCR model supporting 100+ languages")
414
+ print("\nFeatures:")
415
+ print("- 🌍 Multilingual support (100+ languages)")
416
+ print("- ⚡ Fast processing with vLLM (2-3x speedup)")
417
+ print("- 📊 Table extraction and formatting")
418
+ print("- 📐 Formula recognition")
419
+ print("- 📝 Layout-aware text extraction")
420
+ print("\nExample usage:")
421
+ print("\n1. Basic OCR:")
422
+ print(" uv run dots-ocr.py input-dataset output-dataset")
423
+ print("\n2. With custom settings:")
424
+ print(" uv run dots-ocr.py docs analyzed-docs --batch-size 20 --max-samples 100")
425
+ print("\n3. Layout analysis with structure:")
426
+ print(" uv run dots-ocr.py papers analyzed-structure --prompt-mode layout-all")
427
+ print("\n4. Layout detection only (no text):")
428
+ print(" uv run dots-ocr.py docs layout-info --prompt-mode layout-only")
429
+ print("\n5. Running on HF Jobs:")
430
+ print(" hf jobs uv run --flavor l4x1 \\")
431
+ print(" -e HF_TOKEN=$(python3 -c \"from huggingface_hub import get_token; print(get_token())\") \\")
432
+ print(" -e HF_HUB_ENABLE_HF_TRANSFER=1 \\")
433
+ print(" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \\")
434
+ print(" input-dataset output-dataset")
435
+ print("\n" + "=" * 80)
436
+ print("\nFor full help, run: uv run dots-ocr.py --help")
437
+ sys.exit(0)
438
+
439
+ parser = argparse.ArgumentParser(
440
+ description="Document OCR using DoTS.ocr (1.7B multilingual model)",
441
+ formatter_class=argparse.RawDescriptionHelpFormatter,
442
+ epilog="""
443
+ Prompt Modes (official DoTS.ocr prompts):
444
+ ocr - Simple text extraction (default)
445
+ layout-all - Layout analysis with bboxes, categories, and text (JSON output)
446
+ layout-only - Layout detection with bboxes and categories only (JSON output)
447
+
448
+ Examples:
449
+ # Basic text OCR (default)
450
+ uv run dots-ocr.py my-docs analyzed-docs
451
+
452
+ # Full layout analysis with structure
453
+ uv run dots-ocr.py papers structured --prompt-mode layout-all
454
+
455
+ # Random sampling for testing
456
+ uv run dots-ocr.py large-dataset test --max-samples 50 --shuffle
457
+ """,
458
+ )
459
+
460
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
461
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
462
+ parser.add_argument(
463
+ "--image-column",
464
+ default="image",
465
+ help="Column containing images (default: image)",
466
+ )
467
+ parser.add_argument(
468
+ "--batch-size",
469
+ type=int,
470
+ default=16,
471
+ help="Batch size for processing (default: 16, DoTS handles 16-30 well)",
472
+ )
473
+ parser.add_argument(
474
+ "--model",
475
+ default="rednote-hilab/dots.ocr",
476
+ help="Model to use (default: rednote-hilab/dots.ocr)",
477
+ )
478
+ parser.add_argument(
479
+ "--max-model-len",
480
+ type=int,
481
+ default=8192,
482
+ help="Maximum model context length (default: 8192)",
483
+ )
484
+ parser.add_argument(
485
+ "--max-tokens",
486
+ type=int,
487
+ default=8192,
488
+ help="Maximum tokens to generate (default: 8192)",
489
+ )
490
+ parser.add_argument(
491
+ "--gpu-memory-utilization",
492
+ type=float,
493
+ default=0.8,
494
+ help="GPU memory utilization (default: 0.8)",
495
+ )
496
+ parser.add_argument("--hf-token", help="Hugging Face API token")
497
+ parser.add_argument(
498
+ "--split", default="train", help="Dataset split to use (default: train)"
499
+ )
500
+ parser.add_argument(
501
+ "--max-samples",
502
+ type=int,
503
+ help="Maximum number of samples to process (for testing)",
504
+ )
505
+ parser.add_argument(
506
+ "--private", action="store_true", help="Make output dataset private"
507
+ )
508
+ parser.add_argument(
509
+ "--shuffle", action="store_true", help="Shuffle dataset before processing"
510
+ )
511
+ parser.add_argument(
512
+ "--seed",
513
+ type=int,
514
+ default=42,
515
+ help="Random seed for shuffling (default: 42)",
516
+ )
517
+ parser.add_argument(
518
+ "--prompt-mode",
519
+ choices=list(PROMPT_TEMPLATES.keys()),
520
+ default="ocr",
521
+ help=f"Prompt template to use: {', '.join(PROMPT_TEMPLATES.keys())} (default: ocr)",
522
+ )
523
+ parser.add_argument(
524
+ "--custom-prompt",
525
+ help="Custom prompt text (overrides --prompt-mode)",
526
+ )
527
+ parser.add_argument(
528
+ "--output-column",
529
+ default="markdown",
530
+ help="Column name for output text (default: markdown)",
531
+ )
532
+
533
+ args = parser.parse_args()
534
+
535
+ main(
536
+ input_dataset=args.input_dataset,
537
+ output_dataset=args.output_dataset,
538
+ image_column=args.image_column,
539
+ batch_size=args.batch_size,
540
+ model=args.model,
541
+ max_model_len=args.max_model_len,
542
+ max_tokens=args.max_tokens,
543
+ gpu_memory_utilization=args.gpu_memory_utilization,
544
+ hf_token=args.hf_token,
545
+ split=args.split,
546
+ max_samples=args.max_samples,
547
+ private=args.private,
548
+ shuffle=args.shuffle,
549
+ seed=args.seed,
550
+ prompt_mode=args.prompt_mode,
551
+ custom_prompt=args.custom_prompt,
552
+ output_column=args.output_column,
553
+ )
lighton-ocr.py ADDED
@@ -0,0 +1,639 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # "triton-kernels @ git+https://github.com/triton-lang/[email protected]#subdirectory=python/triton_kernels",
12
+ # ]
13
+ #
14
+ # [[tool.uv.index]]
15
+ # url = "https://wheels.vllm.ai/nightly"
16
+ #
17
+ # [tool.uv]
18
+ # prerelease = "allow"
19
+ # ///
20
+
21
+ """
22
+ Convert document images to markdown using LightOnOCR with vLLM.
23
+
24
+ LightOnOCR is a compact 1B multilingual OCR model optimized for production speed.
25
+ It combines a Pixtral ViT encoder with a Qwen3 language model for efficient document parsing.
26
+
27
+ NOTE: Requires vLLM nightly wheels for LightOnOCR support. First run may take
28
+ a few minutes to download and install dependencies.
29
+
30
+ Features:
31
+ - ⚡ Fastest: 5.71 pages/sec on H100 GPU
32
+ - 🎯 Compact: Only 1B parameters
33
+ - 🌍 Multilingual with European language optimization
34
+ - 📐 LaTeX formula recognition
35
+ - 📊 Table extraction (markdown format)
36
+ - 📝 Document structure preservation
37
+ - 🔤 3 vocabulary sizes (151k/32k/16k tokens)
38
+
39
+ Model: lightonai/LightOnOCR-1B-1025
40
+ vLLM: Requires nightly build from main branch
41
+ Performance: 76.1% overall benchmark score
42
+ """
43
+
44
+ import argparse
45
+ import base64
46
+ import io
47
+ import json
48
+ import logging
49
+ import os
50
+ import sys
51
+ from typing import Any, Dict, List, Union
52
+ from datetime import datetime
53
+
54
+ import torch
55
+ from datasets import load_dataset
56
+ from huggingface_hub import DatasetCard, login
57
+ from PIL import Image
58
+ from toolz import partition_all
59
+ from tqdm.auto import tqdm
60
+ from vllm import LLM, SamplingParams
61
+
62
+ logging.basicConfig(level=logging.INFO)
63
+ logger = logging.getLogger(__name__)
64
+
65
+
66
+ # Model variants with different vocabulary sizes
67
+ MODEL_VARIANTS = {
68
+ "151k": "lightonai/LightOnOCR-1B-1025", # Full vocabulary (default)
69
+ "32k": "lightonai/LightOnOCR-0.9B-32k-1025", # European languages optimized
70
+ "16k": "lightonai/LightOnOCR-0.9B-16k-1025", # European languages optimized
71
+ }
72
+
73
+
74
+ def check_cuda_availability():
75
+ """Check if CUDA is available and exit if not."""
76
+ if not torch.cuda.is_available():
77
+ logger.error("CUDA is not available. This script requires a GPU.")
78
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
79
+ sys.exit(1)
80
+ else:
81
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
82
+
83
+
84
+ def resize_image_to_target(image: Image.Image, target_size: int = 1540) -> Image.Image:
85
+ """
86
+ Resize image so longest dimension is target_size while maintaining aspect ratio.
87
+
88
+ LightOnOCR was trained with images at 1540px max resolution and 200 DPI.
89
+ """
90
+ width, height = image.size
91
+
92
+ # If image is already smaller, don't upscale
93
+ if max(width, height) <= target_size:
94
+ return image
95
+
96
+ # Calculate new dimensions maintaining aspect ratio
97
+ if width > height:
98
+ new_width = target_size
99
+ new_height = int(height * (target_size / width))
100
+ else:
101
+ new_height = target_size
102
+ new_width = int(width * (target_size / height))
103
+
104
+ return image.resize((new_width, new_height), Image.Resampling.LANCZOS)
105
+
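+ # Worked example of the rule above: a 3000x2000 scan becomes 1540x1026 (longest side
+ # capped at target_size, aspect ratio preserved), while a 1200x800 image is returned
+ # unchanged because upscaling is skipped.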
106
+
107
+ def make_ocr_message(
108
+ image: Union[Image.Image, Dict[str, Any], str],
109
+ resize: bool = True,
110
+ target_size: int = 1540,
111
+ ) -> List[Dict]:
112
+ """
113
+ Create chat message for OCR processing.
114
+
115
+ LightOnOCR was trained with 1540px max resolution at 200 DPI for optimal results.
116
+ """
117
+ # Convert to PIL Image if needed
118
+ if isinstance(image, Image.Image):
119
+ pil_img = image
120
+ elif isinstance(image, dict) and "bytes" in image:
121
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
122
+ elif isinstance(image, str):
123
+ pil_img = Image.open(image)
124
+ else:
125
+ raise ValueError(f"Unsupported image type: {type(image)}")
126
+
127
+ # Convert to RGB
128
+ pil_img = pil_img.convert("RGB")
129
+
130
+ # Resize to optimal dimensions for LightOnOCR
131
+ if resize:
132
+ pil_img = resize_image_to_target(pil_img, target_size)
133
+ logger.debug(f"Resized image to {pil_img.size}")
134
+
135
+ # Convert to base64 data URI
136
+ buf = io.BytesIO()
137
+ pil_img.save(buf, format="PNG")
138
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
139
+
140
+ # LightOnOCR uses message format with empty text prompt before image
141
+ # (matching official demo: text first, then image)
142
+ return [
143
+ {
144
+ "role": "user",
145
+ "content": [
146
+ {"type": "text", "text": ""},
147
+ {"type": "image_url", "image_url": {"url": data_uri}},
148
+ ],
149
+ }
150
+ ]
151
+
152
+
153
+ def create_dataset_card(
154
+ source_dataset: str,
155
+ model: str,
156
+ vocab_size: str,
157
+ num_samples: int,
158
+ processing_time: str,
159
+ batch_size: int,
160
+ max_model_len: int,
161
+ max_tokens: int,
162
+ gpu_memory_utilization: float,
163
+ temperature: float,
164
+ top_p: float,
165
+ target_size: int,
166
+ image_column: str = "image",
167
+ split: str = "train",
168
+ ) -> str:
169
+ """Create a dataset card documenting the OCR process."""
170
+ model_name = model.split("/")[-1]
171
+
172
+ return f"""---
173
+ tags:
174
+ - ocr
175
+ - document-processing
176
+ - lighton-ocr
177
+ - markdown
178
+ - uv-script
179
+ - generated
180
+ ---
181
+
182
+ # Document OCR using {model_name}
183
+
184
+ This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using LightOnOCR, a fast and compact 1B OCR model.
185
+
186
+ ## Processing Details
187
+
188
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
189
+ - **Model**: [{model}](https://huggingface.co/{model})
190
+ - **Vocabulary Size**: {vocab_size} tokens
191
+ - **Number of Samples**: {num_samples:,}
192
+ - **Processing Time**: {processing_time}
193
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
194
+
195
+ ### Configuration
196
+
197
+ - **Image Column**: `{image_column}`
198
+ - **Output Column**: `markdown`
199
+ - **Dataset Split**: `{split}`
200
+ - **Batch Size**: {batch_size}
201
+ - **Target Image Size**: {target_size}px (longest dimension)
202
+ - **Max Model Length**: {max_model_len:,} tokens
203
+ - **Max Output Tokens**: {max_tokens:,}
204
+ - **Temperature**: {temperature}
205
+ - **Top P**: {top_p}
206
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
207
+
208
+ ## Model Information
209
+
210
+ LightOnOCR is a fast, compact OCR model that excels at:
211
+ - ⚡ **Production Speed** - 5.71 pages/second on H100 GPU
212
+ - 🎯 **Compact Size** - Only 1B parameters
213
+ - 📐 **LaTeX formulas** - Mathematical notation in LaTeX format
214
+ - 📊 **Tables** - Extracted and formatted as markdown
215
+ - 📝 **Document structure** - Hierarchy and layout preservation
216
+ - 🌍 **Multilingual** - Optimized for European languages
217
+ - 🔤 **Flexible vocabulary** - 151k/32k/16k token variants
218
+
219
+ ### Vocabulary Variants
220
+
221
+ - **151k tokens**: Full vocabulary, supports all languages
222
+ - **32k tokens**: European languages optimized (~12% faster decoding)
223
+ - **16k tokens**: European languages optimized (~12% faster decoding)
224
+
225
+ ## Dataset Structure
226
+
227
+ The dataset contains all original columns plus:
228
+ - `markdown`: The extracted text in markdown format with LaTeX formulas
229
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
230
+
231
+ ## Usage
232
+
233
+ ```python
234
+ from datasets import load_dataset
235
+ import json
236
+
237
+ # Load the dataset
238
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
239
+
240
+ # Access the markdown text
241
+ for example in dataset:
242
+ print(example["markdown"])
243
+ break
244
+
245
+ # View all OCR models applied to this dataset
246
+ inference_info = json.loads(dataset[0]["inference_info"])
247
+ for info in inference_info:
248
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
249
+ ```
250
+
251
+ ## Reproduction
252
+
253
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) LightOnOCR script:
254
+
255
+ ```bash
256
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \\
257
+ {source_dataset} \\
258
+ <output-dataset> \\
259
+ --vocab-size {vocab_size} \\
260
+ --image-column {image_column} \\
261
+ --batch-size {batch_size}
262
+ ```
263
+
264
+ ## Performance
265
+
266
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.2f} images/second
267
+ - **Benchmark Score**: 76.1% overall (across diverse document types)
268
+ - **Optimization**: Native resolution ViT + lightweight decoder
269
+
270
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
271
+ """
272
+
273
+
274
+ def main(
275
+ input_dataset: str,
276
+ output_dataset: str,
277
+ image_column: str = "image",
278
+ batch_size: int = 16,
279
+ vocab_size: str = "151k",
280
+ max_model_len: int = 8192,
281
+ max_tokens: int = 6500,
282
+ temperature: float = 0.2,
283
+ top_p: float = 0.9,
284
+ gpu_memory_utilization: float = 0.8,
285
+ target_size: int = 1540,
286
+ no_resize: bool = False,
287
+ hf_token: str = None,
288
+ split: str = "train",
289
+ max_samples: int = None,
290
+ private: bool = False,
291
+ shuffle: bool = False,
292
+ seed: int = 42,
293
+ output_column: str = "markdown",
294
+ ):
295
+ """Process images from a Hugging Face dataset through the LightOnOCR model."""
296
+
297
+ # Check CUDA availability first
298
+ check_cuda_availability()
299
+
300
+ # Track processing start time
301
+ start_time = datetime.now()
302
+
303
+ # Enable HF_TRANSFER for faster downloads
304
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
305
+
306
+ # Login to HF if token provided
307
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
308
+ if HF_TOKEN:
309
+ login(token=HF_TOKEN)
310
+
311
+ # Get model ID from vocabulary size
312
+ if vocab_size not in MODEL_VARIANTS:
313
+ raise ValueError(
314
+ f"Invalid vocab_size '{vocab_size}'. Choose from: {list(MODEL_VARIANTS.keys())}"
315
+ )
316
+ model = MODEL_VARIANTS[vocab_size]
317
+ logger.info(f"Using model: {model} ({vocab_size} vocabulary)")
318
+
319
+ # Load dataset
320
+ logger.info(f"Loading dataset: {input_dataset}")
321
+ dataset = load_dataset(input_dataset, split=split)
322
+
323
+ # Validate image column
324
+ if image_column not in dataset.column_names:
325
+ raise ValueError(
326
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
327
+ )
328
+
329
+ # Shuffle if requested
330
+ if shuffle:
331
+ logger.info(f"Shuffling dataset with seed {seed}")
332
+ dataset = dataset.shuffle(seed=seed)
333
+
334
+ # Limit samples if requested
335
+ if max_samples:
336
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
337
+ logger.info(f"Limited to {len(dataset)} samples")
338
+
339
+ # Initialize vLLM model
340
+ logger.info(f"Initializing vLLM with LightOnOCR model: {model}")
341
+ logger.info("This may take a few minutes on first run...")
342
+ llm = LLM(
343
+ model=model,
344
+ trust_remote_code=True,
345
+ max_model_len=max_model_len,
346
+ gpu_memory_utilization=gpu_memory_utilization,
347
+ limit_mm_per_prompt={"image": 1}, # One image per prompt
348
+ enforce_eager=False, # Use torch.compile for better performance
349
+ )
350
+
351
+ # LightOnOCR recommended sampling parameters
352
+ sampling_params = SamplingParams(
353
+ temperature=temperature,
354
+ top_p=top_p,
355
+ max_tokens=max_tokens,
356
+ )
357
+
358
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
359
+ logger.info(f"Output will be written to column: {output_column}")
360
+ if not no_resize:
361
+ logger.info(f"Images will be resized to {target_size}px (longest dimension)")
362
+
363
+ # Process images in batches
364
+ all_outputs = []
365
+
366
+ for batch_indices in tqdm(
367
+ partition_all(batch_size, range(len(dataset))),
368
+ total=(len(dataset) + batch_size - 1) // batch_size,
369
+ desc="LightOnOCR processing",
370
+ ):
371
+ batch_indices = list(batch_indices)
372
+ batch_images = [dataset[i][image_column] for i in batch_indices]
373
+
374
+ try:
375
+ # Create messages for batch
376
+ batch_messages = [
377
+ make_ocr_message(img, resize=not no_resize, target_size=target_size)
378
+ for img in batch_images
379
+ ]
380
+
381
+ # Process with vLLM
382
+ outputs = llm.chat(batch_messages, sampling_params)
383
+
384
+ # Extract outputs
385
+ for output in outputs:
386
+ text = output.outputs[0].text.strip()
387
+ all_outputs.append(text)
388
+
389
+ except Exception as e:
390
+ logger.error(f"Error processing batch: {e}")
391
+ # Add error placeholders for failed batch
392
+ all_outputs.extend(["[OCR ERROR]"] * len(batch_images))
393
+
394
+ # Calculate processing time
395
+ processing_duration = datetime.now() - start_time
396
+ processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
397
+
398
+ # Add output column to dataset
399
+ logger.info(f"Adding '{output_column}' column to dataset")
400
+ dataset = dataset.add_column(output_column, all_outputs)
401
+
402
+ # Handle inference_info tracking (for multi-model comparisons)
403
+ inference_entry = {
404
+ "model_id": model,
405
+ "model_name": "LightOnOCR",
406
+ "vocab_size": vocab_size,
407
+ "column_name": output_column,
408
+ "timestamp": datetime.now().isoformat(),
409
+ "temperature": temperature,
410
+ "top_p": top_p,
411
+ "max_tokens": max_tokens,
412
+ "target_size": target_size if not no_resize else "original",
413
+ }
414
+
415
+ if "inference_info" in dataset.column_names:
416
+ # Append to existing inference info
417
+ logger.info("Updating existing inference_info column")
418
+
419
+ def update_inference_info(example):
420
+ try:
421
+ existing_info = json.loads(example["inference_info"]) if example["inference_info"] else []
422
+ except (json.JSONDecodeError, TypeError):
423
+ existing_info = []
424
+
425
+ existing_info.append(inference_entry)
426
+ return {"inference_info": json.dumps(existing_info)}
427
+
428
+ dataset = dataset.map(update_inference_info)
429
+ else:
430
+ # Create new inference_info column
431
+ logger.info("Creating new inference_info column")
432
+ inference_list = [json.dumps([inference_entry])] * len(dataset)
433
+ dataset = dataset.add_column("inference_info", inference_list)
434
+
435
+ # Push to hub
436
+ logger.info(f"Pushing to {output_dataset}")
437
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
438
+
439
+ # Create and push dataset card
440
+ logger.info("Creating dataset card")
441
+ card_content = create_dataset_card(
442
+ source_dataset=input_dataset,
443
+ model=model,
444
+ vocab_size=vocab_size,
445
+ num_samples=len(dataset),
446
+ processing_time=processing_time_str,
447
+ batch_size=batch_size,
448
+ max_model_len=max_model_len,
449
+ max_tokens=max_tokens,
450
+ gpu_memory_utilization=gpu_memory_utilization,
451
+ temperature=temperature,
452
+ top_p=top_p,
453
+ target_size=target_size,
454
+ image_column=image_column,
455
+ split=split,
456
+ )
457
+
458
+ card = DatasetCard(card_content)
459
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
460
+
461
+ logger.info("✅ LightOnOCR processing complete!")
462
+ logger.info(f"Dataset available at: https://huggingface.co/datasets/{output_dataset}")
463
+ logger.info(f"Processing time: {processing_time_str}")
464
+ logger.info(f"Processing speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec")
465
+
466
+
467
+ if __name__ == "__main__":
468
+ # Show example usage if no arguments
469
+ if len(sys.argv) == 1:
470
+ print("=" * 80)
471
+ print("LightOnOCR Document Processing")
472
+ print("=" * 80)
473
+ print("\nFast, compact 1B OCR model for production workloads")
474
+ print("\nFeatures:")
475
+ print("- ⚡ Fastest processing: 5.71 pages/sec on H100")
476
+ print("- 🎯 Compact: Only 1B parameters")
477
+ print("- 🌍 Multilingual with European language optimization")
478
+ print("- 📐 LaTeX formula recognition")
479
+ print("- 📊 Table extraction (markdown format)")
480
+ print("- 🔤 3 vocabulary sizes for speed/quality tradeoffs")
481
+ print("\nExample usage:")
482
+ print("\n1. Basic OCR (full vocabulary):")
483
+ print(" uv run lighton-ocr.py input-dataset output-dataset")
484
+ print("\n2. European languages optimized (faster):")
485
+ print(" uv run lighton-ocr.py docs results --vocab-size 32k")
486
+ print("\n3. Custom batch size for performance:")
487
+ print(" uv run lighton-ocr.py docs results --batch-size 32")
488
+ print("\n4. Test with small sample:")
489
+ print(" uv run lighton-ocr.py large-dataset test --max-samples 50 --shuffle")
490
+ print("\n5. Original image size (no resize):")
491
+ print(" uv run lighton-ocr.py docs output --no-resize")
492
+ print("\n6. Running on HF Jobs:")
493
+ print(" hf jobs uv run --flavor l4x1 \\")
494
+ print(" -e HF_TOKEN=$(python3 -c \"from huggingface_hub import get_token; print(get_token())\") \\")
495
+ print(" -e HF_HUB_ENABLE_HF_TRANSFER=1 \\")
496
+ print(" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \\")
497
+ print(" input-dataset output-dataset --vocab-size 32k")
498
+ print("\n" + "=" * 80)
499
+ print("\nVocabulary Size Options:")
500
+ print(" 151k - Full vocabulary (all languages)")
501
+ print(" 32k - European languages (~12% faster)")
502
+ print(" 16k - European languages (~12% faster)")
503
+ print("\nFor full help, run: uv run lighton-ocr.py --help")
504
+ sys.exit(0)
505
+
506
+ parser = argparse.ArgumentParser(
507
+ description="Document OCR using LightOnOCR (fast 1B model)",
508
+ formatter_class=argparse.RawDescriptionHelpFormatter,
509
+ epilog="""
510
+ Vocabulary Size Options:
511
+ 151k Full vocabulary supporting all languages (default)
512
+ 32k European languages optimized (~12% faster decoding)
513
+ 16k European languages optimized (~12% faster decoding)
514
+
515
+ Examples:
516
+ # Basic text OCR with full vocabulary
517
+ uv run lighton-ocr.py my-docs analyzed-docs
518
+
519
+ # Fast processing for European languages
520
+ uv run lighton-ocr.py papers results --vocab-size 32k
521
+
522
+ # Test with random sampling
523
+ uv run lighton-ocr.py large-dataset test --max-samples 50 --shuffle
524
+
525
+ # Custom batch size for GPU optimization
526
+ uv run lighton-ocr.py dataset output --batch-size 32 --gpu-memory-utilization 0.9
527
+ """,
528
+ )
529
+
530
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
531
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
532
+ parser.add_argument(
533
+ "--image-column",
534
+ default="image",
535
+ help="Column containing images (default: image)",
536
+ )
537
+ parser.add_argument(
538
+ "--batch-size",
539
+ type=int,
540
+ default=16,
541
+ help="Batch size for processing (default: 16)",
542
+ )
543
+ parser.add_argument(
544
+ "--vocab-size",
545
+ default="151k",
546
+ choices=list(MODEL_VARIANTS.keys()),
547
+ help="Vocabulary size variant (default: 151k)",
548
+ )
549
+ parser.add_argument(
550
+ "--max-model-len",
551
+ type=int,
552
+ default=8192,
553
+ help="Maximum model context length (default: 8192)",
554
+ )
555
+ parser.add_argument(
556
+ "--max-tokens",
557
+ type=int,
558
+ default=6500,
559
+ help="Maximum tokens to generate (default: 6500)",
560
+ )
561
+ parser.add_argument(
562
+ "--temperature",
563
+ type=float,
564
+ default=0.2,
565
+ help="Sampling temperature (default: 0.2)",
566
+ )
567
+ parser.add_argument(
568
+ "--top-p",
569
+ type=float,
570
+ default=0.9,
571
+ help="Top-p sampling parameter (default: 0.9)",
572
+ )
573
+ parser.add_argument(
574
+ "--gpu-memory-utilization",
575
+ type=float,
576
+ default=0.8,
577
+ help="GPU memory utilization (default: 0.8)",
578
+ )
579
+ parser.add_argument(
580
+ "--target-size",
581
+ type=int,
582
+ default=1540,
583
+ help="Target size for longest image dimension in pixels (default: 1540, matching training)",
584
+ )
585
+ parser.add_argument(
586
+ "--no-resize",
587
+ action="store_true",
588
+ help="Don't resize images (use original size)",
589
+ )
590
+ parser.add_argument("--hf-token", help="Hugging Face API token")
591
+ parser.add_argument(
592
+ "--split", default="train", help="Dataset split to use (default: train)"
593
+ )
594
+ parser.add_argument(
595
+ "--max-samples",
596
+ type=int,
597
+ help="Maximum number of samples to process (for testing)",
598
+ )
599
+ parser.add_argument(
600
+ "--private", action="store_true", help="Make output dataset private"
601
+ )
602
+ parser.add_argument(
603
+ "--shuffle", action="store_true", help="Shuffle dataset before processing"
604
+ )
605
+ parser.add_argument(
606
+ "--seed",
607
+ type=int,
608
+ default=42,
609
+ help="Random seed for shuffling (default: 42)",
610
+ )
611
+ parser.add_argument(
612
+ "--output-column",
613
+ default="markdown",
614
+ help="Column name for output text (default: markdown)",
615
+ )
616
+
617
+ args = parser.parse_args()
618
+
619
+ main(
620
+ input_dataset=args.input_dataset,
621
+ output_dataset=args.output_dataset,
622
+ image_column=args.image_column,
623
+ batch_size=args.batch_size,
624
+ vocab_size=args.vocab_size,
625
+ max_model_len=args.max_model_len,
626
+ max_tokens=args.max_tokens,
627
+ temperature=args.temperature,
628
+ top_p=args.top_p,
629
+ gpu_memory_utilization=args.gpu_memory_utilization,
630
+ target_size=args.target_size,
631
+ no_resize=args.no_resize,
632
+ hf_token=args.hf_token,
633
+ split=args.split,
634
+ max_samples=args.max_samples,
635
+ private=args.private,
636
+ shuffle=args.shuffle,
637
+ seed=args.seed,
638
+ output_column=args.output_column,
639
+ )
nanonets-ocr.py ADDED
@@ -0,0 +1,507 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch", # Added for CUDA check
11
+ # ]
12
+ #
13
+ # ///
14
+
15
+ """
16
+ Convert document images to markdown using Nanonets-OCR-s with vLLM.
17
+
18
+ This script processes images through the Nanonets-OCR-s model to extract
19
+ text and structure as markdown, ideal for document understanding tasks.
20
+
21
+ Features:
22
+ - LaTeX equation recognition
23
+ - Table extraction and formatting
24
+ - Document structure preservation
25
+ - Signature and watermark detection
26
+ """
27
+
28
+ import argparse
29
+ import base64
30
+ import io
31
+ import json
32
+ import logging
33
+ import os
34
+ import sys
35
+ from typing import Any, Dict, List, Union
36
+
37
+ import torch
38
+ from datasets import load_dataset
39
+ from huggingface_hub import DatasetCard, login
40
+ from PIL import Image
41
+ from toolz import partition_all
42
+ from tqdm.auto import tqdm
43
+ from vllm import LLM, SamplingParams
44
+ from datetime import datetime
45
+
46
+ logging.basicConfig(level=logging.INFO)
47
+ logger = logging.getLogger(__name__)
48
+
49
+
50
+ def check_cuda_availability():
51
+ """Check if CUDA is available and exit if not."""
52
+ if not torch.cuda.is_available():
53
+ logger.error("CUDA is not available. This script requires a GPU.")
54
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
55
+ sys.exit(1)
56
+ else:
57
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
58
+
59
+
60
+ def make_ocr_message(
61
+ image: Union[Image.Image, Dict[str, Any], str],
62
+ prompt: str = "Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes.",
63
+ ) -> List[Dict]:
64
+ """Create chat message for OCR processing."""
65
+ # Convert to PIL Image if needed
66
+ if isinstance(image, Image.Image):
67
+ pil_img = image
68
+ elif isinstance(image, dict) and "bytes" in image:
69
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
70
+ elif isinstance(image, str):
71
+ pil_img = Image.open(image)
72
+ else:
73
+ raise ValueError(f"Unsupported image type: {type(image)}")
74
+
75
+ # Convert to base64 data URI
76
+ buf = io.BytesIO()
77
+ pil_img.save(buf, format="PNG")
78
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
79
+
80
+ # Return message in vLLM format
81
+ return [
82
+ {
83
+ "role": "user",
84
+ "content": [
85
+ {"type": "image_url", "image_url": {"url": data_uri}},
86
+ {"type": "text", "text": prompt},
87
+ ],
88
+ }
89
+ ]
90
+
91
+
92
+ def create_dataset_card(
93
+ source_dataset: str,
94
+ model: str,
95
+ num_samples: int,
96
+ processing_time: str,
97
+ batch_size: int,
98
+ max_model_len: int,
99
+ max_tokens: int,
100
+ gpu_memory_utilization: float,
101
+ image_column: str = "image",
102
+ split: str = "train",
103
+ ) -> str:
104
+ """Create a dataset card documenting the OCR process."""
105
+ model_name = model.split("/")[-1]
106
+
107
+ return f"""---
108
+ viewer: false
109
+ tags:
110
+ - ocr
111
+ - document-processing
112
+ - nanonets
113
+ - markdown
114
+ - uv-script
115
+ - generated
116
+ ---
117
+
118
+ # Document OCR using {model_name}
119
+
120
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using Nanonets-OCR-s.
121
+
122
+ ## Processing Details
123
+
124
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
125
+ - **Model**: [{model}](https://huggingface.co/{model})
126
+ - **Number of Samples**: {num_samples:,}
127
+ - **Processing Time**: {processing_time}
128
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
129
+
130
+ ### Configuration
131
+
132
+ - **Image Column**: `{image_column}`
133
+ - **Output Column**: `markdown`
134
+ - **Dataset Split**: `{split}`
135
+ - **Batch Size**: {batch_size}
136
+ - **Max Model Length**: {max_model_len:,} tokens
137
+ - **Max Output Tokens**: {max_tokens:,}
138
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
139
+
140
+ ## Model Information
141
+
142
+ Nanonets-OCR-s is a state-of-the-art document OCR model that excels at:
143
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
144
+ - 📊 **Tables** - Extracted and formatted as HTML
145
+ - 📝 **Document structure** - Headers, lists, and formatting maintained
146
+ - 🖼️ **Images** - Captions and descriptions included in `<img>` tags
147
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
148
+ - 🔖 **Watermarks** - Wrapped in `<watermark>` tags
149
+ - 📄 **Page numbers** - Wrapped in `<page_number>` tags
150
+
151
+ ## Dataset Structure
152
+
153
+ The dataset contains all original columns plus:
154
+ - `markdown`: The extracted text in markdown format with preserved structure
155
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
156
+
157
+ ## Usage
158
+
159
+ ```python
160
+ from datasets import load_dataset
161
+ import json
162
+
163
+ # Load the dataset
164
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
165
+
166
+ # Access the markdown text
167
+ for example in dataset:
168
+ print(example["markdown"])
169
+ break
170
+
171
+ # View all OCR models applied to this dataset
172
+ inference_info = json.loads(dataset[0]["inference_info"])
173
+ for info in inference_info:
174
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
175
+ ```
176
+
177
+ ## Reproduction
178
+
179
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) Nanonets OCR script:
180
+
181
+ ```bash
182
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \\
183
+ {source_dataset} \\
184
+ <output-dataset> \\
185
+ --image-column {image_column} \\
186
+ --batch-size {batch_size} \\
187
+ --max-model-len {max_model_len} \\
188
+ --max-tokens {max_tokens} \\
189
+ --gpu-memory-utilization {gpu_memory_utilization}
190
+ ```
191
+
192
+ ## Performance
193
+
194
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
195
+ - **GPU Configuration**: vLLM with {gpu_memory_utilization:.0%} GPU memory utilization
196
+
197
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
198
+ """
199
+
200
+
201
+ def main(
202
+ input_dataset: str,
203
+ output_dataset: str,
204
+ image_column: str = "image",
205
+ batch_size: int = 32,
206
+ model: str = "nanonets/Nanonets-OCR-s",
207
+ max_model_len: int = 8192,
208
+ max_tokens: int = 4096,
209
+ gpu_memory_utilization: float = 0.8,
210
+ hf_token: str = None,
211
+ split: str = "train",
212
+ max_samples: int = None,
213
+ private: bool = False,
214
+ shuffle: bool = False,
215
+ seed: int = 42,
216
+ ):
217
+ """Process images from HF dataset through OCR model."""
218
+
219
+ # Check CUDA availability first
220
+ check_cuda_availability()
221
+
222
+ # Track processing start time
223
+ start_time = datetime.now()
224
+
225
+ # Enable HF_TRANSFER for faster downloads
226
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
227
+
228
+ # Login to HF if token provided
229
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
230
+ if HF_TOKEN:
231
+ login(token=HF_TOKEN)
232
+
233
+ # Load dataset
234
+ logger.info(f"Loading dataset: {input_dataset}")
235
+ dataset = load_dataset(input_dataset, split=split)
236
+
237
+ # Validate image column
238
+ if image_column not in dataset.column_names:
239
+ raise ValueError(
240
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
241
+ )
242
+
243
+ # Shuffle if requested
244
+ if shuffle:
245
+ logger.info(f"Shuffling dataset with seed {seed}")
246
+ dataset = dataset.shuffle(seed=seed)
247
+
248
+ # Limit samples if requested
249
+ if max_samples:
250
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
251
+ logger.info(f"Limited to {len(dataset)} samples")
252
+
253
+ # Initialize vLLM
254
+ logger.info(f"Initializing vLLM with model: {model}")
255
+ llm = LLM(
256
+ model=model,
257
+ trust_remote_code=True,
258
+ max_model_len=max_model_len,
259
+ gpu_memory_utilization=gpu_memory_utilization,
260
+ limit_mm_per_prompt={"image": 1},
261
+ )
262
+
263
+ sampling_params = SamplingParams(
264
+ temperature=0.0, # Deterministic for OCR
265
+ max_tokens=max_tokens,
266
+ )
267
+
268
+ # Process images in batches
269
+ all_markdown = []
270
+
271
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
272
+
273
+ # Process in batches to avoid memory issues
274
+ for batch_indices in tqdm(
275
+ partition_all(batch_size, range(len(dataset))),
276
+ total=(len(dataset) + batch_size - 1) // batch_size,
277
+ desc="OCR processing",
278
+ ):
279
+ batch_indices = list(batch_indices)
280
+ batch_images = [dataset[i][image_column] for i in batch_indices]
281
+
282
+ try:
283
+ # Create messages for batch
284
+ batch_messages = [make_ocr_message(img) for img in batch_images]
285
+
286
+ # Process with vLLM
287
+ outputs = llm.chat(batch_messages, sampling_params)
288
+
289
+ # Extract markdown from outputs
290
+ for output in outputs:
291
+ markdown_text = output.outputs[0].text.strip()
292
+ all_markdown.append(markdown_text)
293
+
294
+ except Exception as e:
295
+ logger.error(f"Error processing batch: {e}")
296
+ # Add error placeholders for failed batch
297
+ all_markdown.extend(["[OCR FAILED]"] * len(batch_images))
298
+
299
+ # Add markdown column to dataset
300
+ logger.info("Adding markdown column to dataset")
301
+ dataset = dataset.add_column("markdown", all_markdown)
302
+
303
+ # Handle inference_info tracking
304
+ logger.info("Updating inference_info...")
305
+
306
+ # Check for existing inference_info
307
+ if "inference_info" in dataset.column_names:
308
+ # Parse existing info from first row (all rows have same info)
309
+ try:
310
+ existing_info = json.loads(dataset[0]["inference_info"])
311
+ if not isinstance(existing_info, list):
312
+ existing_info = [existing_info] # Convert old format to list
313
+ except (json.JSONDecodeError, TypeError):
314
+ existing_info = []
315
+ # Remove old column to update it
316
+ dataset = dataset.remove_columns(["inference_info"])
317
+ else:
318
+ existing_info = []
319
+
320
+ # Add new inference info
321
+ new_info = {
322
+ "column_name": "markdown",
323
+ "model_id": model,
324
+ "processing_date": datetime.now().isoformat(),
325
+ "batch_size": batch_size,
326
+ "max_tokens": max_tokens,
327
+ "gpu_memory_utilization": gpu_memory_utilization,
328
+ "max_model_len": max_model_len,
329
+ "script": "nanonets-ocr.py",
330
+ "script_version": "1.0.0",
331
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py"
332
+ }
333
+ existing_info.append(new_info)
334
+
335
+ # Add updated inference_info column
336
+ info_json = json.dumps(existing_info, ensure_ascii=False)
337
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
338
+
339
+ # Push to hub
340
+ logger.info(f"Pushing to {output_dataset}")
341
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
342
+
343
+ # Calculate processing time
344
+ end_time = datetime.now()
345
+ processing_duration = end_time - start_time
346
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
347
+
348
+ # Create and push dataset card
349
+ logger.info("Creating dataset card...")
350
+ card_content = create_dataset_card(
351
+ source_dataset=input_dataset,
352
+ model=model,
353
+ num_samples=len(dataset),
354
+ processing_time=processing_time,
355
+ batch_size=batch_size,
356
+ max_model_len=max_model_len,
357
+ max_tokens=max_tokens,
358
+ gpu_memory_utilization=gpu_memory_utilization,
359
+ image_column=image_column,
360
+ split=split,
361
+ )
362
+
363
+ card = DatasetCard(card_content)
364
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
365
+ logger.info("✅ Dataset card created and pushed!")
366
+
367
+ logger.info("✅ OCR conversion complete!")
368
+ logger.info(
369
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
370
+ )
371
+
372
+
373
+ if __name__ == "__main__":
374
+ # Show example usage if no arguments
375
+ if len(sys.argv) == 1:
376
+ print("=" * 80)
377
+ print("Nanonets OCR to Markdown Converter")
378
+ print("=" * 80)
379
+ print("\nThis script converts document images to structured markdown using")
380
+ print("the Nanonets-OCR-s model with vLLM acceleration.")
381
+ print("\nFeatures:")
382
+ print("- LaTeX equation recognition")
383
+ print("- Table extraction and formatting")
384
+ print("- Document structure preservation")
385
+ print("- Signature and watermark detection")
386
+ print("\nExample usage:")
387
+ print("\n1. Basic OCR conversion:")
388
+ print(" uv run nanonets-ocr.py document-images markdown-docs")
389
+ print("\n2. With custom settings:")
390
+ print(" uv run nanonets-ocr.py scanned-pdfs extracted-text \\")
391
+ print(" --image-column page \\")
392
+ print(" --batch-size 16 \\")
393
+ print(" --gpu-memory-utilization 0.8")
394
+ print("\n3. Process a subset for testing:")
395
+ print(" uv run nanonets-ocr.py large-dataset test-output --max-samples 10")
396
+ print("\n4. Random sample from ordered dataset:")
397
+ print(" uv run nanonets-ocr.py ordered-dataset random-test --max-samples 50 --shuffle")
398
+ print("\n5. Running on HF Jobs:")
399
+ print(" hfjobs run \\")
400
+ print(" --flavor l4x1 \\")
401
+ print(" --secret HF_TOKEN=... \\")
402
+ print(
403
+ " uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \\"
404
+ )
405
+ print(" your-document-dataset \\")
406
+ print(" your-markdown-output")
407
+ print("\n" + "=" * 80)
408
+ print("\nFor full help, run: uv run nanonets-ocr.py --help")
409
+ sys.exit(0)
410
+
411
+ parser = argparse.ArgumentParser(
412
+ description="OCR images to markdown using Nanonets-OCR-s",
413
+ formatter_class=argparse.RawDescriptionHelpFormatter,
414
+ epilog="""
415
+ Examples:
416
+ # Basic usage
417
+ uv run nanonets-ocr.py my-images-dataset ocr-results
418
+
419
+ # With specific image column
420
+ uv run nanonets-ocr.py documents extracted-text --image-column scan
421
+
422
+ # Process subset for testing
423
+ uv run nanonets-ocr.py large-dataset test-output --max-samples 100
424
+
425
+ # Random sample from ordered dataset
426
+ uv run nanonets-ocr.py ordered-dataset random-sample --max-samples 50 --shuffle
427
+ """,
428
+ )
429
+
430
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
431
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
432
+ parser.add_argument(
433
+ "--image-column",
434
+ default="image",
435
+ help="Column containing images (default: image)",
436
+ )
437
+ parser.add_argument(
438
+ "--batch-size",
439
+ type=int,
440
+ default=32,
441
+ help="Batch size for processing (default: 32)",
442
+ )
443
+ parser.add_argument(
444
+ "--model",
445
+ default="nanonets/Nanonets-OCR-s",
446
+ help="Model to use (default: nanonets/Nanonets-OCR-s)",
447
+ )
448
+ parser.add_argument(
449
+ "--max-model-len",
450
+ type=int,
451
+ default=8192,
452
+ help="Maximum model context length (default: 8192)",
453
+ )
454
+ parser.add_argument(
455
+ "--max-tokens",
456
+ type=int,
457
+ default=4096,
458
+ help="Maximum tokens to generate (default: 4096)",
459
+ )
460
+ parser.add_argument(
461
+ "--gpu-memory-utilization",
462
+ type=float,
463
+ default=0.8,
464
+ help="GPU memory utilization (default: 0.8)",
465
+ )
466
+ parser.add_argument("--hf-token", help="Hugging Face API token")
467
+ parser.add_argument(
468
+ "--split", default="train", help="Dataset split to use (default: train)"
469
+ )
470
+ parser.add_argument(
471
+ "--max-samples",
472
+ type=int,
473
+ help="Maximum number of samples to process (for testing)",
474
+ )
475
+ parser.add_argument(
476
+ "--private", action="store_true", help="Make output dataset private"
477
+ )
478
+ parser.add_argument(
479
+ "--shuffle",
480
+ action="store_true",
481
+ help="Shuffle the dataset before processing (useful for random sampling)",
482
+ )
483
+ parser.add_argument(
484
+ "--seed",
485
+ type=int,
486
+ default=42,
487
+ help="Random seed for shuffling (default: 42)",
488
+ )
489
+
490
+ args = parser.parse_args()
491
+
492
+ main(
493
+ input_dataset=args.input_dataset,
494
+ output_dataset=args.output_dataset,
495
+ image_column=args.image_column,
496
+ batch_size=args.batch_size,
497
+ model=args.model,
498
+ max_model_len=args.max_model_len,
499
+ max_tokens=args.max_tokens,
500
+ gpu_memory_utilization=args.gpu_memory_utilization,
501
+ hf_token=args.hf_token,
502
+ split=args.split,
503
+ max_samples=args.max_samples,
504
+ private=args.private,
505
+ shuffle=args.shuffle,
506
+ seed=args.seed,
507
+ )
nanonets-ocr2.py ADDED
@@ -0,0 +1,514 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # ]
12
+ #
13
+ # ///
14
+
15
+ """
16
+ Convert document images to markdown using Nanonets-OCR2-3B with vLLM.
17
+
18
+ This script processes images through the Nanonets-OCR2-3B model (3.75B params)
19
+ to extract text and structure as markdown, ideal for document understanding tasks.
20
+
21
+ Features:
22
+ - LaTeX equation recognition
23
+ - Table extraction and formatting (HTML)
24
+ - Document structure preservation
25
+ - Image descriptions and captions
26
+ - Signature and watermark detection
27
+ - Checkbox recognition
28
+ - Multilingual support
29
+ """
30
+
31
+ import argparse
32
+ import base64
33
+ import io
34
+ import json
35
+ import logging
36
+ import os
37
+ import sys
38
+ from typing import Any, Dict, List, Union
39
+ from datetime import datetime
40
+
41
+ import torch
42
+ from datasets import load_dataset
43
+ from huggingface_hub import DatasetCard, login
44
+ from PIL import Image
45
+ from toolz import partition_all
46
+ from tqdm.auto import tqdm
47
+ from vllm import LLM, SamplingParams
48
+
49
+ logging.basicConfig(level=logging.INFO)
50
+ logger = logging.getLogger(__name__)
51
+
52
+
53
+ def check_cuda_availability():
54
+ """Check if CUDA is available and exit if not."""
55
+ if not torch.cuda.is_available():
56
+ logger.error("CUDA is not available. This script requires a GPU.")
57
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
58
+ sys.exit(1)
59
+ else:
60
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
61
+
62
+
63
+ def make_ocr_message(
64
+ image: Union[Image.Image, Dict[str, Any], str],
65
+ prompt: str = "Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes.",
66
+ ) -> List[Dict]:
67
+ """Create chat message for OCR processing."""
68
+ # Convert to PIL Image if needed
69
+ if isinstance(image, Image.Image):
70
+ pil_img = image
71
+ elif isinstance(image, dict) and "bytes" in image:
72
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
73
+ elif isinstance(image, str):
74
+ pil_img = Image.open(image)
75
+ else:
76
+ raise ValueError(f"Unsupported image type: {type(image)}")
77
+
78
+ # Convert to base64 data URI
79
+ buf = io.BytesIO()
80
+ pil_img.save(buf, format="PNG")
81
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
82
+
83
+ # Return message in vLLM format
84
+ return [
85
+ {
86
+ "role": "user",
87
+ "content": [
88
+ {"type": "image_url", "image_url": {"url": data_uri}},
89
+ {"type": "text", "text": prompt},
90
+ ],
91
+ }
92
+ ]
93
+
94
+
95
+ def create_dataset_card(
96
+ source_dataset: str,
97
+ model: str,
98
+ num_samples: int,
99
+ processing_time: str,
100
+ batch_size: int,
101
+ max_model_len: int,
102
+ max_tokens: int,
103
+ gpu_memory_utilization: float,
104
+ image_column: str = "image",
105
+ split: str = "train",
106
+ ) -> str:
107
+ """Create a dataset card documenting the OCR process."""
108
+ model_name = model.split("/")[-1]
109
+
110
+ return f"""---
111
+ tags:
112
+ - ocr
113
+ - document-processing
114
+ - nanonets
115
+ - nanonets-ocr2
116
+ - markdown
117
+ - uv-script
118
+ - generated
119
+ ---
120
+
121
+ # Document OCR using {model_name}
122
+
123
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using Nanonets-OCR2-3B.
124
+
125
+ ## Processing Details
126
+
127
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
128
+ - **Model**: [{model}](https://huggingface.co/{model})
129
+ - **Model Size**: 3.75B parameters
130
+ - **Number of Samples**: {num_samples:,}
131
+ - **Processing Time**: {processing_time}
132
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
133
+
134
+ ### Configuration
135
+
136
+ - **Image Column**: `{image_column}`
137
+ - **Output Column**: `markdown`
138
+ - **Dataset Split**: `{split}`
139
+ - **Batch Size**: {batch_size}
140
+ - **Max Model Length**: {max_model_len:,} tokens
141
+ - **Max Output Tokens**: {max_tokens:,}
142
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
143
+
144
+ ## Model Information
145
+
146
+ Nanonets-OCR2-3B is a state-of-the-art document OCR model that excels at:
147
+ - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
148
+ - 📊 **Tables** - Extracted and formatted as HTML
149
+ - 📝 **Document structure** - Headers, lists, and formatting maintained
150
+ - 🖼️ **Images** - Captions and descriptions included in `<img>` tags
151
+ - ☑️ **Forms** - Checkboxes rendered as ☐/☑
152
+ - 🔖 **Watermarks** - Wrapped in `<watermark>` tags
153
+ - 📄 **Page numbers** - Wrapped in `<page_number>` tags
154
+ - 🌍 **Multilingual** - Supports multiple languages
155
+
156
+ ## Dataset Structure
157
+
158
+ The dataset contains all original columns plus:
159
+ - `markdown`: The extracted text in markdown format with preserved structure
160
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
161
+
162
+ ## Usage
163
+
164
+ ```python
165
+ from datasets import load_dataset
166
+ import json
167
+
168
+ # Load the dataset
169
+ dataset = load_dataset("{{{{output_dataset_id}}}}", split="{split}")
170
+
171
+ # Access the markdown text
172
+ for example in dataset:
173
+ print(example["markdown"])
174
+ break
175
+
176
+ # View all OCR models applied to this dataset
177
+ inference_info = json.loads(dataset[0]["inference_info"])
178
+ for info in inference_info:
179
+ print(f"Column: {{{{info['column_name']}}}} - Model: {{{{info['model_id']}}}}")
180
+ ```
181
+
182
+ ## Reproduction
183
+
184
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) Nanonets OCR2 script:
185
+
186
+ ```bash
187
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \\
188
+ {source_dataset} \\
189
+ <output-dataset> \\
190
+ --model {model} \\
191
+ --image-column {image_column} \\
192
+ --batch-size {batch_size} \\
193
+ --max-model-len {max_model_len} \\
194
+ --max-tokens {max_tokens} \\
195
+ --gpu-memory-utilization {gpu_memory_utilization}
196
+ ```
197
+
198
+ ## Performance
199
+
200
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
201
+ - **GPU Configuration**: vLLM with {gpu_memory_utilization:.0%} GPU memory utilization
202
+
203
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
204
+ """
205
+
206
+
207
+ def main(
208
+ input_dataset: str,
209
+ output_dataset: str,
210
+ image_column: str = "image",
211
+ batch_size: int = 16,
212
+ model: str = "nanonets/Nanonets-OCR2-3B",
213
+ max_model_len: int = 8192,
214
+ max_tokens: int = 4096,
215
+ gpu_memory_utilization: float = 0.8,
216
+ hf_token: str = None,
217
+ split: str = "train",
218
+ max_samples: int = None,
219
+ private: bool = False,
220
+ shuffle: bool = False,
221
+ seed: int = 42,
222
+ ):
223
+ """Process images from HF dataset through Nanonets-OCR2-3B model."""
224
+
225
+ # Check CUDA availability first
226
+ check_cuda_availability()
227
+
228
+ # Track processing start time
229
+ start_time = datetime.now()
230
+
231
+ # Enable HF_TRANSFER for faster downloads
232
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
233
+
234
+ # Login to HF if token provided
235
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
236
+ if HF_TOKEN:
237
+ login(token=HF_TOKEN)
238
+
239
+ # Load dataset
240
+ logger.info(f"Loading dataset: {input_dataset}")
241
+ dataset = load_dataset(input_dataset, split=split)
242
+
243
+ # Validate image column
244
+ if image_column not in dataset.column_names:
245
+ raise ValueError(
246
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
247
+ )
248
+
249
+ # Shuffle if requested
250
+ if shuffle:
251
+ logger.info(f"Shuffling dataset with seed {seed}")
252
+ dataset = dataset.shuffle(seed=seed)
253
+
254
+ # Limit samples if requested
255
+ if max_samples:
256
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
257
+ logger.info(f"Limited to {len(dataset)} samples")
258
+
259
+ # Initialize vLLM
260
+ logger.info(f"Initializing vLLM with model: {model}")
261
+ llm = LLM(
262
+ model=model,
263
+ trust_remote_code=True,
264
+ max_model_len=max_model_len,
265
+ gpu_memory_utilization=gpu_memory_utilization,
266
+ limit_mm_per_prompt={"image": 1},
267
+ )
268
+
269
+ sampling_params = SamplingParams(
270
+ temperature=0.0, # Deterministic for OCR
271
+ max_tokens=max_tokens,
272
+ )
273
+
274
+ # Process images in batches
275
+ all_markdown = []
276
+
277
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
278
+
279
+ # Process in batches to avoid memory issues
280
+ for batch_indices in tqdm(
281
+ partition_all(batch_size, range(len(dataset))),
282
+ total=(len(dataset) + batch_size - 1) // batch_size,
283
+ desc="OCR processing",
284
+ ):
285
+ batch_indices = list(batch_indices)
286
+ batch_images = [dataset[i][image_column] for i in batch_indices]
287
+
288
+ try:
289
+ # Create messages for batch
290
+ batch_messages = [make_ocr_message(img) for img in batch_images]
291
+
292
+ # Process with vLLM
293
+ outputs = llm.chat(batch_messages, sampling_params)
294
+
295
+ # Extract markdown from outputs
296
+ for output in outputs:
297
+ markdown_text = output.outputs[0].text.strip()
298
+ all_markdown.append(markdown_text)
299
+
300
+ except Exception as e:
301
+ logger.error(f"Error processing batch: {e}")
302
+ # Add error placeholders for failed batch
303
+ all_markdown.extend(["[OCR FAILED]"] * len(batch_images))
304
+
305
+ # Add markdown column to dataset
306
+ logger.info("Adding markdown column to dataset")
307
+ dataset = dataset.add_column("markdown", all_markdown)
308
+
309
+ # Handle inference_info tracking
310
+ logger.info("Updating inference_info...")
311
+
312
+ # Check for existing inference_info
313
+ if "inference_info" in dataset.column_names:
314
+ # Parse existing info from first row (all rows have same info)
315
+ try:
316
+ existing_info = json.loads(dataset[0]["inference_info"])
317
+ if not isinstance(existing_info, list):
318
+ existing_info = [existing_info] # Convert old format to list
319
+ except (json.JSONDecodeError, TypeError):
320
+ existing_info = []
321
+ # Remove old column to update it
322
+ dataset = dataset.remove_columns(["inference_info"])
323
+ else:
324
+ existing_info = []
325
+
326
+ # Add new inference info
327
+ new_info = {
328
+ "column_name": "markdown",
329
+ "model_id": model,
330
+ "processing_date": datetime.now().isoformat(),
331
+ "batch_size": batch_size,
332
+ "max_tokens": max_tokens,
333
+ "gpu_memory_utilization": gpu_memory_utilization,
334
+ "max_model_len": max_model_len,
335
+ "script": "nanonets-ocr2.py",
336
+ "script_version": "1.0.0",
337
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py"
338
+ }
339
+ existing_info.append(new_info)
340
+
341
+ # Add updated inference_info column
342
+ info_json = json.dumps(existing_info, ensure_ascii=False)
343
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
344
+
345
+ # Push to hub
346
+ logger.info(f"Pushing to {output_dataset}")
347
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
348
+
349
+ # Calculate processing time
350
+ end_time = datetime.now()
351
+ processing_duration = end_time - start_time
352
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
353
+
354
+ # Create and push dataset card
355
+ logger.info("Creating dataset card...")
356
+ card_content = create_dataset_card(
357
+ source_dataset=input_dataset,
358
+ model=model,
359
+ num_samples=len(dataset),
360
+ processing_time=processing_time,
361
+ batch_size=batch_size,
362
+ max_model_len=max_model_len,
363
+ max_tokens=max_tokens,
364
+ gpu_memory_utilization=gpu_memory_utilization,
365
+ image_column=image_column,
366
+ split=split,
367
+ )
368
+
369
+ card = DatasetCard(card_content)
370
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
371
+ logger.info("✅ Dataset card created and pushed!")
372
+
373
+ logger.info("✅ OCR conversion complete!")
374
+ logger.info(
375
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
376
+ )
377
+
378
+
379
+ if __name__ == "__main__":
380
+ # Show example usage if no arguments
381
+ if len(sys.argv) == 1:
382
+ print("=" * 80)
383
+ print("Nanonets OCR2-3B to Markdown Converter")
384
+ print("=" * 80)
385
+ print("\nThis script converts document images to structured markdown using")
386
+ print("the Nanonets-OCR2-3B model (3.75B params) with vLLM acceleration.")
387
+ print("\nFeatures:")
388
+ print("- LaTeX equation recognition")
389
+ print("- Table extraction and formatting (HTML)")
390
+ print("- Document structure preservation")
391
+ print("- Image descriptions and captions")
392
+ print("- Signature and watermark detection")
393
+ print("- Checkbox recognition (☐/☑)")
394
+ print("- Multilingual support")
395
+ print("\nExample usage:")
396
+ print("\n1. Basic OCR conversion:")
397
+ print(" uv run nanonets-ocr2.py document-images markdown-docs")
398
+ print("\n2. With custom settings:")
399
+ print(" uv run nanonets-ocr2.py scanned-pdfs extracted-text \\")
400
+ print(" --image-column page \\")
401
+ print(" --batch-size 32 \\")
402
+ print(" --gpu-memory-utilization 0.8")
403
+ print("\n3. Process a subset for testing:")
404
+ print(" uv run nanonets-ocr2.py large-dataset test-output --max-samples 10")
405
+ print("\n4. Random sample from ordered dataset:")
406
+ print(" uv run nanonets-ocr2.py ordered-dataset random-test \\")
407
+ print(" --max-samples 50 --shuffle")
408
+ print("\n5. Running on HF Jobs:")
409
+ print(" hf jobs uv run --flavor l4x1 \\")
410
+ print(" -e HF_TOKEN=$(python3 -c \"from huggingface_hub import get_token; print(get_token())\") \\")
411
+ print(" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \\")
412
+ print(" your-document-dataset \\")
413
+ print(" your-markdown-output")
414
+ print("\n" + "=" * 80)
415
+ print("\nFor full help, run: uv run nanonets-ocr2.py --help")
416
+ sys.exit(0)
417
+
418
+ parser = argparse.ArgumentParser(
419
+ description="OCR images to markdown using Nanonets-OCR2-3B",
420
+ formatter_class=argparse.RawDescriptionHelpFormatter,
421
+ epilog="""
422
+ Examples:
423
+ # Basic usage
424
+ uv run nanonets-ocr2.py my-images-dataset ocr-results
425
+
426
+ # With specific image column
427
+ uv run nanonets-ocr2.py documents extracted-text --image-column scan
428
+
429
+ # Process subset for testing
430
+ uv run nanonets-ocr2.py large-dataset test-output --max-samples 100
431
+
432
+ # Random sample from ordered dataset
433
+ uv run nanonets-ocr2.py ordered-dataset random-sample --max-samples 50 --shuffle
434
+ """,
435
+ )
436
+
437
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
438
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
439
+ parser.add_argument(
440
+ "--image-column",
441
+ default="image",
442
+ help="Column containing images (default: image)",
443
+ )
444
+ parser.add_argument(
445
+ "--batch-size",
446
+ type=int,
447
+ default=16,
448
+ help="Batch size for processing (default: 16)",
449
+ )
450
+ parser.add_argument(
451
+ "--model",
452
+ default="nanonets/Nanonets-OCR2-3B",
453
+ help="Model to use (default: nanonets/Nanonets-OCR2-3B)",
454
+ )
455
+ parser.add_argument(
456
+ "--max-model-len",
457
+ type=int,
458
+ default=8192,
459
+ help="Maximum model context length (default: 8192)",
460
+ )
461
+ parser.add_argument(
462
+ "--max-tokens",
463
+ type=int,
464
+ default=4096,
465
+ help="Maximum tokens to generate (default: 4096)",
466
+ )
467
+ parser.add_argument(
468
+ "--gpu-memory-utilization",
469
+ type=float,
470
+ default=0.8,
471
+ help="GPU memory utilization (default: 0.8)",
472
+ )
473
+ parser.add_argument("--hf-token", help="Hugging Face API token")
474
+ parser.add_argument(
475
+ "--split", default="train", help="Dataset split to use (default: train)"
476
+ )
477
+ parser.add_argument(
478
+ "--max-samples",
479
+ type=int,
480
+ help="Maximum number of samples to process (for testing)",
481
+ )
482
+ parser.add_argument(
483
+ "--private", action="store_true", help="Make output dataset private"
484
+ )
485
+ parser.add_argument(
486
+ "--shuffle",
487
+ action="store_true",
488
+ help="Shuffle the dataset before processing (useful for random sampling)",
489
+ )
490
+ parser.add_argument(
491
+ "--seed",
492
+ type=int,
493
+ default=42,
494
+ help="Random seed for shuffling (default: 42)",
495
+ )
496
+
497
+ args = parser.parse_args()
498
+
499
+ main(
500
+ input_dataset=args.input_dataset,
501
+ output_dataset=args.output_dataset,
502
+ image_column=args.image_column,
503
+ batch_size=args.batch_size,
504
+ model=args.model,
505
+ max_model_len=args.max_model_len,
506
+ max_tokens=args.max_tokens,
507
+ gpu_memory_utilization=args.gpu_memory_utilization,
508
+ hf_token=args.hf_token,
509
+ split=args.split,
510
+ max_samples=args.max_samples,
511
+ private=args.private,
512
+ shuffle=args.shuffle,
513
+ seed=args.seed,
514
+ )
numarkdown-ocr.py ADDED
@@ -0,0 +1,683 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch", # Added for CUDA check
11
+ # ]
12
+ #
13
+ # ///
14
+
15
+ """
16
+ Convert document images to markdown using NuMarkdown-8B-Thinking with vLLM.
17
+
18
+ This script processes images through the NuMarkdown model to extract
19
+ text with advanced reasoning capabilities, ideal for complex document understanding.
20
+
21
+ Features:
22
+ - Reasoning-based document analysis with thinking tokens
23
+ - Superior table extraction and formatting
24
+ - Complex layout understanding
25
+ - Mathematical formula recognition
26
+ - Clean markdown output generation
27
+ - Optional thinking trace inclusion
28
+ - Multi-GPU support with automatic detection
29
+ - Optimized token budget for reasoning models
30
+ """
31
+
32
+ import argparse
33
+ import base64
34
+ import io
35
+ import json
36
+ import logging
37
+ import os
38
+ import re
39
+ import sys
40
+ from typing import Any, Dict, List, Optional, Union
41
+ from datetime import datetime
42
+
43
+ import torch
44
+ from torch import cuda
45
+ from datasets import load_dataset
46
+ from huggingface_hub import DatasetCard, HfApi, login
47
+ from PIL import Image
48
+ from toolz import partition_all
49
+ from tqdm.auto import tqdm
50
+ from vllm import LLM, SamplingParams
51
+
52
+ logging.basicConfig(level=logging.INFO)
53
+ logger = logging.getLogger(__name__)
54
+
55
+
56
+ def check_gpu_availability() -> int:
57
+ """Check if CUDA is available and return the number of GPUs."""
58
+ if not cuda.is_available():
59
+ logger.error("CUDA is not available. This script requires a GPU.")
60
+ logger.error("Please run on a machine with NVIDIA GPU or use HF Jobs with GPU flavor.")
61
+ sys.exit(1)
62
+
63
+ num_gpus = cuda.device_count()
64
+ for i in range(num_gpus):
65
+ gpu_name = cuda.get_device_name(i)
66
+ gpu_memory = cuda.get_device_properties(i).total_memory / 1024**3
67
+ logger.info(f"GPU {i}: {gpu_name} with {gpu_memory:.1f} GB memory")
68
+
69
+ return num_gpus
70
+
71
+
72
+ def validate_and_resize_image(
73
+ image: Image.Image,
74
+ min_pixels: int = 100 * 28 * 28,
75
+ max_pixels: int = 5000 * 28 * 28,
76
+ ) -> Image.Image:
77
+ """Validate and resize image to meet pixel constraints if necessary."""
78
+ width, height = image.size
79
+ total_pixels = width * height
80
+
81
+ if total_pixels < min_pixels or total_pixels > max_pixels:
82
+ # Calculate scaling factor
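+ # (pixel count scales with the square of the linear size, hence the sqrt)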
83
+ if total_pixels < min_pixels:
84
+ scale = (min_pixels / total_pixels) ** 0.5
85
+ else:
86
+ scale = (max_pixels / total_pixels) ** 0.5
87
+
88
+ new_width = int(width * scale)
89
+ new_height = int(height * scale)
90
+
91
+ logger.debug(f"Resizing image from {width}x{height} to {new_width}x{new_height}")
92
+ image = image.resize((new_width, new_height), Image.Resampling.LANCZOS)
93
+
94
+ return image
95
+
96
+
97
+ def extract_answer_from_thinking(text: str, include_thinking: bool = False) -> str:
98
+ """
99
+ Extract the final answer from NuMarkdown's thinking output.
100
+
101
+ The model generates output in format:
102
+ <think>reasoning process...</think>
103
+ <answer>final markdown output</answer>
104
+ """
105
+ if include_thinking:
106
+ # Return the full output including thinking traces
107
+ return text.strip()
108
+
109
+ # Extract content between <answer> tags
110
+ answer_pattern = r'<answer>(.*?)</answer>'
111
+ answer_match = re.search(answer_pattern, text, re.DOTALL)
112
+
113
+ if answer_match:
114
+ return answer_match.group(1).strip()
115
+
116
+ # If no answer tags found, check if the entire text is markdown
117
+ # (sometimes the model might not use tags)
118
+ if not '<think>' in text and not '<answer>' in text:
119
+ return text.strip()
120
+
121
+ # Fallback: return everything after </think> if present
122
+ think_end = text.find('</think>')
123
+ if think_end != -1:
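+ # Skip past the closing tag: len('</think>') == 8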
124
+ remaining = text[think_end + 8:].strip()
125
+ # Remove <answer> tags if present
126
+ remaining = remaining.replace('<answer>', '').replace('</answer>', '').strip()
127
+ return remaining
128
+
129
+ # Last resort: return the full text
130
+ logger.warning("Could not extract answer from thinking tokens, returning full text")
131
+ return text.strip()
132
+
133
+
134
+ def make_numarkdown_message(
135
+ image: Union[Image.Image, Dict[str, Any], str],
136
+ prompt: str = "Convert this document to markdown. Focus on preserving structure, tables, formulas, and all textual content.",
137
+ ) -> List[Dict]:
138
+ """Create chat message for NuMarkdown processing."""
139
+ # Convert to PIL Image if needed
140
+ if isinstance(image, Image.Image):
141
+ pil_img = image.convert("RGB")
142
+ elif isinstance(image, dict) and "bytes" in image:
143
+ pil_img = Image.open(io.BytesIO(image["bytes"])).convert("RGB")
144
+ elif isinstance(image, str):
145
+ pil_img = Image.open(image).convert("RGB")
146
+ else:
147
+ raise ValueError(f"Unsupported image type: {type(image)}")
148
+
149
+ # Validate and resize if necessary
150
+ pil_img = validate_and_resize_image(pil_img)
151
+
152
+ # Convert to base64 data URI
153
+ buf = io.BytesIO()
154
+ pil_img.save(buf, format="PNG")
155
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
156
+
157
+ # Return message in vLLM chat format
158
+ return [
159
+ {
160
+ "role": "user",
161
+ "content": [
162
+ {"type": "image_url", "image_url": {"url": data_uri}},
163
+ {"type": "text", "text": prompt},
164
+ ],
165
+ }
166
+ ]
167
+
168
+
169
+ def create_dataset_card(
170
+ source_dataset: str,
171
+ model: str,
172
+ num_samples: int,
173
+ processing_time: str,
174
+ batch_size: int,
175
+ max_model_len: int,
176
+ max_tokens: int,
177
+ gpu_memory_utilization: float,
178
+ include_thinking: bool,
179
+ tensor_parallel_size: int,
180
+ image_column: str = "image",
181
+ split: str = "train",
182
+ ) -> str:
183
+ """Create a dataset card documenting the OCR process."""
184
+ model_name = model.split("/")[-1]
185
+
186
+ return f"""---
187
+ tags:
188
+ - ocr
189
+ - document-processing
190
+ - numarkdown
191
+ - markdown
192
+ - reasoning
193
+ - thinking-tokens
194
+ - uv-script
195
+ - generated
196
+ ---
197
+
198
+ # Document OCR using {model_name}
199
+
200
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using NuMarkdown-8B-Thinking.
201
+
202
+ ## Processing Details
203
+
204
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
205
+ - **Model**: [{model}](https://huggingface.co/{model})
206
+ - **Number of Samples**: {num_samples:,}
207
+ - **Processing Time**: {processing_time}
208
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
209
+
210
+ ### Configuration
211
+
212
+ - **Image Column**: `{image_column}`
213
+ - **Output Column**: `markdown`
214
+ - **Dataset Split**: `{split}`
215
+ - **Batch Size**: {batch_size}
216
+ - **Max Model Length**: {max_model_len:,} tokens
217
+ - **Max Output Tokens**: {max_tokens:,}
218
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
219
+ - **Tensor Parallel Size**: {tensor_parallel_size} GPU(s)
220
+ - **Thinking Traces**: {"Included" if include_thinking else "Excluded (only final answers)"}
221
+
222
+ ## Model Information
223
+
224
+ NuMarkdown-8B-Thinking is a state-of-the-art reasoning-based document OCR model that excels at:
225
+ - 🧠 **Reasoning Process** - Analyzes document layout before generation
226
+ - 📊 **Complex Tables** - Superior table extraction and formatting
227
+ - 📐 **Mathematical Formulas** - Accurate LaTeX/math notation preservation
228
+ - 📝 **Document Structure** - Maintains hierarchical document organization
229
+ - 🔍 **Layout Analysis** - Understands complex multi-column layouts
230
+ - ✨ **Clean Output** - Generates well-formatted markdown
231
+
232
+ ### Thinking Tokens
233
+
234
+ This model uses a unique "thinking" process where it:
235
+ 1. Analyzes the document structure internally (`<think>` phase)
236
+ 2. Generates the final markdown output (`<answer>` phase)
237
+
238
+ {"The dataset includes both thinking traces and final answers." if include_thinking else "Only the final answers are included (thinking traces removed)."}
239
+
240
+ ## Dataset Structure
241
+
242
+ The dataset contains all original columns plus:
243
+ - `markdown`: The extracted text in markdown format
244
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
245
+
246
+ ## Usage
247
+
248
+ ```python
249
+ from datasets import load_dataset
250
+ import json
251
+
252
+ # Load the dataset
253
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
254
+
255
+ # Access the markdown text
256
+ for example in dataset:
257
+ print(example["markdown"])
258
+ break
259
+
260
+ # View all OCR models applied to this dataset
261
+ inference_info = json.loads(dataset[0]["inference_info"])
262
+ for info in inference_info:
263
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
264
+ ```
265
+
266
+ ## Reproduction
267
+
268
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) NuMarkdown OCR script:
269
+
270
+ ```bash
271
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \\
272
+ {source_dataset} \\
273
+ <output-dataset> \\
274
+ --image-column {image_column} \\
275
+ --batch-size {batch_size} \\
276
+ --max-model-len {max_model_len} \\
277
+ --max-tokens {max_tokens} \\
278
+ --gpu-memory-utilization {gpu_memory_utilization} \\
279
+ {"--include-thinking" if include_thinking else ""}
280
+ ```
281
+
282
+ ## Performance
283
+
284
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
285
+ - **GPU Configuration**: {tensor_parallel_size} GPU(s) with {gpu_memory_utilization:.0%} memory utilization
286
+ - **Model Size**: 8.29B parameters
287
+
288
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
289
+ """
290
+
291
+
292
+ def main(
293
+ input_dataset: str,
294
+ output_dataset: str,
295
+ image_column: str = "image",
296
+ batch_size: int = 16,
297
+ model: str = "numind/NuMarkdown-8B-Thinking",
298
+ max_model_len: int = 16384,
299
+ max_tokens: int = 16384,
300
+ gpu_memory_utilization: float = 0.9,
301
+ tensor_parallel_size: Optional[int] = None,
302
+ hf_token: str = None,
303
+ split: str = "train",
304
+ max_samples: int = None,
305
+ private: bool = False,
306
+ shuffle: bool = False,
307
+ seed: int = 42,
308
+ include_thinking: bool = False,
309
+ temperature: float = 0.0,
310
+ custom_prompt: Optional[str] = None,
311
+ ):
312
+ """Process images from HF dataset through NuMarkdown model.
313
+
314
+ The max_tokens parameter controls the total token budget for both
315
+ thinking and answer phases. For complex documents with extensive
316
+ reasoning, the default of 16384 tokens provides ample room for both
317
+ the thinking process and the final markdown output.
318
+ """
319
+
320
+ # GPU check and configuration
321
+ num_gpus = check_gpu_availability()
322
+ if tensor_parallel_size is None:
323
+ tensor_parallel_size = num_gpus
324
+ logger.info(
325
+ f"Auto-detected {num_gpus} GPU(s), using tensor_parallel_size={tensor_parallel_size}"
326
+ )
327
+ else:
328
+ logger.info(f"Using specified tensor_parallel_size={tensor_parallel_size}")
329
+ if tensor_parallel_size > num_gpus:
330
+ logger.warning(
331
+ f"Requested {tensor_parallel_size} GPUs but only {num_gpus} available"
332
+ )
333
+
334
+ # Track processing start time
335
+ start_time = datetime.now()
336
+
337
+ # Enable HF_TRANSFER for faster downloads
338
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
339
+
340
+ # Login to HF if token provided
341
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
342
+ if HF_TOKEN:
343
+ login(token=HF_TOKEN)
344
+
345
+ # Load dataset
346
+ logger.info(f"Loading dataset: {input_dataset}")
347
+ dataset = load_dataset(input_dataset, split=split)
348
+
349
+ # Validate image column
350
+ if image_column not in dataset.column_names:
351
+ raise ValueError(
352
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
353
+ )
354
+
355
+ # Shuffle if requested
356
+ if shuffle:
357
+ logger.info(f"Shuffling dataset with seed {seed}")
358
+ dataset = dataset.shuffle(seed=seed)
359
+
360
+ # Limit samples if requested
361
+ if max_samples:
362
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
363
+ logger.info(f"Limited to {len(dataset)} samples")
364
+
365
+ # Initialize vLLM with trust_remote_code for NuMarkdown
366
+ logger.info(f"Initializing vLLM with model: {model}")
367
+ logger.info(f"Using {tensor_parallel_size} GPU(s) for inference")
368
+ llm = LLM(
369
+ model=model,
370
+ trust_remote_code=True, # Required for NuMarkdown
371
+ max_model_len=max_model_len,
372
+ gpu_memory_utilization=gpu_memory_utilization,
373
+ tensor_parallel_size=tensor_parallel_size,
374
+ limit_mm_per_prompt={"image": 1},
375
+ )
376
+
377
+ # Set up sampling parameters
378
+ sampling_params = SamplingParams(
379
+ temperature=temperature,
380
+ max_tokens=max_tokens,
381
+ )
382
+
383
+ # Use custom prompt if provided, otherwise use default
384
+ prompt = custom_prompt or "Convert this document to markdown. Focus on preserving structure, tables, formulas, and all textual content."
385
+
386
+ # Process images in batches
387
+ all_markdown = []
388
+
389
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
390
+ logger.info(f"Including thinking traces: {include_thinking}")
391
+
392
+ # Process in batches to avoid memory issues
393
+ for batch_indices in tqdm(
394
+ partition_all(batch_size, range(len(dataset))),
395
+ total=(len(dataset) + batch_size - 1) // batch_size,
396
+ desc="OCR processing",
397
+ ):
398
+ batch_indices = list(batch_indices)
399
+ batch_images = [dataset[i][image_column] for i in batch_indices]
400
+
401
+ try:
402
+ # Create messages for batch
403
+ batch_messages = [
404
+ make_numarkdown_message(img, prompt) for img in batch_images
405
+ ]
406
+
407
+ # Process with vLLM
408
+ outputs = llm.chat(batch_messages, sampling_params)
409
+
410
+ # Extract markdown from outputs
411
+ for output in outputs:
412
+ raw_text = output.outputs[0].text.strip()
413
+ # Extract answer from thinking tokens
414
+ markdown_text = extract_answer_from_thinking(raw_text, include_thinking)
415
+ all_markdown.append(markdown_text)
416
+
417
+ except Exception as e:
418
+ logger.error(f"Error processing batch: {e}")
419
+ # Add error placeholders for failed batch
420
+ all_markdown.extend(["[OCR FAILED]"] * len(batch_images))
421
+
422
+ # Add markdown column to dataset
423
+ logger.info("Adding markdown column to dataset")
424
+ dataset = dataset.add_column("markdown", all_markdown)
425
+
426
+ # Handle inference_info tracking
427
+ logger.info("Updating inference_info...")
428
+
429
+ # Check for existing inference_info
430
+ if "inference_info" in dataset.column_names:
431
+ # Parse existing info from first row (all rows have same info)
432
+ try:
433
+ existing_info = json.loads(dataset[0]["inference_info"])
434
+ if not isinstance(existing_info, list):
435
+ existing_info = [existing_info] # Convert old format to list
436
+ except (json.JSONDecodeError, TypeError):
437
+ existing_info = []
438
+ # Remove old column to update it
439
+ dataset = dataset.remove_columns(["inference_info"])
440
+ else:
441
+ existing_info = []
442
+
443
+ # Add new inference info
444
+ new_info = {
445
+ "column_name": "markdown",
446
+ "model_id": model,
447
+ "processing_date": datetime.now().isoformat(),
448
+ "batch_size": batch_size,
449
+ "max_tokens": max_tokens,
450
+ "gpu_memory_utilization": gpu_memory_utilization,
451
+ "max_model_len": max_model_len,
452
+ "include_thinking": include_thinking,
453
+ "temperature": temperature,
454
+ "prompt": prompt,
455
+ "script": "numarkdown-ocr.py",
456
+ "script_version": "1.0.0",
457
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py"
458
+ }
459
+ existing_info.append(new_info)
460
+
461
+ # Add updated inference_info column
462
+ info_json = json.dumps(existing_info, ensure_ascii=False)
463
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
464
+
465
+ # Push to hub
466
+ logger.info(f"Pushing to {output_dataset}")
467
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
468
+
469
+ # Calculate processing time
470
+ end_time = datetime.now()
471
+ processing_duration = end_time - start_time
472
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
473
+
474
+ # Create and push dataset card
475
+ logger.info("Creating dataset card...")
476
+ card_content = create_dataset_card(
477
+ source_dataset=input_dataset,
478
+ model=model,
479
+ num_samples=len(dataset),
480
+ processing_time=processing_time,
481
+ batch_size=batch_size,
482
+ max_model_len=max_model_len,
483
+ max_tokens=max_tokens,
484
+ gpu_memory_utilization=gpu_memory_utilization,
485
+ include_thinking=include_thinking,
486
+ tensor_parallel_size=tensor_parallel_size,
487
+ image_column=image_column,
488
+ split=split,
489
+ )
490
+
491
+ # Handle dataset card push with proper repo_id
492
+ full_repo_id = output_dataset
493
+ try:
494
+ card = DatasetCard(card_content)
495
+ # If output_dataset doesn't contain a username, get the current user's name
496
+ if "/" not in output_dataset:
497
+ api = HfApi(token=HF_TOKEN)
498
+ user_info = api.whoami()
499
+ full_repo_id = f"{user_info['name']}/{output_dataset}"
500
+ logger.info(f"Using full repo ID: {full_repo_id}")
501
+
502
+ card.push_to_hub(full_repo_id, token=HF_TOKEN)
503
+ logger.info("✅ Dataset card created and pushed!")
504
+ except Exception as e:
505
+ logger.warning(f"Could not push dataset card: {e}")
506
+ logger.info("Dataset was successfully created but card upload failed. You can add it manually.")
507
+
508
+ logger.info("✅ OCR conversion complete!")
509
+ logger.info(
510
+ f"Dataset available at: https://huggingface.co/datasets/{full_repo_id}"
511
+ )
512
+
513
+
514
+ if __name__ == "__main__":
515
+ # Show example usage if no arguments
516
+ if len(sys.argv) == 1:
517
+ print("=" * 80)
518
+ print("NuMarkdown-8B-Thinking OCR with Reasoning")
519
+ print("=" * 80)
520
+ print("\nThis script converts document images to markdown using")
521
+ print("the NuMarkdown-8B-Thinking model with advanced reasoning capabilities.")
522
+ print("\nFeatures:")
523
+ print("- 🧠 Reasoning-based document analysis")
524
+ print("- 📊 Superior table extraction and formatting")
525
+ print("- 📐 Mathematical formula recognition")
526
+ print("- 📝 Complex layout understanding")
527
+ print("- ✨ Clean markdown generation")
528
+ print("- 🔍 Optional thinking trace inclusion")
529
+ print("\nExample usage:")
530
+ print("\n1. Basic OCR conversion:")
531
+ print(" uv run numarkdown-ocr.py document-images markdown-docs")
532
+ print("\n2. Include thinking traces:")
533
+ print(" uv run numarkdown-ocr.py complex-docs analyzed-docs --include-thinking")
534
+ print("\n3. With custom settings:")
535
+ print(" uv run numarkdown-ocr.py scientific-papers extracted-text \\")
536
+ print(" --batch-size 8 \\")
537
+ print(" --max-tokens 16384 \\")
538
+ print(" --gpu-memory-utilization 0.9")
539
+ print("\n4. Process a subset for testing:")
540
+ print(" uv run numarkdown-ocr.py large-dataset test-output --max-samples 10")
541
+ print("\n5. Custom prompt for specific needs:")
542
+ print(" uv run numarkdown-ocr.py invoices invoice-data \\")
543
+ print(' --custom-prompt "Extract all invoice details including line items"')
544
+ print("\n6. Multi-GPU processing:")
545
+ print(" uv run numarkdown-ocr.py large-docs processed-docs --tensor-parallel-size 2")
546
+ print("\n7. Running on HF Jobs:")
547
+ print(" hf jobs uv run --flavor a100x2 \\")
548
+ print(' -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \\')
549
+ print(" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \\")
550
+ print(" your-document-dataset \\")
551
+ print(" your-markdown-output")
552
+ print("\n" + "=" * 80)
553
+ print("\nFor full help, run: uv run numarkdown-ocr.py --help")
554
+ sys.exit(0)
555
+
556
+ parser = argparse.ArgumentParser(
557
+ description="OCR images to markdown using NuMarkdown-8B-Thinking with reasoning",
558
+ formatter_class=argparse.RawDescriptionHelpFormatter,
559
+ epilog="""
560
+ Examples:
561
+ # Basic usage
562
+ uv run numarkdown-ocr.py my-images-dataset ocr-results
563
+
564
+ # Include thinking traces in output
565
+ uv run numarkdown-ocr.py documents analyzed-docs --include-thinking
566
+
567
+ # Process subset for testing
568
+ uv run numarkdown-ocr.py large-dataset test-output --max-samples 100
569
+
570
+ # Custom prompt for specific extraction
571
+ uv run numarkdown-ocr.py forms form-data --custom-prompt "Extract all form fields and values"
572
+
573
+ # Multi-GPU for large datasets
574
+ uv run numarkdown-ocr.py large-dataset processed --tensor-parallel-size 4
575
+
576
+ # Random sample from dataset
577
+ uv run numarkdown-ocr.py ordered-dataset random-sample --max-samples 50 --shuffle
578
+ """,
579
+ )
580
+
581
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
582
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
583
+ parser.add_argument(
584
+ "--image-column",
585
+ default="image",
586
+ help="Column containing images (default: image)",
587
+ )
588
+ parser.add_argument(
589
+ "--batch-size",
590
+ type=int,
591
+ default=16,
592
+ help="Batch size for processing (default: 16, lower than others due to model size)",
593
+ )
594
+ parser.add_argument(
595
+ "--model",
596
+ default="numind/NuMarkdown-8B-Thinking",
597
+ help="Model to use (default: numind/NuMarkdown-8B-Thinking)",
598
+ )
599
+ parser.add_argument(
600
+ "--max-model-len",
601
+ type=int,
602
+ default=16384,
603
+ help="Maximum model context length (default: 16384)",
604
+ )
605
+ parser.add_argument(
606
+ "--max-tokens",
607
+ type=int,
608
+ default=16384,
609
+ help="Maximum tokens to generate including thinking tokens (default: 16384)",
610
+ )
611
+ parser.add_argument(
612
+ "--gpu-memory-utilization",
613
+ type=float,
614
+ default=0.9,
615
+ help="GPU memory utilization per GPU (default: 0.9)",
616
+ )
617
+ parser.add_argument(
618
+ "--tensor-parallel-size",
619
+ type=int,
620
+ help="Number of GPUs to use (default: auto-detect all available)",
621
+ )
622
+ parser.add_argument("--hf-token", help="Hugging Face API token")
623
+ parser.add_argument(
624
+ "--split", default="train", help="Dataset split to use (default: train)"
625
+ )
626
+ parser.add_argument(
627
+ "--max-samples",
628
+ type=int,
629
+ help="Maximum number of samples to process (for testing)",
630
+ )
631
+ parser.add_argument(
632
+ "--private", action="store_true", help="Make output dataset private"
633
+ )
634
+ parser.add_argument(
635
+ "--shuffle",
636
+ action="store_true",
637
+ help="Shuffle the dataset before processing (useful for random sampling)",
638
+ )
639
+ parser.add_argument(
640
+ "--seed",
641
+ type=int,
642
+ default=42,
643
+ help="Random seed for shuffling (default: 42)",
644
+ )
645
+ parser.add_argument(
646
+ "--include-thinking",
647
+ action="store_true",
648
+ help="Include thinking traces in output (default: only final answers)",
649
+ )
650
+ parser.add_argument(
651
+ "--temperature",
652
+ type=float,
653
+ default=0.0,
654
+ help="Temperature for generation (default: 0.0 for deterministic)",
655
+ )
656
+ parser.add_argument(
657
+ "--custom-prompt",
658
+ type=str,
659
+ help="Custom prompt for the model (overrides default)",
660
+ )
661
+
662
+ args = parser.parse_args()
663
+
664
+ main(
665
+ input_dataset=args.input_dataset,
666
+ output_dataset=args.output_dataset,
667
+ image_column=args.image_column,
668
+ batch_size=args.batch_size,
669
+ model=args.model,
670
+ max_model_len=args.max_model_len,
671
+ max_tokens=args.max_tokens,
672
+ gpu_memory_utilization=args.gpu_memory_utilization,
673
+ tensor_parallel_size=args.tensor_parallel_size,
674
+ hf_token=args.hf_token,
675
+ split=args.split,
676
+ max_samples=args.max_samples,
677
+ private=args.private,
678
+ shuffle=args.shuffle,
679
+ seed=args.seed,
680
+ include_thinking=args.include_thinking,
681
+ temperature=args.temperature,
682
+ custom_prompt=args.custom_prompt,
683
+ )
olmocr2-vllm.py ADDED
@@ -0,0 +1,636 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # "pyyaml", # For parsing YAML front matter
12
+ # ]
13
+ #
14
+ # ///
15
+
16
+ """
17
+ Convert document images to markdown using olmOCR-2 with vLLM.
18
+
19
+ This script processes images through the olmOCR-2-7B model to extract
20
+ text and structure as markdown, optimized for document understanding.
21
+
22
+ Features:
23
+ - LaTeX equation recognition
24
+ - HTML table extraction
25
+ - Document structure preservation (headers, lists, formatting)
26
+ - Rotation detection and correction metadata
27
+ - Figure and chart descriptions
28
+ - Natural reading order inference
29
+ - High-quality OCR for various document types
30
+
31
+ Model: allenai/olmOCR-2-7B-1025-FP8
32
+ Based on: Qwen2.5-VL-7B-Instruct fine-tuned on olmOCR-mix
33
+ """
34
+
35
+ import argparse
36
+ import base64
37
+ import io
38
+ import json
39
+ import logging
40
+ import os
41
+ import re
42
+ import sys
43
+ from datetime import datetime
44
+ from typing import Any, Dict, List, Union
45
+
46
+ import torch
47
+ import yaml
48
+ from datasets import load_dataset
49
+ from huggingface_hub import DatasetCard, login
50
+ from PIL import Image
51
+ from toolz import partition_all
52
+ from tqdm.auto import tqdm
53
+ from vllm import LLM, SamplingParams
54
+ from vllm.sampling_params import GuidedDecodingParams
55
+
56
+ logging.basicConfig(level=logging.INFO)
57
+ logger = logging.getLogger(__name__)
58
+
59
+ # olmOCR no-anchoring prompt (from olmocr/prompts/prompts.py:build_no_anchoring_v4_yaml_prompt)
60
+ OLMOCR_PROMPT = (
61
+ "Attached is one page of a document that you must process. "
62
+ "Just return the plain text representation of this document as if you were reading it naturally. "
63
+ "Convert equations to LateX and tables to HTML.\n"
64
+ "If there are any figures or charts, label them with the following markdown syntax "
65
+ "![Alt text describing the contents of the figure](page_startx_starty_width_height.png)\n"
66
+ "Return your output as markdown, with a front matter section on top specifying values for the "
67
+ "primary_language, is_rotation_valid, rotation_correction, is_table, and is_diagram parameters."
68
+ )
69
+
70
+
71
+ def check_cuda_availability():
72
+ """Check if CUDA is available and exit if not."""
73
+ if not torch.cuda.is_available():
74
+ logger.error("CUDA is not available. This script requires a GPU.")
75
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
76
+ sys.exit(1)
77
+ else:
78
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
79
+
80
+
81
+ def parse_yaml_frontmatter(text: str) -> tuple[dict, str]:
82
+ """
83
+ Parse YAML front matter from olmOCR output.
84
+
85
+ Expected format:
86
+ ---
87
+ primary_language: en
88
+ is_rotation_valid: true
89
+ rotation_correction: 0
90
+ is_table: false
91
+ is_diagram: false
92
+ ---
93
+ # Document content here...
94
+
95
+ Returns:
96
+ (metadata_dict, content_without_frontmatter)
97
+ """
98
+ # Match YAML front matter between --- markers
99
+ pattern = r"^---\s*\n(.*?)\n---\s*\n(.*)$"
100
+ match = re.match(pattern, text.strip(), re.DOTALL)
101
+
102
+ if match:
103
+ yaml_str = match.group(1)
104
+ content = match.group(2)
105
+ try:
106
+ metadata = yaml.safe_load(yaml_str)
107
+ return metadata or {}, content
108
+ except yaml.YAMLError as e:
109
+ logger.warning(f"Failed to parse YAML front matter: {e}")
110
+ return {}, text
111
+ else:
112
+ # No front matter found, return empty metadata
113
+ logger.warning("No YAML front matter found in output")
114
+ return {}, text
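+ # For instance (illustrative input, not a captured model response): a string like
+ #   "---\nprimary_language: en\nis_rotation_valid: true\nrotation_correction: 0\n"
+ #   "is_table: false\nis_diagram: false\n---\n# Title\n\nBody text"
+ # is split into metadata {"primary_language": "en", "is_rotation_valid": True,
+ # "rotation_correction": 0, "is_table": False, "is_diagram": False} and the
+ # remaining content "# Title\n\nBody text".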
115
+
116
+
117
+ def make_ocr_message(
118
+ image: Union[Image.Image, Dict[str, Any], str],
119
+ prompt: str = OLMOCR_PROMPT,
120
+ target_longest_dim: int = 1288,
121
+ ) -> List[Dict]:
122
+ """Create chat message for olmOCR processing.
123
+
124
+ Args:
125
+ image: Input image (PIL Image, dict with bytes, or path)
126
+ prompt: OCR prompt text
127
+ target_longest_dim: Target size for longest image dimension (default 1288, matching olmOCR)
128
+ """
129
+ # Convert to PIL Image if needed
130
+ if isinstance(image, Image.Image):
131
+ pil_img = image
132
+ elif isinstance(image, dict) and "bytes" in image:
133
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
134
+ elif isinstance(image, str):
135
+ pil_img = Image.open(image)
136
+ else:
137
+ raise ValueError(f"Unsupported image type: {type(image)}")
138
+
139
+ # Resize image to target dimension (matching olmOCR pipeline default of 1288px)
140
+ width, height = pil_img.size
141
+ longest_side = max(width, height)
142
+ if longest_side != target_longest_dim:
143
+ scale = target_longest_dim / longest_side
144
+ new_width = int(width * scale)
145
+ new_height = int(height * scale)
146
+ pil_img = pil_img.resize((new_width, new_height), Image.Resampling.LANCZOS)
147
+ logger.debug(f"Resized image from {width}x{height} to {new_width}x{new_height}")
148
+
149
+ # Convert to base64 data URI
150
+ buf = io.BytesIO()
151
+ pil_img.save(buf, format="PNG")
152
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
153
+
154
+ # Return message in vLLM format (text before image, matching olmOCR pipeline)
155
+ return [
156
+ {
157
+ "role": "user",
158
+ "content": [
159
+ {"type": "text", "text": prompt},
160
+ {"type": "image_url", "image_url": {"url": data_uri}},
161
+ ],
162
+ }
163
+ ]
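+ # The returned value is a single-turn chat request, e.g. (illustrative):
+ # [{"role": "user",
+ #   "content": [{"type": "text", "text": "<OCR prompt>"},
+ #               {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}]}]
+ # which llm.chat() consumes directly; the prompt text precedes the image to match
+ # the olmOCR pipeline ordering noted above.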
164
+
165
+
166
+ def create_dataset_card(
167
+ source_dataset: str,
168
+ model: str,
169
+ num_samples: int,
170
+ processing_time: str,
171
+ batch_size: int,
172
+ max_model_len: int,
173
+ max_tokens: int,
174
+ gpu_memory_utilization: float,
175
+ image_column: str = "image",
176
+ split: str = "train",
177
+ ) -> str:
178
+ """Create a dataset card documenting the OCR process."""
179
+ model_name = model.split("/")[-1]
180
+
181
+ return f"""---
182
+ tags:
183
+ - ocr
184
+ - document-processing
185
+ - olmocr
186
+ - markdown
187
+ - uv-script
188
+ - generated
189
+ ---
190
+
191
+ # Document OCR using {model_name}
192
+
193
+ This dataset contains markdown-formatted OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using olmOCR-2-7B.
194
+
195
+ ## Processing Details
196
+
197
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
198
+ - **Model**: [{model}](https://huggingface.co/{model})
199
+ - **Number of Samples**: {num_samples:,}
200
+ - **Processing Time**: {processing_time}
201
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
202
+
203
+ ### Configuration
204
+
205
+ - **Image Column**: `{image_column}`
206
+ - **Output Column**: `markdown`
207
+ - **Dataset Split**: `{split}`
208
+ - **Batch Size**: {batch_size}
209
+ - **Max Model Length**: {max_model_len:,} tokens
210
+ - **Max Output Tokens**: {max_tokens:,}
211
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
212
+
213
+ ## Model Information
214
+
215
+ olmOCR-2-7B is a high-quality document OCR model based on Qwen2.5-VL-7B-Instruct, fine-tuned on olmOCR-mix-1025 dataset and optimized with GRPO reinforcement learning.
216
+
217
+ Key features:
218
+ - 📐 **LaTeX equations** - Mathematical formulas in LaTeX format
219
+ - 📊 **HTML tables** - Structured table extraction
220
+ - 📝 **Document structure** - Headers, lists, formatting preserved
221
+ - 🖼️ **Figure descriptions** - Charts and figures labeled with descriptions
222
+ - 🔄 **Rotation detection** - Metadata about document orientation
223
+ - 📑 **Natural reading order** - Handles multi-column and complex layouts
224
+ - 🎯 **High accuracy** - Scores 82.4 ± 1.1 on olmOCR-Bench
225
+
226
+ ## Output Format
227
+
228
+ Each row contains:
229
+ - Original image from source dataset
230
+ - `markdown`: Extracted document content in markdown format
231
+ - `olmocr_metadata`: JSON with document metadata (language, rotation, table/diagram flags)
232
+
233
+ ## Columns
234
+
235
+ - `{image_column}`: Original document image
236
+ - `markdown`: Extracted text and structure in markdown
237
+ - `olmocr_metadata`: Document metadata (primary_language, is_rotation_valid, rotation_correction, is_table, is_diagram)
238
+ - `inference_info`: Processing metadata (model, script version, timestamp)
239
+
240
+ ## Reproduction
241
+
242
+ ```bash
243
+ # Using HF Jobs (recommended)
244
+ hf jobs uv run --flavor l4x1 \\
245
+ -s HF_TOKEN \\
246
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
247
+ {source_dataset} \\
248
+ your-username/output-dataset
249
+
250
+ # Local with GPU
251
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
252
+ {source_dataset} \\
253
+ your-username/output-dataset
254
+ ```
255
+
256
+ ## Citation
257
+
258
+ ```bibtex
259
+ @misc{{olmocr,
260
+ title={{{{olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models}}}},
261
+ author={{Jake Poznanski and Jon Borchardt and Jason Dunkelberger and Regan Huff and Daniel Lin and Aman Rangapur and Christopher Wilhelm and Kyle Lo and Luca Soldaini}},
262
+ year={{2025}},
263
+ eprint={{2502.18443}},
264
+ archivePrefix={{arXiv}},
265
+ primaryClass={{cs.CL}},
266
+ url={{https://arxiv.org/abs/2502.18443}},
267
+ }}
268
+ ```
269
+
270
+ ---
271
+ *Generated with [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr)*
272
+ """
273
+
274
+
275
+ def main(
276
+ input_dataset: str,
277
+ output_dataset: str,
278
+ image_column: str = "image",
279
+ output_column: str = "markdown",
280
+ batch_size: int = 16,
281
+ model: str = "allenai/olmOCR-2-7B-1025-FP8",
282
+ max_model_len: int = 16384,
283
+ max_tokens: int = 8192,
284
+ temperature: float = 0.1,
285
+ gpu_memory_utilization: float = 0.8,
286
+ guided_decoding: bool = False,
287
+ hf_token: str = None,
288
+ split: str = "train",
289
+ max_samples: int = None,
290
+ private: bool = False,
291
+ shuffle: bool = False,
292
+ seed: int = 42,
293
+ ):
294
+ """
295
+ Process a dataset of document images through olmOCR-2 to extract markdown.
296
+
297
+ Args:
298
+ input_dataset: HuggingFace dataset ID containing images
299
+ output_dataset: HuggingFace dataset ID for output
300
+ image_column: Column name containing images
301
+ output_column: Column name for markdown output
302
+ batch_size: Number of images to process at once
303
+ model: HuggingFace model ID for olmOCR
304
+ max_model_len: Maximum context length
305
+ max_tokens: Maximum tokens to generate per image
306
+ temperature: Sampling temperature (0.1 default, matches olmOCR)
307
+ gpu_memory_utilization: Fraction of GPU memory to use
308
+ guided_decoding: Enable guided decoding with regex for YAML front matter
309
+ hf_token: HuggingFace token for authentication
310
+ split: Dataset split to process
311
+ max_samples: Limit number of samples (for testing)
312
+ private: Make output dataset private
313
+ shuffle: Shuffle dataset before processing
314
+ seed: Random seed for shuffling
315
+ """
316
+ import time
317
+
318
+ start_time = time.time()
319
+
320
+ # Check CUDA availability
321
+ check_cuda_availability()
322
+
323
+ # Login to HuggingFace if token provided
324
+ if hf_token:
325
+ login(token=hf_token)
326
+ elif "HF_TOKEN" in os.environ:
327
+ login(token=os.environ["HF_TOKEN"])
328
+
329
+ # Load dataset
330
+ logger.info(f"Loading dataset: {input_dataset}")
331
+ ds = load_dataset(input_dataset, split=split)
332
+
333
+ # Shuffle if requested
334
+ if shuffle:
335
+ logger.info(f"Shuffling dataset with seed {seed}")
336
+ ds = ds.shuffle(seed=seed)
337
+
338
+ # Limit samples if requested
339
+ if max_samples:
340
+ logger.info(f"Limiting to {max_samples} samples")
341
+ ds = ds.select(range(min(max_samples, len(ds))))
342
+
343
+ logger.info(f"Processing {len(ds)} samples")
344
+ logger.info(f"Output will be written to column: {output_column}")
345
+
346
+ # Set column names - namespace metadata by output column to avoid conflicts
347
+ metadata_column_name = f"{output_column}_metadata"
348
+ inference_info_column = "inference_info"
349
+ logger.info(f"Metadata will be written to column: {metadata_column_name}")
350
+
351
+ # Initialize LLM
352
+ logger.info(f"Initializing vLLM with model: {model}")
353
+ llm = LLM(
354
+ model=model,
355
+ max_model_len=max_model_len,
356
+ gpu_memory_utilization=gpu_memory_utilization,
357
+ limit_mm_per_prompt={"image": 1},
358
+ )
359
+
360
+ # Sampling parameters - olmOCR uses temperature 0.1 (transformers example)
361
+ sampling_params_kwargs = {
362
+ "temperature": temperature,
363
+ "max_tokens": max_tokens,
364
+ "repetition_penalty": 1.05, # Discourage repetitive output
365
+ "stop": ["<|im_end|>", "<|endoftext|>"],
366
+ }
367
+
368
+ # Add guided decoding if requested (enforces YAML front matter structure)
369
+ if guided_decoding:
370
+ logger.info("Enabling guided decoding with YAML front matter regex")
371
+ guided_params = GuidedDecodingParams(
372
+ regex=r"---\nprimary_language: (?:[a-z]{2}|null)\nis_rotation_valid: (?:True|False|true|false)\nrotation_correction: (?:0|90|180|270)\nis_table: (?:True|False|true|false)\nis_diagram: (?:True|False|true|false)\n(?:---|---\n[\s\S]+)"
373
+ )
374
+ sampling_params_kwargs["guided_decoding"] = guided_params
375
+
376
+ sampling_params = SamplingParams(**sampling_params_kwargs)
377
+
378
+ # Process in batches
379
+ all_outputs = []
380
+ all_metadata = []
381
+
382
+ for batch in tqdm(
383
+ list(partition_all(batch_size, ds)),
384
+ desc="Processing batches",
385
+ ):
386
+ # Create messages for batch
387
+ messages = [make_ocr_message(item[image_column]) for item in batch]
388
+
389
+ # Run inference
390
+ outputs = llm.chat(messages, sampling_params=sampling_params)
391
+
392
+ # Extract text and parse YAML front matter
393
+ for idx, output in enumerate(outputs):
394
+ response_text = output.outputs[0].text
395
+ finish_reason = output.outputs[0].finish_reason
396
+
397
+ # Log warning if generation didn't finish naturally
398
+ if finish_reason != "stop":
399
+ logger.warning(
400
+ f"Generation did not finish naturally (reason: {finish_reason}), output may be incomplete"
401
+ )
402
+
403
+ metadata, content = parse_yaml_frontmatter(response_text)
404
+ all_outputs.append(content)
405
+ all_metadata.append(json.dumps(metadata))
406
+
407
+ # Add results to dataset
408
+ # Check if columns already exist and handle appropriately
409
+ if output_column in ds.column_names:
410
+ logger.warning(
411
+ f"Column '{output_column}' already exists, it will be overwritten"
412
+ )
413
+ ds = ds.remove_columns([output_column])
414
+ ds = ds.add_column(output_column, all_outputs)
415
+
416
+ if metadata_column_name in ds.column_names:
417
+ logger.warning(
418
+ f"Column '{metadata_column_name}' already exists, it will be overwritten"
419
+ )
420
+ ds = ds.remove_columns([metadata_column_name])
421
+ ds = ds.add_column(metadata_column_name, all_metadata)
422
+
423
+ # Add inference information
424
+ inference_info = json.dumps(
425
+ {
426
+ "model": model,
427
+ "script": "olmocr2-vllm.py",
428
+ "version": "1.0.0",
429
+ "timestamp": datetime.now().isoformat(),
430
+ "batch_size": batch_size,
431
+ "max_tokens": max_tokens,
432
+ "temperature": temperature,
433
+ }
434
+ )
435
+
436
+ # Handle existing inference_info column
437
+ if inference_info_column in ds.column_names:
438
+ # Parse existing, append new model info
439
+ def update_inference_info(example):
440
+ try:
441
+ existing = json.loads(example[inference_info_column])
442
+ if not isinstance(existing, list):
443
+ existing = [existing]
444
+ except (json.JSONDecodeError, KeyError):
445
+ existing = []
446
+
447
+ existing.append(json.loads(inference_info))
448
+ return {inference_info_column: json.dumps(existing)}
449
+
450
+ ds = ds.map(update_inference_info)
451
+ else:
452
+ ds = ds.add_column(inference_info_column, [inference_info] * len(ds))
453
+
454
+ # Calculate processing time
455
+ elapsed_time = time.time() - start_time
456
+ hours = int(elapsed_time // 3600)
457
+ minutes = int((elapsed_time % 3600) // 60)
458
+ seconds = int(elapsed_time % 60)
459
+ processing_time = f"{hours}h {minutes}m {seconds}s"
460
+
461
+ # Create and save dataset card
462
+ card_content = create_dataset_card(
463
+ source_dataset=input_dataset,
464
+ model=model,
465
+ num_samples=len(ds),
466
+ processing_time=processing_time,
467
+ batch_size=batch_size,
468
+ max_model_len=max_model_len,
469
+ max_tokens=max_tokens,
470
+ gpu_memory_utilization=gpu_memory_utilization,
471
+ image_column=image_column,
472
+ split=split,
473
+ )
474
+
475
+ # Push to hub
476
+ logger.info(f"Pushing to HuggingFace Hub: {output_dataset}")
477
+ ds.push_to_hub(
478
+ output_dataset,
479
+ private=private,
480
+ )
481
+
482
+ # Update dataset card
483
+ card = DatasetCard(card_content)
484
+ card.push_to_hub(output_dataset)
485
+
486
+ logger.info(f"✓ Processing complete!")
487
+ logger.info(f"✓ Dataset: https://huggingface.co/datasets/{output_dataset}")
488
+ logger.info(f"✓ Processing time: {processing_time}")
489
+ logger.info(f"✓ Samples processed: {len(ds):,}")
490
+
491
+
492
+ if __name__ == "__main__":
493
+ parser = argparse.ArgumentParser(
494
+ description="Convert document images to markdown using olmOCR-2",
495
+ formatter_class=argparse.RawDescriptionHelpFormatter,
496
+ epilog="""
497
+ Examples:
498
+
499
+ 1. Basic OCR on a dataset:
500
+ uv run olmocr2-vllm.py input-dataset output-dataset
501
+
502
+ 2. Test with first 10 samples:
503
+ uv run olmocr2-vllm.py input-dataset output-dataset --max-samples 10
504
+
505
+ 3. Process with custom batch size:
506
+ uv run olmocr2-vllm.py input-dataset output-dataset --batch-size 8
507
+
508
+ 4. Custom image column:
509
+ uv run olmocr2-vllm.py input-dataset output-dataset --image-column page_image
510
+
511
+ 5. Private output dataset:
512
+ uv run olmocr2-vllm.py input-dataset output-dataset --private
513
+
514
+ 6. Random sampling:
515
+ uv run olmocr2-vllm.py input-dataset output-dataset --max-samples 100 --shuffle
516
+
517
+ 7. Running on HuggingFace Jobs:
518
+ hf jobs uv run --flavor l4x1 \\
519
+ -s HF_TOKEN \\
520
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
521
+ input-dataset output-dataset
522
+
523
+ 8. Real example with historical documents:
524
+ hf jobs uv run --flavor l4x1 \\
525
+ -s HF_TOKEN \\
526
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \\
527
+ NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \\
528
+ your-username/handbooks-olmocr \\
529
+ --max-samples 100 \\
530
+ --shuffle
531
+ """,
532
+ )
533
+
534
+ parser.add_argument("input_dataset", help="Input HuggingFace dataset ID")
535
+ parser.add_argument("output_dataset", help="Output HuggingFace dataset ID")
536
+ parser.add_argument(
537
+ "--image-column",
538
+ default="image",
539
+ help="Column name containing images (default: image)",
540
+ )
541
+ parser.add_argument(
542
+ "--output-column",
543
+ default="markdown",
544
+ help="Column name for markdown output (default: markdown)",
545
+ )
546
+ parser.add_argument(
547
+ "--batch-size",
548
+ type=int,
549
+ default=16,
550
+ help="Batch size for processing (default: 16)",
551
+ )
552
+ parser.add_argument(
553
+ "--model",
554
+ default="allenai/olmOCR-2-7B-1025-FP8",
555
+ help="Model to use (default: allenai/olmOCR-2-7B-1025-FP8)",
556
+ )
557
+ parser.add_argument(
558
+ "--max-model-len",
559
+ type=int,
560
+ default=16384,
561
+ help="Maximum model context length (default: 16384)",
562
+ )
563
+ parser.add_argument(
564
+ "--max-tokens",
565
+ type=int,
566
+ default=8192,
567
+ help="Maximum tokens to generate (default: 8192)",
568
+ )
569
+ parser.add_argument(
570
+ "--temperature",
571
+ type=float,
572
+ default=0.1,
573
+ help="Sampling temperature (default: 0.1, matches olmOCR transformers example)",
574
+ )
575
+ parser.add_argument(
576
+ "--gpu-memory-utilization",
577
+ type=float,
578
+ default=0.8,
579
+ help="GPU memory utilization (default: 0.8)",
580
+ )
581
+ parser.add_argument(
582
+ "--guided-decoding",
583
+ action="store_true",
584
+ help="Enable guided decoding with regex for YAML front matter structure",
585
+ )
586
+ parser.add_argument(
587
+ "--hf-token",
588
+ help="HuggingFace token (or set HF_TOKEN env var)",
589
+ )
590
+ parser.add_argument(
591
+ "--split",
592
+ default="train",
593
+ help="Dataset split to process (default: train)",
594
+ )
595
+ parser.add_argument(
596
+ "--max-samples",
597
+ type=int,
598
+ help="Maximum number of samples to process (for testing)",
599
+ )
600
+ parser.add_argument(
601
+ "--private",
602
+ action="store_true",
603
+ help="Make output dataset private",
604
+ )
605
+ parser.add_argument(
606
+ "--shuffle",
607
+ action="store_true",
608
+ help="Shuffle dataset before processing",
609
+ )
610
+ parser.add_argument(
611
+ "--seed",
612
+ type=int,
613
+ default=42,
614
+ help="Random seed for shuffling (default: 42)",
615
+ )
616
+
617
+ args = parser.parse_args()
618
+ main(
619
+ input_dataset=args.input_dataset,
620
+ output_dataset=args.output_dataset,
621
+ image_column=args.image_column,
622
+ output_column=args.output_column,
623
+ batch_size=args.batch_size,
624
+ model=args.model,
625
+ max_model_len=args.max_model_len,
626
+ max_tokens=args.max_tokens,
627
+ temperature=args.temperature,
628
+ gpu_memory_utilization=args.gpu_memory_utilization,
629
+ guided_decoding=args.guided_decoding,
630
+ hf_token=args.hf_token,
631
+ split=args.split,
632
+ max_samples=args.max_samples,
633
+ private=args.private,
634
+ shuffle=args.shuffle,
635
+ seed=args.seed,
636
+ )
paddleocr-vl.py ADDED
@@ -0,0 +1,699 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch",
11
+ # "pyarrow",
12
+ # "transformers",
13
+ # ]
14
+ #
15
+ # [[tool.uv.index]]
16
+ # url = "https://wheels.vllm.ai/nightly"
17
+ #
18
+ # [tool.uv]
19
+ # prerelease = "allow"
20
+ # ///
21
+
22
+ """
23
+ Convert document images to text/tables/formulas using PaddleOCR-VL with vLLM.
24
+
25
+ PaddleOCR-VL is a compact 0.9B OCR model with task-specific capabilities for
26
+ document parsing. It combines a NaViT-style dynamic resolution visual encoder
27
+ with the ERNIE-4.5-0.3B language model for accurate element recognition.
28
+
29
+ Features:
30
+ - 🎯 Ultra-compact: Only 0.9B parameters (smallest OCR model)
31
+ - 📝 OCR mode: General text extraction to markdown
32
+ - 📊 Table mode: HTML table recognition and extraction
33
+ - 📐 Formula mode: LaTeX mathematical notation
34
+ - 📈 Chart mode: Structured chart analysis
35
+ - 🌍 Multilingual support
36
+ - ⚡ Fast initialization due to small size
37
+ - 🔧 Based on ERNIE-4.5 (different from Qwen-based models)
38
+
39
+ Model: PaddlePaddle/PaddleOCR-VL
40
+ vLLM: Requires nightly build for full support
41
+ """
42
+
43
+ import argparse
44
+ import base64
45
+ import io
46
+ import json
47
+ import logging
48
+ import math
49
+ import os
50
+ import sys
51
+ from typing import Any, Dict, List, Union
52
+ from datetime import datetime
53
+
54
+ import torch
55
+ from datasets import load_dataset
56
+ from huggingface_hub import DatasetCard, login
57
+ from PIL import Image
58
+ from toolz import partition_all
59
+ from tqdm.auto import tqdm
60
+ from vllm import LLM, SamplingParams
61
+
62
+ logging.basicConfig(level=logging.INFO)
63
+ logger = logging.getLogger(__name__)
64
+
65
+
66
+ # Task mode configurations from official PaddleOCR-VL documentation
67
+ TASK_MODES = {
68
+ "ocr": "OCR:",
69
+ "table": "Table Recognition:",
70
+ "formula": "Formula Recognition:",
71
+ "chart": "Chart Recognition:",
72
+ }
73
+
74
+ # Task descriptions for dataset card
75
+ TASK_DESCRIPTIONS = {
76
+ "ocr": "General text extraction to markdown format",
77
+ "table": "Table extraction to HTML format",
78
+ "formula": "Mathematical formula recognition to LaTeX",
79
+ "chart": "Chart and diagram analysis",
80
+ }
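+ # For example (illustrative), task_mode="table" makes make_ocr_message() below send the
+ # page image followed by the literal text "Table Recognition:" as the user turn, which is
+ # the prompt format the official PaddleOCR-VL documentation specifies for HTML table output.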
81
+
82
+
83
+ def check_cuda_availability():
84
+ """Check if CUDA is available and exit if not."""
85
+ if not torch.cuda.is_available():
86
+ logger.error("CUDA is not available. This script requires a GPU.")
87
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
88
+ sys.exit(1)
89
+ else:
90
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
91
+
92
+
93
+ def smart_resize(
94
+ height: int,
95
+ width: int,
96
+ factor: int = 28,
97
+ min_pixels: int = 28 * 28 * 130,
98
+ max_pixels: int = 28 * 28 * 1280,
99
+ ) -> tuple[int, int]:
100
+ """
101
+ PaddleOCR-VL's intelligent resize logic.
102
+
103
+ Rescales the image so that:
104
+ 1. Both dimensions are divisible by 'factor' (28)
105
+ 2. Total pixels are within [min_pixels, max_pixels]
106
+ 3. Aspect ratio is maintained as closely as possible
107
+
108
+ Args:
109
+ height: Original image height
110
+ width: Original image width
111
+ factor: Dimension divisibility factor (default: 28)
112
+ min_pixels: Minimum total pixels (default: 101,920)
113
+ max_pixels: Maximum total pixels (default: 1,003,520)
114
+
115
+ Returns:
116
+ Tuple of (new_height, new_width)
117
+ """
118
+ if height < factor:
119
+ width = round((width * factor) / height)
120
+ height = factor
121
+
122
+ if width < factor:
123
+ height = round((height * factor) / width)
124
+ width = factor
125
+
126
+ if max(height, width) / min(height, width) > 200:
127
+ logger.warning(
128
+ f"Extreme aspect ratio detected: {max(height, width) / min(height, width):.1f}"
129
+ )
130
+ # Continue anyway, but warn about potential issues
131
+
132
+ h_bar = round(height / factor) * factor
133
+ w_bar = round(width / factor) * factor
134
+
135
+ if h_bar * w_bar > max_pixels:
136
+ beta = math.sqrt((height * width) / max_pixels)
137
+ h_bar = math.floor(height / beta / factor) * factor
138
+ w_bar = math.floor(width / beta / factor) * factor
139
+ elif h_bar * w_bar < min_pixels:
140
+ beta = math.sqrt(min_pixels / (height * width))
141
+ h_bar = math.ceil(height * beta / factor) * factor
142
+ w_bar = math.ceil(width * beta / factor) * factor
143
+
144
+ return h_bar, w_bar
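+ # Worked example (illustrative numbers): smart_resize(2200, 1700) first rounds to
+ # multiples of 28 (2212 x 1708, ~3.78M pixels), which exceeds max_pixels, so both sides
+ # are scaled down by beta = sqrt(2200*1700 / 1003520) ~= 1.93 and re-floored to multiples
+ # of 28, giving (1120, 868): 972,160 pixels, just under the 1,003,520 cap while keeping
+ # the ~1.29 aspect ratio.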
145
+
146
+
147
+ def make_ocr_message(
148
+ image: Union[Image.Image, Dict[str, Any], str],
149
+ task_mode: str = "ocr",
150
+ apply_smart_resize: bool = True,
151
+ ) -> List[Dict]:
152
+ """
153
+ Create chat message for PaddleOCR-VL processing.
154
+
155
+ PaddleOCR-VL expects a specific format with the task prefix after the image.
156
+ """
157
+ # Convert to PIL Image if needed
158
+ if isinstance(image, Image.Image):
159
+ pil_img = image
160
+ elif isinstance(image, dict) and "bytes" in image:
161
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
162
+ elif isinstance(image, str):
163
+ pil_img = Image.open(image)
164
+ else:
165
+ raise ValueError(f"Unsupported image type: {type(image)}")
166
+
167
+ # Convert to RGB
168
+ pil_img = pil_img.convert("RGB")
169
+
170
+ # Apply smart resize if requested
171
+ if apply_smart_resize:
172
+ original_size = pil_img.size
173
+ new_height, new_width = smart_resize(pil_img.height, pil_img.width)
174
+ if (new_width, new_height) != (pil_img.width, pil_img.height):
175
+ pil_img = pil_img.resize((new_width, new_height), Image.Resampling.LANCZOS)
176
+ logger.debug(f"Resized image from {original_size} to {pil_img.size}")
177
+
178
+ # Convert to base64 data URI
179
+ buf = io.BytesIO()
180
+ pil_img.save(buf, format="PNG")
181
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
182
+
183
+ # PaddleOCR-VL message format: image first, then task prefix
184
+ return [
185
+ {
186
+ "role": "user",
187
+ "content": [
188
+ {"type": "image_url", "image_url": {"url": data_uri}},
189
+ {"type": "text", "text": TASK_MODES[task_mode]},
190
+ ],
191
+ }
192
+ ]
193
+
194
+
195
+ def create_dataset_card(
196
+ source_dataset: str,
197
+ model: str,
198
+ task_mode: str,
199
+ num_samples: int,
200
+ processing_time: str,
201
+ batch_size: int,
202
+ max_model_len: int,
203
+ max_tokens: int,
204
+ gpu_memory_utilization: float,
205
+ temperature: float,
206
+ apply_smart_resize: bool,
207
+ image_column: str = "image",
208
+ split: str = "train",
209
+ ) -> str:
210
+ """Create a dataset card documenting the OCR process."""
211
+ task_description = TASK_DESCRIPTIONS[task_mode]
212
+
213
+ return f"""---
214
+ tags:
215
+ - ocr
216
+ - document-processing
217
+ - paddleocr-vl
218
+ - {task_mode}
219
+ - uv-script
220
+ - generated
221
+ ---
222
+
223
+ # Document Processing using PaddleOCR-VL ({task_mode.upper()} mode)
224
+
225
+ This dataset contains {task_mode.upper()} results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using PaddleOCR-VL, an ultra-compact 0.9B OCR model.
226
+
227
+ ## Processing Details
228
+
229
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
230
+ - **Model**: [{model}](https://huggingface.co/{model})
231
+ - **Task Mode**: `{task_mode}` - {task_description}
232
+ - **Number of Samples**: {num_samples:,}
233
+ - **Processing Time**: {processing_time}
234
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
235
+
236
+ ### Configuration
237
+
238
+ - **Image Column**: `{image_column}`
239
+ - **Output Column**: `paddleocr_{task_mode}`
240
+ - **Dataset Split**: `{split}`
241
+ - **Batch Size**: {batch_size}
242
+ - **Smart Resize**: {"Enabled" if apply_smart_resize else "Disabled"}
243
+ - **Max Model Length**: {max_model_len:,} tokens
244
+ - **Max Output Tokens**: {max_tokens:,}
245
+ - **Temperature**: {temperature}
246
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
247
+
248
+ ## Model Information
249
+
250
+ PaddleOCR-VL is a state-of-the-art, resource-efficient model tailored for document parsing:
251
+ - 🎯 **Ultra-compact** - Only 0.9B parameters (smallest OCR model)
252
+ - 📝 **OCR mode** - General text extraction
253
+ - 📊 **Table mode** - HTML table recognition
254
+ - 📐 **Formula mode** - LaTeX mathematical notation
255
+ - 📈 **Chart mode** - Structured chart analysis
256
+ - 🌍 **Multilingual** - Support for multiple languages
257
+ - ⚡ **Fast** - Quick initialization and inference
258
+ - 🔧 **ERNIE-4.5 based** - Different architecture from Qwen models
259
+
260
+ ### Task Modes
261
+
262
+ - **OCR**: Extract text content to markdown format
263
+ - **Table Recognition**: Extract tables to HTML format
264
+ - **Formula Recognition**: Extract mathematical formulas to LaTeX
265
+ - **Chart Recognition**: Analyze and describe charts/diagrams
266
+
267
+ ## Dataset Structure
268
+
269
+ The dataset contains all original columns plus:
270
+ - `paddleocr_{task_mode}`: The extracted content based on task mode
271
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
272
+
273
+ ## Usage
274
+
275
+ ```python
276
+ from datasets import load_dataset
277
+ import json
278
+
279
+ # Load the dataset
280
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
281
+
282
+ # Access the extracted content
283
+ for example in dataset:
284
+ print(example["paddleocr_{task_mode}"])
285
+ break
286
+
287
+ # View all OCR models applied to this dataset
288
+ inference_info = json.loads(dataset[0]["inference_info"])
289
+ for info in inference_info:
290
+ print(f"Task: {{info['task_mode']}} - Model: {{info['model_id']}}")
291
+ ```
292
+
293
+ ## Reproduction
294
+
295
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) PaddleOCR-VL script:
296
+
297
+ ```bash
298
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \\
299
+ {source_dataset} \\
300
+ <output-dataset> \\
301
+ --task-mode {task_mode} \\
302
+ --image-column {image_column} \\
303
+ --batch-size {batch_size} \\
304
+ --max-model-len {max_model_len} \\
305
+ --max-tokens {max_tokens} \\
306
+ --gpu-memory-utilization {gpu_memory_utilization}
307
+ ```
308
+
309
+ ## Performance
310
+
311
+ - **Model Size**: 0.9B parameters (smallest among OCR models)
312
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.2f} images/second
313
+ - **Architecture**: NaViT visual encoder + ERNIE-4.5-0.3B language model
314
+
315
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
316
+ """
317
+
318
+
319
+ def main(
320
+ input_dataset: str,
321
+ output_dataset: str,
322
+ image_column: str = "image",
323
+ batch_size: int = 16,
324
+ task_mode: str = "ocr",
325
+ max_model_len: int = 8192,
326
+ max_tokens: int = 4096,
327
+ temperature: float = 0.0,
328
+ gpu_memory_utilization: float = 0.8,
329
+ apply_smart_resize: bool = True,
330
+ hf_token: str = None,
331
+ split: str = "train",
332
+ max_samples: int = None,
333
+ private: bool = False,
334
+ shuffle: bool = False,
335
+ seed: int = 42,
336
+ output_column: str = None,
337
+ ):
338
+ """Process images from HF dataset through PaddleOCR-VL model."""
339
+
340
+ # Check CUDA availability first
341
+ check_cuda_availability()
342
+
343
+ # Track processing start time
344
+ start_time = datetime.now()
345
+
346
+ # Enable HF_TRANSFER for faster downloads
347
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
348
+
349
+ # Login to HF if token provided
350
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
351
+ if HF_TOKEN:
352
+ login(token=HF_TOKEN)
353
+
354
+ # Validate task mode
355
+ if task_mode not in TASK_MODES:
356
+ raise ValueError(
357
+ f"Invalid task_mode '{task_mode}'. Choose from: {list(TASK_MODES.keys())}"
358
+ )
359
+
360
+ # Auto-generate output column name based on task mode
361
+ if output_column is None:
362
+ output_column = f"paddleocr_{task_mode}"
363
+
364
+ logger.info(f"Using task mode: {task_mode} - {TASK_DESCRIPTIONS[task_mode]}")
365
+ logger.info(f"Output will be written to column: {output_column}")
366
+
367
+ # Load dataset
368
+ logger.info(f"Loading dataset: {input_dataset}")
369
+ dataset = load_dataset(input_dataset, split=split)
370
+
371
+ # Validate image column
372
+ if image_column not in dataset.column_names:
373
+ raise ValueError(
374
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
375
+ )
376
+
377
+ # Shuffle if requested
378
+ if shuffle:
379
+ logger.info(f"Shuffling dataset with seed {seed}")
380
+ dataset = dataset.shuffle(seed=seed)
381
+
382
+ # Limit samples if requested
383
+ if max_samples:
384
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
385
+ logger.info(f"Limited to {len(dataset)} samples")
386
+
387
+ # Initialize vLLM model
388
+ model_name = "PaddlePaddle/PaddleOCR-VL"
389
+ logger.info(f"Initializing vLLM with {model_name}")
390
+ logger.info("This may take a minute on first run (model is only 0.9B)...")
391
+
392
+ # Note: PaddleOCR-VL requires specific vLLM configuration
393
+ # The model needs custom implementation files to be loaded
394
+ os.environ["VLLM_USE_V1"] = "0" # Disable V1 engine for compatibility
395
+
396
+ llm = LLM(
397
+ model=model_name,
398
+ trust_remote_code=True,
399
+ max_model_len=max_model_len,
400
+ gpu_memory_utilization=gpu_memory_utilization,
401
+ limit_mm_per_prompt={"image": 1},
402
+ max_num_batched_tokens=16384, # Match server config
403
+ enable_prefix_caching=False, # Disable prefix caching like server
404
+ enforce_eager=True, # Use eager mode instead of CUDA graphs
405
+ )
406
+
407
+ # Sampling parameters - deterministic for OCR
408
+ sampling_params = SamplingParams(
409
+ temperature=temperature,
410
+ max_tokens=max_tokens,
411
+ )
412
+
413
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
414
+ if apply_smart_resize:
415
+ logger.info("Smart resize enabled (PaddleOCR-VL's adaptive resolution)")
416
+
417
+ # Process images in batches
418
+ all_outputs = []
419
+
420
+ for batch_indices in tqdm(
421
+ partition_all(batch_size, range(len(dataset))),
422
+ total=(len(dataset) + batch_size - 1) // batch_size,
423
+ desc=f"PaddleOCR-VL {task_mode.upper()} processing",
424
+ ):
425
+ batch_indices = list(batch_indices)
426
+ batch_images = [dataset[i][image_column] for i in batch_indices]
427
+
428
+ try:
429
+ # Create messages for batch with task-specific prefix
430
+ batch_messages = [
431
+ make_ocr_message(
432
+ img, task_mode=task_mode, apply_smart_resize=apply_smart_resize
433
+ )
434
+ for img in batch_images
435
+ ]
436
+
437
+ # Process with vLLM
438
+ outputs = llm.chat(batch_messages, sampling_params)
439
+
440
+ # Extract outputs
441
+ for output in outputs:
442
+ text = output.outputs[0].text.strip()
443
+ all_outputs.append(text)
444
+
445
+ except Exception as e:
446
+ logger.error(f"Error processing batch: {e}")
447
+ # Add error placeholders for failed batch
448
+ all_outputs.extend([f"[{task_mode.upper()} ERROR]"] * len(batch_images))
449
+
450
+ # Calculate processing time
451
+ processing_duration = datetime.now() - start_time
452
+ processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
453
+
454
+ # Add output column to dataset
455
+ logger.info(f"Adding '{output_column}' column to dataset")
456
+ dataset = dataset.add_column(output_column, all_outputs)
457
+
458
+ # Handle inference_info tracking (for multi-model comparisons)
459
+ inference_entry = {
460
+ "model_id": model_name,
461
+ "model_name": "PaddleOCR-VL",
462
+ "model_size": "0.9B",
463
+ "task_mode": task_mode,
464
+ "column_name": output_column,
465
+ "timestamp": datetime.now().isoformat(),
466
+ "temperature": temperature,
467
+ "max_tokens": max_tokens,
468
+ "smart_resize": apply_smart_resize,
469
+ }
470
+
471
+ if "inference_info" in dataset.column_names:
472
+ # Append to existing inference info
473
+ logger.info("Updating existing inference_info column")
474
+
475
+ def update_inference_info(example):
476
+ try:
477
+ existing_info = (
478
+ json.loads(example["inference_info"])
479
+ if example["inference_info"]
480
+ else []
481
+ )
482
+ except (json.JSONDecodeError, TypeError):
483
+ existing_info = []
484
+
485
+ existing_info.append(inference_entry)
486
+ return {"inference_info": json.dumps(existing_info)}
487
+
488
+ dataset = dataset.map(update_inference_info)
489
+ else:
490
+ # Create new inference_info column
491
+ logger.info("Creating new inference_info column")
492
+ inference_list = [json.dumps([inference_entry])] * len(dataset)
493
+ dataset = dataset.add_column("inference_info", inference_list)
494
+
495
+ # Push to hub
496
+ logger.info(f"Pushing to {output_dataset}")
497
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
498
+
499
+ # Create and push dataset card
500
+ logger.info("Creating dataset card")
501
+ card_content = create_dataset_card(
502
+ source_dataset=input_dataset,
503
+ model=model_name,
504
+ task_mode=task_mode,
505
+ num_samples=len(dataset),
506
+ processing_time=processing_time_str,
507
+ batch_size=batch_size,
508
+ max_model_len=max_model_len,
509
+ max_tokens=max_tokens,
510
+ gpu_memory_utilization=gpu_memory_utilization,
511
+ temperature=temperature,
512
+ apply_smart_resize=apply_smart_resize,
513
+ image_column=image_column,
514
+ split=split,
515
+ )
516
+
517
+ card = DatasetCard(card_content)
518
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
519
+
520
+ logger.info("✅ PaddleOCR-VL processing complete!")
521
+ logger.info(
522
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
523
+ )
524
+ logger.info(f"Processing time: {processing_time_str}")
525
+ logger.info(f"Task mode: {task_mode} - {TASK_DESCRIPTIONS[task_mode]}")
526
+
527
+
528
+ if __name__ == "__main__":
529
+ # Show example usage if no arguments
530
+ if len(sys.argv) == 1:
531
+ print("=" * 80)
532
+ print("PaddleOCR-VL Document Processing")
533
+ print("=" * 80)
534
+ print("\nUltra-compact 0.9B OCR model with task-specific capabilities")
535
+ print("\nFeatures:")
536
+ print("- 🎯 Smallest OCR model - Only 0.9B parameters")
537
+ print("- 📝 OCR mode - General text extraction")
538
+ print("- 📊 Table mode - HTML table recognition")
539
+ print("- 📐 Formula mode - LaTeX mathematical notation")
540
+ print("- 📈 Chart mode - Structured chart analysis")
541
+ print("- 🌍 Multilingual support")
542
+ print("- ⚡ Fast initialization and inference")
543
+ print("- 🔧 Based on ERNIE-4.5 (unique architecture)")
544
+ print("\nTask Modes:")
545
+ for mode, description in TASK_DESCRIPTIONS.items():
546
+ print(f" {mode:8} - {description}")
547
+ print("\nExample usage:")
548
+ print("\n1. Basic OCR (default mode):")
549
+ print(" uv run paddleocr-vl.py input-dataset output-dataset")
550
+ print("\n2. Table extraction:")
551
+ print(" uv run paddleocr-vl.py docs tables-extracted --task-mode table")
552
+ print("\n3. Formula recognition:")
553
+ print(
554
+ " uv run paddleocr-vl.py papers formulas --task-mode formula --batch-size 32"
555
+ )
556
+ print("\n4. Chart analysis:")
557
+ print(" uv run paddleocr-vl.py diagrams charts-analyzed --task-mode chart")
558
+ print("\n5. Test with small sample:")
559
+ print(" uv run paddleocr-vl.py dataset test --max-samples 10 --shuffle")
560
+ print("\n6. Running on HF Jobs:")
561
+ print(" hf jobs uv run --flavor l4x1 \\")
562
+ print(
563
+ ' -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \\'
564
+ )
565
+ print(" -e HF_HUB_ENABLE_HF_TRANSFER=1 \\")
566
+ print(
567
+ " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \\"
568
+ )
569
+ print(" input-dataset output-dataset --task-mode ocr")
570
+ print("\n" + "=" * 80)
571
+ print("\nFor full help, run: uv run paddleocr-vl.py --help")
572
+ sys.exit(0)
573
+
574
+ parser = argparse.ArgumentParser(
575
+ description="Document processing using PaddleOCR-VL (0.9B task-specific model)",
576
+ formatter_class=argparse.RawDescriptionHelpFormatter,
577
+ epilog="""
578
+ Task Modes:
579
+ ocr General text extraction to markdown (default)
580
+ table Table extraction to HTML format
581
+ formula Mathematical formula recognition to LaTeX
582
+ chart Chart and diagram analysis
583
+
584
+ Examples:
585
+ # Basic text OCR
586
+ uv run paddleocr-vl.py my-docs analyzed-docs
587
+
588
+ # Extract tables from documents
589
+ uv run paddleocr-vl.py papers tables --task-mode table
590
+
591
+ # Recognize mathematical formulas
592
+ uv run paddleocr-vl.py textbooks formulas --task-mode formula
593
+
594
+ # Analyze charts and diagrams
595
+ uv run paddleocr-vl.py reports charts --task-mode chart
596
+
597
+ # Test with random sampling
598
+ uv run paddleocr-vl.py large-dataset test --max-samples 50 --shuffle --task-mode ocr
599
+
600
+ # Disable smart resize for original resolution
601
+ uv run paddleocr-vl.py images output --no-smart-resize
602
+ """,
603
+ )
604
+
605
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
606
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
607
+ parser.add_argument(
608
+ "--image-column",
609
+ default="image",
610
+ help="Column containing images (default: image)",
611
+ )
612
+ parser.add_argument(
613
+ "--batch-size",
614
+ type=int,
615
+ default=16,
616
+ help="Batch size for processing (default: 16)",
617
+ )
618
+ parser.add_argument(
619
+ "--task-mode",
620
+ choices=list(TASK_MODES.keys()),
621
+ default="ocr",
622
+ help="Task type: ocr (default), table, formula, or chart",
623
+ )
624
+ parser.add_argument(
625
+ "--max-model-len",
626
+ type=int,
627
+ default=8192,
628
+ help="Maximum model context length (default: 8192)",
629
+ )
630
+ parser.add_argument(
631
+ "--max-tokens",
632
+ type=int,
633
+ default=4096,
634
+ help="Maximum tokens to generate (default: 4096)",
635
+ )
636
+ parser.add_argument(
637
+ "--temperature",
638
+ type=float,
639
+ default=0.0,
640
+ help="Sampling temperature (default: 0.0 for deterministic)",
641
+ )
642
+ parser.add_argument(
643
+ "--gpu-memory-utilization",
644
+ type=float,
645
+ default=0.8,
646
+ help="GPU memory utilization (default: 0.8)",
647
+ )
648
+ parser.add_argument(
649
+ "--no-smart-resize",
650
+ action="store_true",
651
+ help="Disable PaddleOCR-VL's smart resize, use original image size",
652
+ )
653
+ parser.add_argument("--hf-token", help="Hugging Face API token")
654
+ parser.add_argument(
655
+ "--split", default="train", help="Dataset split to use (default: train)"
656
+ )
657
+ parser.add_argument(
658
+ "--max-samples",
659
+ type=int,
660
+ help="Maximum number of samples to process (for testing)",
661
+ )
662
+ parser.add_argument(
663
+ "--private", action="store_true", help="Make output dataset private"
664
+ )
665
+ parser.add_argument(
666
+ "--shuffle", action="store_true", help="Shuffle dataset before processing"
667
+ )
668
+ parser.add_argument(
669
+ "--seed",
670
+ type=int,
671
+ default=42,
672
+ help="Random seed for shuffling (default: 42)",
673
+ )
674
+ parser.add_argument(
675
+ "--output-column",
676
+ help="Column name for output (default: paddleocr_[task_mode])",
677
+ )
678
+
679
+ args = parser.parse_args()
680
+
681
+ main(
682
+ input_dataset=args.input_dataset,
683
+ output_dataset=args.output_dataset,
684
+ image_column=args.image_column,
685
+ batch_size=args.batch_size,
686
+ task_mode=args.task_mode,
687
+ max_model_len=args.max_model_len,
688
+ max_tokens=args.max_tokens,
689
+ temperature=args.temperature,
690
+ gpu_memory_utilization=args.gpu_memory_utilization,
691
+ apply_smart_resize=not args.no_smart_resize,
692
+ hf_token=args.hf_token,
693
+ split=args.split,
694
+ max_samples=args.max_samples,
695
+ private=args.private,
696
+ shuffle=args.shuffle,
697
+ seed=args.seed,
698
+ output_column=args.output_column,
699
+ )
rolm-ocr.py ADDED
@@ -0,0 +1,517 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch", # Added for CUDA check
11
+ # ]
12
+ #
13
+ # ///
14
+
15
+ """
16
+ Extract text from document images using RolmOCR with vLLM.
17
+
18
+ This script processes images through the RolmOCR model to extract
19
+ plain text content, ideal for general-purpose OCR tasks.
20
+
21
+ Features:
22
+ - Fast and efficient text extraction
23
+ - General-purpose document OCR
24
+ - Based on Qwen2.5-VL-7B architecture
25
+ - Optimized for batch processing with vLLM
26
+ """
27
+
28
+ import argparse
29
+ import base64
30
+ import io
31
+ import json
32
+ import logging
33
+ import os
34
+ import sys
35
+ from typing import Any, Dict, List, Union
36
+
37
+ import torch
38
+ from datasets import load_dataset
39
+ from huggingface_hub import DatasetCard, login
40
+ from PIL import Image
41
+ from toolz import partition_all
42
+ from tqdm.auto import tqdm
43
+ from vllm import LLM, SamplingParams
44
+ from datetime import datetime
45
+
46
+ logging.basicConfig(level=logging.INFO)
47
+ logger = logging.getLogger(__name__)
48
+
49
+
50
+ def check_cuda_availability():
51
+ """Check if CUDA is available and exit if not."""
52
+ if not torch.cuda.is_available():
53
+ logger.error("CUDA is not available. This script requires a GPU.")
54
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
55
+ sys.exit(1)
56
+ else:
57
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
58
+
59
+
60
+ def make_ocr_message(
61
+ image: Union[Image.Image, Dict[str, Any], str],
62
+ prompt: str = "Return the plain text representation of this document as if you were reading it naturally.\n",
63
+ ) -> List[Dict]:
64
+ """Create chat message for OCR processing."""
65
+ # Convert to PIL Image if needed
66
+ if isinstance(image, Image.Image):
67
+ pil_img = image
68
+ elif isinstance(image, dict) and "bytes" in image:
69
+ pil_img = Image.open(io.BytesIO(image["bytes"]))
70
+ elif isinstance(image, str):
71
+ pil_img = Image.open(image)
72
+ else:
73
+ raise ValueError(f"Unsupported image type: {type(image)}")
74
+
75
+ # Convert to base64 data URI
76
+ buf = io.BytesIO()
77
+ pil_img.save(buf, format="PNG")
78
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
79
+
80
+ # Return message in vLLM format
81
+ return [
82
+ {
83
+ "role": "user",
84
+ "content": [
85
+ {"type": "image_url", "image_url": {"url": data_uri}},
86
+ {"type": "text", "text": prompt},
87
+ ],
88
+ }
89
+ ]
90
+
91
+
92
+ def create_dataset_card(
93
+ source_dataset: str,
94
+ model: str,
95
+ num_samples: int,
96
+ processing_time: str,
97
+ output_column: str,
98
+ batch_size: int,
99
+ max_model_len: int,
100
+ max_tokens: int,
101
+ gpu_memory_utilization: float,
102
+ image_column: str = "image",
103
+ split: str = "train",
104
+ ) -> str:
105
+ """Create a dataset card documenting the OCR process."""
106
+ model_name = model.split("/")[-1]
107
+
108
+ return f"""---
109
+ viewer: false
110
+ tags:
111
+ - ocr
112
+ - text-extraction
113
+ - rolmocr
114
+ - uv-script
115
+ - generated
116
+ ---
117
+
118
+ # OCR Text Extraction using {model_name}
119
+
120
+ This dataset contains extracted text from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using RolmOCR.
121
+
122
+ ## Processing Details
123
+
124
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
125
+ - **Model**: [{model}](https://huggingface.co/{model})
126
+ - **Number of Samples**: {num_samples:,}
127
+ - **Processing Time**: {processing_time}
128
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
129
+
130
+ ### Configuration
131
+
132
+ - **Image Column**: `{image_column}`
133
+ - **Output Column**: `{output_column}`
134
+ - **Dataset Split**: `{split}`
135
+ - **Batch Size**: {batch_size}
136
+ - **Max Model Length**: {max_model_len:,} tokens
137
+ - **Max Output Tokens**: {max_tokens:,}
138
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
139
+
140
+ ## Model Information
141
+
142
+ RolmOCR is a fast, general-purpose OCR model based on the Qwen2.5-VL-7B architecture. It extracts plain text from document images and is optimized for high-throughput batch inference with vLLM.
143
+
144
+ ## Dataset Structure
145
+
146
+ The dataset contains all original columns plus:
147
+ - `{output_column}`: The extracted text from each image
148
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
149
+
150
+ ## Usage
151
+
152
+ ```python
153
+ from datasets import load_dataset
154
+ import json
155
+
156
+ # Load the dataset
157
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
158
+
159
+ # Access the extracted text
160
+ for example in dataset:
161
+ print(example["{output_column}"])
162
+ break
163
+
164
+ # View all OCR models applied to this dataset
165
+ inference_info = json.loads(dataset[0]["inference_info"])
166
+ for info in inference_info:
167
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
168
+ ```
169
+
170
+ ## Reproduction
171
+
172
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) RolmOCR script:
173
+
174
+ ```bash
175
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/rolm-ocr.py \\
176
+ {source_dataset} \\
177
+ <output-dataset> \\
178
+ --image-column {image_column} \\
179
+ --batch-size {batch_size} \\
180
+ --max-model-len {max_model_len} \\
181
+ --max-tokens {max_tokens} \\
182
+ --gpu-memory-utilization {gpu_memory_utilization}
183
+ ```
184
+
185
+ ## Performance
186
+
187
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
188
+ - **GPU Configuration**: vLLM with {gpu_memory_utilization:.0%} GPU memory utilization
189
+
190
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
191
+ """
192
+
193
+
194
+ def main(
195
+ input_dataset: str,
196
+ output_dataset: str,
197
+ image_column: str = "image",
198
+ batch_size: int = 16,
199
+ model: str = "reducto/RolmOCR",
200
+ max_model_len: int = 16384,
201
+ max_tokens: int = 8192,
202
+ gpu_memory_utilization: float = 0.8,
203
+ hf_token: str = None,
204
+ split: str = "train",
205
+ max_samples: int = None,
206
+ private: bool = False,
207
+ output_column: str = None,
208
+ shuffle: bool = False,
209
+ seed: int = 42,
210
+ ):
211
+ """Process images from HF dataset through OCR model."""
212
+
213
+ # Check CUDA availability first
214
+ check_cuda_availability()
215
+
216
+ # Track processing start time
217
+ start_time = datetime.now()
218
+
219
+ # Enable HF_TRANSFER for faster downloads
220
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
221
+
222
+ # Login to HF if token provided
223
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
224
+ if HF_TOKEN:
225
+ login(token=HF_TOKEN)
226
+
227
+ # Load dataset
228
+ logger.info(f"Loading dataset: {input_dataset}")
229
+ dataset = load_dataset(input_dataset, split=split)
230
+
231
+ # Set output column name dynamically if not provided
232
+ if output_column is None:
233
+ # Extract model name from path (e.g., "reducto/RolmOCR" -> "rolmocr")
234
+ model_name = model.split("/")[-1].lower().replace("-", "_")
235
+ output_column = f"{model_name}_text"
236
+ logger.info(f"Using dynamic output column name: {output_column}")
237
+
238
+ # Validate image column
239
+ if image_column not in dataset.column_names:
240
+ raise ValueError(
241
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
242
+ )
243
+
244
+ # Shuffle if requested
245
+ if shuffle:
246
+ logger.info(f"Shuffling dataset with seed {seed}")
247
+ dataset = dataset.shuffle(seed=seed)
248
+
249
+ # Limit samples if requested
250
+ if max_samples:
251
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
252
+ logger.info(f"Limited to {len(dataset)} samples")
253
+
254
+ # Initialize vLLM
255
+ logger.info(f"Initializing vLLM with model: {model}")
256
+ llm = LLM(
257
+ model=model,
258
+ trust_remote_code=True,
259
+ max_model_len=max_model_len,
260
+ gpu_memory_utilization=gpu_memory_utilization,
261
+ limit_mm_per_prompt={"image": 1},
262
+ )
263
+
264
+ sampling_params = SamplingParams(
265
+ temperature=0.0, # Deterministic for OCR
266
+ max_tokens=max_tokens,
267
+ )
268
+
269
+ # Process images in batches
270
+ all_text = []
271
+
272
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
273
+
274
+ # Process in batches to avoid memory issues
275
+ for batch_indices in tqdm(
276
+ partition_all(batch_size, range(len(dataset))),
277
+ total=(len(dataset) + batch_size - 1) // batch_size,
278
+ desc="OCR processing",
279
+ ):
280
+ batch_indices = list(batch_indices)
281
+ batch_images = [dataset[i][image_column] for i in batch_indices]
282
+
283
+ try:
284
+ # Create messages for batch
285
+ batch_messages = [make_ocr_message(img) for img in batch_images]
286
+
287
+ # Process with vLLM
288
+ outputs = llm.chat(batch_messages, sampling_params)
289
+
290
+ # Extract text from outputs
291
+ for output in outputs:
292
+ text = output.outputs[0].text.strip()
293
+ all_text.append(text)
294
+
295
+ except Exception as e:
296
+ logger.error(f"Error processing batch: {e}")
297
+ # Add error placeholders for failed batch
298
+ all_text.extend(["[OCR FAILED]"] * len(batch_images))
299
+
300
+ # Add text column to dataset
301
+ logger.info(f"Adding {output_column} column to dataset")
302
+ dataset = dataset.add_column(output_column, all_text)
303
+
304
+ # Handle inference_info tracking
305
+ logger.info("Updating inference_info...")
306
+
307
+ # Check for existing inference_info
308
+ if "inference_info" in dataset.column_names:
309
+ # Parse existing info from first row (all rows have same info)
310
+ try:
311
+ existing_info = json.loads(dataset[0]["inference_info"])
312
+ if not isinstance(existing_info, list):
313
+ existing_info = [existing_info] # Convert old format to list
314
+ except (json.JSONDecodeError, TypeError):
315
+ existing_info = []
316
+ # Remove old column to update it
317
+ dataset = dataset.remove_columns(["inference_info"])
318
+ else:
319
+ existing_info = []
320
+
321
+ # Add new inference info
322
+ new_info = {
323
+ "column_name": output_column,
324
+ "model_id": model,
325
+ "processing_date": datetime.now().isoformat(),
326
+ "batch_size": batch_size,
327
+ "max_tokens": max_tokens,
328
+ "gpu_memory_utilization": gpu_memory_utilization,
329
+ "max_model_len": max_model_len,
330
+ "script": "rolm-ocr.py",
331
+ "script_version": "1.0.0",
332
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/rolm-ocr.py"
333
+ }
334
+ existing_info.append(new_info)
335
+
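+ # Illustrative shape of the stored JSON (values hypothetical): one entry is
+ # appended per OCR pass over the dataset, e.g.
+ # [{"column_name": "rolmocr_text", "model_id": "reducto/RolmOCR",
+ #   "processing_date": "2025-01-01T12:00:00", "batch_size": 16, ...}]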
336
+ # Add updated inference_info column
337
+ info_json = json.dumps(existing_info, ensure_ascii=False)
338
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
339
+
340
+ # Push to hub
341
+ logger.info(f"Pushing to {output_dataset}")
342
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
343
+
344
+ # Calculate processing time
345
+ end_time = datetime.now()
346
+ processing_duration = end_time - start_time
347
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
348
+
349
+ # Create and push dataset card
350
+ logger.info("Creating dataset card...")
351
+ card_content = create_dataset_card(
352
+ source_dataset=input_dataset,
353
+ model=model,
354
+ num_samples=len(dataset),
355
+ processing_time=processing_time,
356
+ output_column=output_column,
357
+ batch_size=batch_size,
358
+ max_model_len=max_model_len,
359
+ max_tokens=max_tokens,
360
+ gpu_memory_utilization=gpu_memory_utilization,
361
+ image_column=image_column,
362
+ split=split,
363
+ )
364
+
365
+ card = DatasetCard(card_content)
366
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
367
+ logger.info("✅ Dataset card created and pushed!")
368
+
369
+ logger.info("✅ OCR conversion complete!")
370
+ logger.info(
371
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
372
+ )
373
+
374
+
375
+ if __name__ == "__main__":
376
+ # Show example usage if no arguments
377
+ if len(sys.argv) == 1:
378
+ print("=" * 80)
379
+ print("RolmOCR Document Text Extraction")
380
+ print("=" * 80)
381
+ print("\nThis script extracts plain text from document images using")
382
+ print("the RolmOCR model with vLLM acceleration.")
383
+ print("\nFeatures:")
384
+ print("- Fast and efficient text extraction")
385
+ print("- General-purpose document OCR")
386
+ print("- Based on Qwen2.5-VL-7B architecture")
387
+ print("- Optimized for batch processing")
388
+ print("\nExample usage:")
389
+ print("\n1. Basic OCR conversion:")
390
+ print(" uv run rolm-ocr.py document-images extracted-text")
391
+ print("\n2. With custom settings:")
392
+ print(" uv run rolm-ocr.py scanned-docs ocr-output \\")
393
+ print(" --image-column page \\")
394
+ print(" --batch-size 8 \\")
395
+ print(" --gpu-memory-utilization 0.9")
396
+ print("\n3. Process a subset for testing:")
397
+ print(" uv run rolm-ocr.py large-dataset test-output --max-samples 10")
398
+ print("\n4. Random sample from ordered dataset:")
399
+ print(" uv run rolm-ocr.py ordered-dataset random-test --max-samples 50 --shuffle")
400
+ print("\n5. Running on HF Jobs:")
401
+ print(" hf jobs uv run --flavor l4x1 \\")
402
+ print(" -e HF_TOKEN=$(python3 -c \"from huggingface_hub import get_token; print(get_token())\") \\")
403
+ print(
404
+ " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/rolm-ocr.py \\"
405
+ )
406
+ print(" your-document-dataset \\")
407
+ print(" your-text-output")
408
+ print("\n" + "=" * 80)
409
+ print("\nFor full help, run: uv run rolm-ocr.py --help")
410
+ sys.exit(0)
411
+
412
+ parser = argparse.ArgumentParser(
413
+ description="OCR images to text using RolmOCR",
414
+ formatter_class=argparse.RawDescriptionHelpFormatter,
415
+ epilog="""
416
+ Examples:
417
+ # Basic usage
418
+ uv run rolm-ocr.py my-images-dataset ocr-results
419
+
420
+ # With specific image column
421
+ uv run rolm-ocr.py documents extracted-text --image-column scan
422
+
423
+ # Process subset for testing
424
+ uv run rolm-ocr.py large-dataset test-output --max-samples 100
425
+
426
+ # Random sample of 100 images
427
+ uv run rolm-ocr.py ordered-dataset random-sample --max-samples 100 --shuffle
428
+
429
+ # Custom output column name (default: rolmocr_text)
430
+ uv run rolm-ocr.py images texts --output-column ocr_text
431
+ """,
432
+ )
433
+
434
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
435
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
436
+ parser.add_argument(
437
+ "--image-column",
438
+ default="image",
439
+ help="Column containing images (default: image)",
440
+ )
441
+ parser.add_argument(
442
+ "--batch-size",
443
+ type=int,
444
+ default=16,
445
+ help="Batch size for processing (default: 16)",
446
+ )
447
+ parser.add_argument(
448
+ "--model",
449
+ default="reducto/RolmOCR",
450
+ help="Model to use (default: reducto/RolmOCR)",
451
+ )
452
+ parser.add_argument(
453
+ "--max-model-len",
454
+ type=int,
455
+ default=16384,
456
+ help="Maximum model context length (default: 16384)",
457
+ )
458
+ parser.add_argument(
459
+ "--max-tokens",
460
+ type=int,
461
+ default=8192,
462
+ help="Maximum tokens to generate (default: 8192)",
463
+ )
464
+ parser.add_argument(
465
+ "--gpu-memory-utilization",
466
+ type=float,
467
+ default=0.8,
468
+ help="GPU memory utilization (default: 0.8)",
469
+ )
470
+ parser.add_argument("--hf-token", help="Hugging Face API token")
471
+ parser.add_argument(
472
+ "--split", default="train", help="Dataset split to use (default: train)"
473
+ )
474
+ parser.add_argument(
475
+ "--max-samples",
476
+ type=int,
477
+ help="Maximum number of samples to process (for testing)",
478
+ )
479
+ parser.add_argument(
480
+ "--private", action="store_true", help="Make output dataset private"
481
+ )
482
+ parser.add_argument(
483
+ "--output-column",
484
+ default=None,
485
+ help="Name of the output column for extracted text (default: auto-generated from model name)",
486
+ )
487
+ parser.add_argument(
488
+ "--shuffle",
489
+ action="store_true",
490
+ help="Shuffle the dataset before processing (useful for random sampling)",
491
+ )
492
+ parser.add_argument(
493
+ "--seed",
494
+ type=int,
495
+ default=42,
496
+ help="Random seed for shuffling (default: 42)",
497
+ )
498
+
499
+ args = parser.parse_args()
500
+
501
+ main(
502
+ input_dataset=args.input_dataset,
503
+ output_dataset=args.output_dataset,
504
+ image_column=args.image_column,
505
+ batch_size=args.batch_size,
506
+ model=args.model,
507
+ max_model_len=args.max_model_len,
508
+ max_tokens=args.max_tokens,
509
+ gpu_memory_utilization=args.gpu_memory_utilization,
510
+ hf_token=args.hf_token,
511
+ split=args.split,
512
+ max_samples=args.max_samples,
513
+ private=args.private,
514
+ output_column=args.output_column,
515
+ shuffle=args.shuffle,
516
+ seed=args.seed,
517
+ )
smoldocling-ocr.py ADDED
@@ -0,0 +1,580 @@
1
+ # /// script
2
+ # requires-python = ">=3.11"
3
+ # dependencies = [
4
+ # "datasets",
5
+ # "huggingface-hub[hf_transfer]",
6
+ # "pillow",
7
+ # "vllm",
8
+ # "tqdm",
9
+ # "toolz",
10
+ # "torch", # Added for CUDA check
11
+ # "docling-core", # For DocTags conversion
12
+ # ]
13
+ #
14
+ # ///
15
+
16
+ """
17
+ Extract structured documents using SmolDocling-256M with vLLM.
18
+
19
+ This script processes images through the SmolDocling model to extract
20
+ structured document content with DocTags format, ideal for documents
21
+ with code, formulas, tables, and complex layouts.
22
+
23
+ Features:
24
+ - Ultra-compact 256M parameter model
25
+ - DocTags format for efficient representation
26
+ - Code block recognition with indentation
27
+ - Mathematical formula detection
28
+ - Table and chart extraction
29
+ - Layout preservation with bounding boxes
30
+ """
31
+
32
+ import argparse
33
+ import base64
34
+ import io
35
+ import json
36
+ import logging
37
+ import os
38
+ import re
39
+ import sys
40
+ from typing import Any, Dict, List, Union
41
+ from datetime import datetime
42
+
43
+ import torch
44
+ from datasets import load_dataset
45
+ from docling_core.types.doc import DoclingDocument
46
+ from docling_core.types.doc.document import DocTagsDocument
47
+ from huggingface_hub import DatasetCard, login
48
+ from PIL import Image
49
+ from toolz import partition_all
50
+ from tqdm.auto import tqdm
51
+ from vllm import LLM, SamplingParams
52
+
53
+ logging.basicConfig(level=logging.INFO)
54
+ logger = logging.getLogger(__name__)
55
+
56
+
57
+ def check_cuda_availability():
58
+ """Check if CUDA is available and exit if not."""
59
+ if not torch.cuda.is_available():
60
+ logger.error("CUDA is not available. This script requires a GPU.")
61
+ logger.error("Please run on a machine with a CUDA-capable GPU.")
62
+ sys.exit(1)
63
+ else:
64
+ logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
65
+
66
+
67
+ def prepare_llm_input(
68
+ image: Union[Image.Image, Dict[str, Any], str],
69
+ prompt_text: str = "Convert page to Docling.",
70
+ ) -> Dict:
71
+ """Prepare input for vLLM processing."""
72
+ # Convert to PIL Image if needed
73
+ if isinstance(image, Image.Image):
74
+ pil_img = image.convert("RGB")
75
+ elif isinstance(image, dict) and "bytes" in image:
76
+ pil_img = Image.open(io.BytesIO(image["bytes"])).convert("RGB")
77
+ elif isinstance(image, str):
78
+ pil_img = Image.open(image).convert("RGB")
79
+ else:
80
+ raise ValueError(f"Unsupported image type: {type(image)}")
81
+
82
+ # Create chat template - exact format from the example
83
+ chat_template = (
84
+ f"<|im_start|>User:<image>{prompt_text}<end_of_utterance>\nAssistant:"
85
+ )
86
+
87
+ # Return in the format expected by vLLM generate
88
+ return {"prompt": chat_template, "multi_modal_data": {"image": pil_img}}
89
+
90
+
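+ # Illustrative sketch (not executed anywhere in this script): prepare_llm_input
+ # returns the prompt/multi-modal dict that vLLM's generate API expects, exactly
+ # as the batch loop in main() uses it:
+ #   outputs = llm.generate([prepare_llm_input(img)], sampling_params=sampling_params)
+ #   doctags = outputs[0].outputs[0].text.strip()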
91
+ def convert_doctags_to_markdown(doctags_output: str) -> str:
92
+ """Convert DocTags output to markdown format."""
93
+ # For now, just return the raw output as-is
94
+ # We'll focus on getting the basic vLLM inference working first
95
+ return doctags_output.strip()
96
+
97
+
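+ # Sketch of a real DocTags -> Markdown conversion with docling-core (already a
+ # declared dependency and imported above). Illustrative and unused here; it
+ # assumes the page image is available alongside the model output, since
+ # docling-core pairs DocTags with images when rebuilding the document:
+ #
+ # def doctags_to_markdown(doctags: str, page_image: Image.Image) -> str:
+ #     doctags_doc = DocTagsDocument.from_doctags_and_image_pairs([doctags], [page_image])
+ #     doc = DoclingDocument(name="Document")
+ #     doc.load_from_doctags(doctags_doc)
+ #     return doc.export_to_markdown()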
98
+ def create_dataset_card(
99
+ source_dataset: str,
100
+ model: str,
101
+ num_samples: int,
102
+ processing_time: str,
103
+ output_column: str,
104
+ output_format: str,
105
+ batch_size: int,
106
+ max_model_len: int,
107
+ max_tokens: int,
108
+ gpu_memory_utilization: float,
109
+ image_column: str = "image",
110
+ split: str = "train",
111
+ ) -> str:
112
+ """Create a dataset card documenting the OCR process."""
113
+ model_name = model.split("/")[-1]
114
+
115
+ return f"""---
116
+ tags:
117
+ - ocr
118
+ - document-processing
119
+ - smoldocling
120
+ - doctags
121
+ - structured-extraction
122
+ - uv-script
123
+ - generated
124
+ ---
125
+
126
+ # Document Processing using {model_name}
127
+
128
+ This dataset contains structured document content extracted from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using SmolDocling.
129
+
130
+ ## Processing Details
131
+
132
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
133
+ - **Model**: [{model}](https://huggingface.co/{model})
134
+ - **Number of Samples**: {num_samples:,}
135
+ - **Processing Time**: {processing_time}
136
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
137
+
138
+ ### Configuration
139
+
140
+ - **Image Column**: `{image_column}`
141
+ - **Output Column**: `{output_column}`
142
+ - **Output Format**: {output_format}
143
+ - **Dataset Split**: `{split}`
144
+ - **Batch Size**: {batch_size}
145
+ - **Max Model Length**: {max_model_len:,} tokens
146
+ - **Max Output Tokens**: {max_tokens:,}
147
+ - **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
148
+
149
+ ## Model Information
150
+
151
+ SmolDocling-256M is an ultra-compact multimodal model that excels at:
152
+ - 💻 **Code Recognition** - Detects and formats code blocks with proper indentation
153
+ - 🔢 **Formula Recognition** - Identifies and processes mathematical expressions
154
+ - 📊 **Tables & Charts** - Extracts structured data from tables and charts
155
+ - 📐 **Layout Preservation** - Maintains document structure with bounding boxes
156
+ - 🏷️ **DocTags Format** - Efficient minimal representation for documents
157
+ - ⚡ **Fast Inference** - Only 256M parameters for quick processing
158
+
159
+ ## Dataset Structure
160
+
161
+ The dataset contains all original columns plus:
162
+ - `{output_column}`: The extracted {"DocTags markup" if output_format == "doctags" else "markdown"} from each image
163
+ - `inference_info`: JSON list tracking all OCR models applied to this dataset
164
+
165
+ ## Usage
166
+
167
+ ```python
168
+ from datasets import load_dataset
169
+ import json
170
+ {"from docling_core.types.doc import DoclingDocument" if output_format == "doctags" else ""}
171
+ {"from docling_core.types.doc.document import DocTagsDocument" if output_format == "doctags" else ""}
172
+
173
+ # Load the dataset
174
+ dataset = load_dataset("{{output_dataset_id}}", split="{split}")
175
+
176
+ # Access the extracted content
177
+ for example in dataset:
178
+ {"# Parse DocTags and convert to desired format" if output_format == "doctags" else ""}
179
+ {f"doc_tags = DocTagsDocument.model_validate_json(example['{output_column}'])" if output_format == "doctags" else f"print(example['{output_column}'])"}
180
+ {"doc = DoclingDocument.from_doctags(doc_tags)" if output_format == "doctags" else ""}
181
+ {"print(doc.export(format='md').text) # Or 'html', 'json'" if output_format == "doctags" else ""}
182
+ break
183
+
184
+ # View all OCR models applied to this dataset
185
+ inference_info = json.loads(dataset[0]["inference_info"])
186
+ for info in inference_info:
187
+ print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
188
+ ```
189
+
190
+ ## Reproduction
191
+
192
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) SmolDocling script:
193
+
194
+ ```bash
195
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/smoldocling-ocr.py \\
196
+ {source_dataset} \\
197
+ <output-dataset> \\
198
+ --image-column {image_column} \\
199
+ --output-format {output_format} \\
200
+ --batch-size {batch_size} \\
201
+ --max-model-len {max_model_len} \\
202
+ --max-tokens {max_tokens} \\
203
+ --gpu-memory-utilization {gpu_memory_utilization}
204
+ ```
205
+
206
+ ## Performance
207
+
208
+ - **Processing Speed**: ~{num_samples / (float(processing_time.split()[0]) * 60):.1f} images/second
209
+ - **Model Size**: 256M parameters (ultra-compact)
210
+ - **GPU Configuration**: vLLM with {gpu_memory_utilization:.0%} GPU memory utilization
211
+
212
+ Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
213
+ """
214
+
215
+
216
+ def main(
217
+ input_dataset: str,
218
+ output_dataset: str,
219
+ image_column: str = "image",
220
+ batch_size: int = 32,
221
+ model: str = "ds4sd/SmolDocling-256M-preview",
222
+ max_model_len: int = 8192,
223
+ max_tokens: int = 8192,
224
+ gpu_memory_utilization: float = 0.8,
225
+ hf_token: str = None,
226
+ split: str = "train",
227
+ max_samples: int = None,
228
+ private: bool = False,
229
+ output_column: str = None,
230
+ output_format: str = "markdown",
231
+ shuffle: bool = False,
232
+ seed: int = 42,
233
+ prompt: str = "Convert page to Docling.",
234
+ ):
235
+ """Process images from HF dataset through SmolDocling model."""
236
+
237
+ # Check CUDA availability first
238
+ check_cuda_availability()
239
+
240
+ # Track processing start time
241
+ start_time = datetime.now()
242
+
243
+ # Enable HF_TRANSFER for faster downloads
244
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
245
+
246
+ # Login to HF if token provided
247
+ HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
248
+ if HF_TOKEN:
249
+ login(token=HF_TOKEN)
250
+
251
+ # Load dataset
252
+ logger.info(f"Loading dataset: {input_dataset}")
253
+ dataset = load_dataset(input_dataset, split=split)
254
+
255
+ # Set output column name dynamically if not provided
256
+ if output_column is None:
257
+ # Extract model name from path (e.g., "ds4sd/SmolDocling-256M-preview" -> "smoldocling")
258
+ model_name = model.split("/")[-1].split("-")[0].lower()
259
+ output_column = f"{model_name}_text"
260
+ logger.info(f"Using dynamic output column name: {output_column}")
261
+
262
+ # Validate image column
263
+ if image_column not in dataset.column_names:
264
+ raise ValueError(
265
+ f"Column '{image_column}' not found. Available: {dataset.column_names}"
266
+ )
267
+
268
+ # Validate output format
269
+ if output_format not in ["markdown", "doctags"]:
270
+ raise ValueError(
271
+ f"Invalid output format '{output_format}'. Must be 'markdown' or 'doctags'"
272
+ )
273
+
274
+ # Shuffle if requested
275
+ if shuffle:
276
+ logger.info(f"Shuffling dataset with seed {seed}")
277
+ dataset = dataset.shuffle(seed=seed)
278
+
279
+ # Limit samples if requested
280
+ if max_samples:
281
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
282
+ logger.info(f"Limited to {len(dataset)} samples")
283
+
284
+ # Initialize vLLM
285
+ logger.info(f"Initializing vLLM with model: {model}")
286
+ llm = LLM(
287
+ model=model,
288
+ trust_remote_code=True,
289
+ max_model_len=max_model_len,
290
+ gpu_memory_utilization=gpu_memory_utilization,
291
+ limit_mm_per_prompt={"image": 1},
292
+ )
293
+
294
+ sampling_params = SamplingParams(
295
+ temperature=0.0, # Deterministic for OCR
296
+ max_tokens=max_tokens,
297
+ )
298
+
299
+ # Process images in batches
300
+ all_output = []
301
+
302
+ logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
303
+ logger.info(f"Output format: {output_format}")
304
+
305
+ # Process in batches to avoid memory issues
306
+ for batch_indices in tqdm(
307
+ partition_all(batch_size, range(len(dataset))),
308
+ total=(len(dataset) + batch_size - 1) // batch_size,
309
+ desc="OCR processing",
310
+ ):
311
+ batch_indices = list(batch_indices)
312
+ batch_images = [dataset[i][image_column] for i in batch_indices]
313
+
314
+ try:
315
+ # Prepare inputs for batch
316
+ batch_inputs = [prepare_llm_input(img, prompt) for img in batch_images]
317
+
318
+ # Process with vLLM using generate
319
+ outputs = llm.generate(batch_inputs, sampling_params=sampling_params)
320
+
321
+ # Extract text from outputs
322
+ for output in outputs:
323
+ raw_output = output.outputs[0].text.strip()
324
+
325
+ # Convert to markdown if requested (currently a pass-through of the raw DocTags)
326
+ if output_format == "markdown":
327
+ processed_output = convert_doctags_to_markdown(raw_output)
328
+ else:
329
+ processed_output = raw_output
330
+
331
+ all_output.append(processed_output)
332
+
333
+ except Exception as e:
334
+ logger.error(f"Error processing batch: {e}")
335
+ # Add error placeholders for failed batch
336
+ all_output.extend(["[OCR FAILED]"] * len(batch_images))
337
+
338
+ # Add output column to dataset
339
+ logger.info(f"Adding {output_column} column to dataset")
340
+ dataset = dataset.add_column(output_column, all_output)
341
+
342
+ # Handle inference_info tracking
343
+ logger.info("Updating inference_info...")
344
+
345
+ # Check for existing inference_info
346
+ if "inference_info" in dataset.column_names:
347
+ # Parse existing info from first row (all rows have same info)
348
+ try:
349
+ existing_info = json.loads(dataset[0]["inference_info"])
350
+ if not isinstance(existing_info, list):
351
+ existing_info = [existing_info] # Convert old format to list
352
+ except (json.JSONDecodeError, TypeError):
353
+ existing_info = []
354
+ # Remove old column to update it
355
+ dataset = dataset.remove_columns(["inference_info"])
356
+ else:
357
+ existing_info = []
358
+
359
+ # Add new inference info
360
+ new_info = {
361
+ "column_name": output_column,
362
+ "model_id": model,
363
+ "processing_date": datetime.now().isoformat(),
364
+ "batch_size": batch_size,
365
+ "max_tokens": max_tokens,
366
+ "gpu_memory_utilization": gpu_memory_utilization,
367
+ "max_model_len": max_model_len,
368
+ "output_format": output_format,
369
+ "prompt": prompt,
370
+ "script": "smoldocling-ocr.py",
371
+ "script_version": "1.0.0",
372
+ "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/smoldocling-ocr.py",
373
+ }
374
+ existing_info.append(new_info)
375
+
376
+ # Add updated inference_info column
377
+ info_json = json.dumps(existing_info, ensure_ascii=False)
378
+ dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
379
+
380
+ # Push to hub
381
+ logger.info(f"Pushing to {output_dataset}")
382
+ dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
383
+
384
+ # Calculate processing time
385
+ end_time = datetime.now()
386
+ processing_duration = end_time - start_time
387
+ processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
388
+
389
+ # Create and push dataset card
390
+ logger.info("Creating dataset card...")
391
+ card_content = create_dataset_card(
392
+ source_dataset=input_dataset,
393
+ model=model,
394
+ num_samples=len(dataset),
395
+ processing_time=processing_time,
396
+ output_column=output_column,
397
+ output_format=output_format,
398
+ batch_size=batch_size,
399
+ max_model_len=max_model_len,
400
+ max_tokens=max_tokens,
401
+ gpu_memory_utilization=gpu_memory_utilization,
402
+ image_column=image_column,
403
+ split=split,
404
+ )
405
+
406
+ card = DatasetCard(card_content)
407
+ card.push_to_hub(output_dataset, token=HF_TOKEN)
408
+ logger.info("✅ Dataset card created and pushed!")
409
+
410
+ logger.info("✅ OCR conversion complete!")
411
+ logger.info(
412
+ f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
413
+ )
414
+
415
+
416
+ if __name__ == "__main__":
417
+ # Show example usage if no arguments
418
+ if len(sys.argv) == 1:
419
+ print("=" * 80)
420
+ print("SmolDocling Ultra-Compact Document Processing")
421
+ print("=" * 80)
422
+ print("\nThis script extracts structured document content using")
423
+ print("the SmolDocling-256M model with vLLM acceleration.")
424
+ print("\nFeatures:")
425
+ print("- Ultra-compact 256M parameter model")
426
+ print("- DocTags format for efficient representation")
427
+ print("- Code block recognition with indentation")
428
+ print("- Mathematical formula detection")
429
+ print("- Table and chart extraction")
430
+ print("- Layout preservation with bounding boxes")
431
+ print("\nExample usage:")
432
+ print("\n1. Basic document conversion to markdown:")
433
+ print(" uv run smoldocling-ocr.py document-images extracted-docs")
434
+ print("\n2. Extract with DocTags format:")
435
+ print(" uv run smoldocling-ocr.py scientific-papers doc-analysis \\")
436
+ print(" --output-format doctags")
437
+ print("\n3. Custom settings:")
438
+ print(" uv run smoldocling-ocr.py code-docs structured-output \\")
439
+ print(" --image-column page \\")
440
+ print(" --batch-size 64 \\")
441
+ print(" --gpu-memory-utilization 0.9")
442
+ print("\n4. Process a subset for testing:")
443
+ print(" uv run smoldocling-ocr.py large-dataset test-output --max-samples 10")
444
+ print("\n5. Random sample from ordered dataset:")
445
+ print(
446
+ " uv run smoldocling-ocr.py ordered-dataset random-test --max-samples 50 --shuffle"
447
+ )
448
+ print("\n6. Running on HF Jobs:")
449
+ print(" hf jobs uv run --flavor l4x1 \\")
450
+ print(
451
+ ' -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \\'
452
+ )
453
+ print(
454
+ " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/smoldocling-ocr.py \\"
455
+ )
456
+ print(" your-document-dataset \\")
457
+ print(" your-structured-output")
458
+ print("\n" + "=" * 80)
459
+ print("\nFor full help, run: uv run smoldocling-ocr.py --help")
460
+ sys.exit(0)
461
+
462
+ parser = argparse.ArgumentParser(
463
+ description="Extract structured documents using SmolDocling",
464
+ formatter_class=argparse.RawDescriptionHelpFormatter,
465
+ epilog="""
466
+ Examples:
467
+ # Basic usage
468
+ uv run smoldocling-ocr.py my-images-dataset structured-output
469
+
470
+ # With DocTags format output
471
+ uv run smoldocling-ocr.py documents doc-analysis --output-format doctags
472
+
473
+ # Process subset for testing
474
+ uv run smoldocling-ocr.py large-dataset test-output --max-samples 100
475
+
476
+ # Random sample of 100 images
477
+ uv run smoldocling-ocr.py ordered-dataset random-sample --max-samples 100 --shuffle
478
+
479
+ # Custom output column name (default: smoldocling_text)
480
+ uv run smoldocling-ocr.py images texts --output-column extracted_content
481
+ """,
482
+ )
483
+
484
+ parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
485
+ parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
486
+ parser.add_argument(
487
+ "--image-column",
488
+ default="image",
489
+ help="Column containing images (default: image)",
490
+ )
491
+ parser.add_argument(
492
+ "--batch-size",
493
+ type=int,
494
+ default=32,
495
+ help="Batch size for processing (default: 32)",
496
+ )
497
+ parser.add_argument(
498
+ "--model",
499
+ default="ds4sd/SmolDocling-256M-preview",
500
+ help="Model to use (default: ds4sd/SmolDocling-256M-preview)",
501
+ )
502
+ parser.add_argument(
503
+ "--max-model-len",
504
+ type=int,
505
+ default=8192,
506
+ help="Maximum model context length (default: 8192)",
507
+ )
508
+ parser.add_argument(
509
+ "--max-tokens",
510
+ type=int,
511
+ default=8192,
512
+ help="Maximum tokens to generate (default: 8192)",
513
+ )
514
+ parser.add_argument(
515
+ "--gpu-memory-utilization",
516
+ type=float,
517
+ default=0.8,
518
+ help="GPU memory utilization (default: 0.8)",
519
+ )
520
+ parser.add_argument("--hf-token", help="Hugging Face API token")
521
+ parser.add_argument(
522
+ "--split", default="train", help="Dataset split to use (default: train)"
523
+ )
524
+ parser.add_argument(
525
+ "--max-samples",
526
+ type=int,
527
+ help="Maximum number of samples to process (for testing)",
528
+ )
529
+ parser.add_argument(
530
+ "--private", action="store_true", help="Make output dataset private"
531
+ )
532
+ parser.add_argument(
533
+ "--output-column",
534
+ default=None,
535
+ help="Name of the output column for extracted text (default: auto-generated from model name)",
536
+ )
537
+ parser.add_argument(
538
+ "--output-format",
539
+ default="markdown",
540
+ choices=["markdown", "doctags"],
541
+ help="Output format: 'markdown' or 'doctags' (default: markdown)",
542
+ )
543
+ parser.add_argument(
544
+ "--shuffle",
545
+ action="store_true",
546
+ help="Shuffle the dataset before processing (useful for random sampling)",
547
+ )
548
+ parser.add_argument(
549
+ "--seed",
550
+ type=int,
551
+ default=42,
552
+ help="Random seed for shuffling (default: 42)",
553
+ )
554
+ parser.add_argument(
555
+ "--prompt",
556
+ default="Convert page to Docling.",
557
+ help="Custom prompt for the model (default: 'Convert page to Docling.')",
558
+ )
559
+
560
+ args = parser.parse_args()
561
+
562
+ main(
563
+ input_dataset=args.input_dataset,
564
+ output_dataset=args.output_dataset,
565
+ image_column=args.image_column,
566
+ batch_size=args.batch_size,
567
+ model=args.model,
568
+ max_model_len=args.max_model_len,
569
+ max_tokens=args.max_tokens,
570
+ gpu_memory_utilization=args.gpu_memory_utilization,
571
+ hf_token=args.hf_token,
572
+ split=args.split,
573
+ max_samples=args.max_samples,
574
+ private=args.private,
575
+ output_column=args.output_column,
576
+ output_format=args.output_format,
577
+ shuffle=args.shuffle,
578
+ seed=args.seed,
579
+ prompt=args.prompt,
580
+ )