---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 953308
    num_examples: 1737
  download_size: 168993
  dataset_size: 953308
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc0-1.0
---
# RiddleBench
<div style="display: flex; gap: 10px;">
<a href="https://www.arxiv.org/abs/2510.24932">
<img src="https://img.shields.io/badge/arXiv-2510.24932-B31B1B" alt="arXiv">
</a>
</div>
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language understanding and generation tasks.
However, their proficiency in complex logical and deductive reasoning remains a critical area of investigation.
We introduce **RiddleBench**, a meticulously curated benchmark of 1,737 challenging puzzles designed to test diverse reasoning skills beyond simple pattern matching.
Unlike conventional QA datasets that often rely on factual recall or surface-level cues, RiddleBench focuses on non-trivial reasoning by presenting problems such as coding–decoding, seating arrangements, sequence prediction, and blood relation analysis.
By evaluating models on RiddleBench, researchers can gain deeper insights into their ability to handle abstract reasoning, commonsense inference, and structured problem solving — skills essential for robust and trustworthy AI systems.
---
## Dataset Structure
Each entry in the dataset consists of the following fields:
- `id`: Unique identifier for the riddle (1–1737)
- `type`: Category/type of riddle
- `question`: The riddle text
- `answer`: The ground-truth answer
The dataset can be directly loaded via Hugging Face Datasets.
---
## Type Distribution
The dataset covers four major categories of riddles:
| Type | Count |
|-------------------------|-------|
| Sequence Task | 1037 |
| Coding and Decoding Sum | 432 |
| Blood Relations | 146 |
| Seating Task | 122 |
| **Total** | 1737 |
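
The distribution can be reproduced from the loaded data itself. A minimal sketch using `collections.Counter` (assuming the split is downloaded via the `datasets` library):

```python
from collections import Counter

from datasets import load_dataset

# Tally the `type` column; the counts should match the table above.
dataset = load_dataset("ai4bharat/RiddleBench")["train"]
print(Counter(dataset["type"]))
```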
---
## Example Entry
```json
{
"id": 1051,
"type": "coding and decoding sum",
"question": "If 'CARING' is coded as 'EDVGKC', and 'SHARES' is coded as 'UKEPBO', then how will 'CASKET' be coded as in the same code? a) EDXIBP c) EDWPAI b) EDWIAP d) EDWIBP",
"answer": "d"
}
```
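For intuition, this riddle's code appears to shift the first three letters forward by 2, 3, and 4 alphabet positions and the last three backward by 2, 3, and 4. The sketch below (our reading of the pattern, not an official solution key) verifies the rule against both given encodings:

```python
def encode(word: str) -> str:
    # Hypothesized rule: +2, +3, +4 on the first three letters,
    # then -2, -3, -4 on the last three (wrapping around the alphabet).
    shifts = [2, 3, 4, -2, -3, -4]
    return "".join(
        chr((ord(c) - ord("A") + s) % 26 + ord("A"))
        for c, s in zip(word, shifts)
    )

assert encode("CARING") == "EDVGKC"
assert encode("SHARES") == "UKEPBO"
print(encode("CASKET"))  # EDWIBP, i.e. option d
```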
## Loading the Dataset
You can load the dataset directly from Hugging Face:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/RiddleBench")
print(dataset)
# Example access
print(dataset["train"][0])
```
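If you prefer tabular tooling, a split converts directly to a pandas DataFrame (a quick-inspection sketch, assuming pandas is installed):

```python
# Convert the train split to pandas for ad-hoc analysis.
df = dataset["train"].to_pandas()
print(df["type"].value_counts())
```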
## Load Datasets by Type
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("ai4bharat/RiddleBench")["train"]
# Function to filter riddles by type
def get_riddles_by_type(riddle_type: str, n: int = 5):
    """
    Return up to n riddles of the given type.

    Args:
        riddle_type (str): The type of riddles to filter (e.g., 'sequence task').
        n (int): Maximum number of riddles to return (default = 5).
    """
    filtered = [ex for ex in dataset if ex["type"].lower() == riddle_type.lower()]
    return filtered[:n]

# Example usage
coding_riddles = get_riddles_by_type("coding and decoding sum", n=3)
seating_riddles = get_riddles_by_type("seating task", n=3)

print("Coding & Decoding Sum Examples:")
for r in coding_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")

print("\nSeating Task Examples:")
for r in seating_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")
```
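As an alternative to the list comprehension above, the built-in `Dataset.filter` does the same job and returns a `Dataset` rather than a Python list. A minimal sketch (assuming the `type` strings lowercase to names like 'blood relations', consistent with the examples above):

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/RiddleBench")["train"]

# Keep only blood-relation riddles; expected size is 146 per the table above.
blood_relations = dataset.filter(lambda ex: ex["type"].lower() == "blood relations")
print(len(blood_relations))
```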
## Intended Use
RiddleBench is designed solely as a benchmark to evaluate model reasoning abilities.
It should not be used for training or fine-tuning models intended for deployment in real-world applications.
## Citation
If you use RiddleBench in your research, please cite the following:
```bibtex
@misc{riddlebench2025,
  title         = {RiddleBench: A New General Inference and Reasoning Assessment in LLMs},
  author        = {Deepon Halder and Alan Saji and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan},
  year          = {2025},
  eprint        = {2510.24932},
  archivePrefix = {arXiv},
  howpublished  = {\url{https://huggingface.co/datasets/ai4bharat/RiddleBench}}
}
```
## License
This dataset is released under the **CC0 1.0** license (public domain dedication).
You are free to use, modify, and distribute this dataset for any purpose; attribution is appreciated but not required.
## Contact
For questions, collaborations, or contributions, please reach out to the maintainers:
- Email 1: deeponh.2004@gmail.com
- Email 2: prajdabre@gmail.com