---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 953308
    num_examples: 1737
  download_size: 168993
  dataset_size: 953308
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc0-1.0
---
# RiddleBench
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language understanding and generation tasks.
However, their proficiency in complex logical and deductive reasoning remains a critical area of investigation.
We introduce RiddleBench, a meticulously curated benchmark of 1,737 challenging puzzles designed to test diverse reasoning skills beyond simple pattern matching.
Unlike conventional QA datasets that often rely on factual recall or surface-level cues, RiddleBench focuses on non-trivial reasoning by presenting problems such as coding–decoding, seating arrangements, sequence prediction, and blood relation analysis.
By evaluating models on RiddleBench, researchers can gain deeper insights into their ability to handle abstract reasoning, commonsense inference, and structured problem solving — skills essential for robust and trustworthy AI systems.
## Dataset Structure
Each entry in the dataset consists of the following fields:
- `id`: Unique identifier for the riddle (1–1737)
- `type`: Category/type of the riddle
- `question`: The riddle text
- `answer`: The ground-truth answer
The dataset can be directly loaded via Hugging Face Datasets.
### Type Distribution
The dataset covers four major categories of riddles:
| Type | Count |
|---|---|
| Sequence Task | 1037 |
| Coding and Decoding Sum | 432 |
| Blood Relations | 146 |
| Seating Task | 122 |
| Total | 1737 |
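The per-type counts above can be recomputed directly from the `type` field with `collections.Counter`. The list below is a small in-memory stand-in for the loaded split; in practice, replace it with `load_dataset("ai4bharat/RiddleBench")["train"]`:

```python
from collections import Counter

# Stand-in for dataset["train"]; each example is a dict with a "type" field,
# just like the entries in the actual split.
examples = [
    {"type": "sequence task"},
    {"type": "sequence task"},
    {"type": "coding and decoding sum"},
    {"type": "seating task"},
]

# Tally how many riddles fall into each category.
type_counts = Counter(ex["type"] for ex in examples)
print(type_counts.most_common())
```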
### Example Entry
```json
{
  "id": 1051,
  "type": "coding and decoding sum",
  "question": "If 'CARING' is coded as 'EDVGKC', and 'SHARES' is coded as 'UKEPBO', then how will 'CASKET' be coded as in the same code? a) EDXIBP c) EDWPAI b) EDWIAP d) EDWIBP",
  "answer": "d"
}
```
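Note that multiple-choice options are embedded in the `question` string and the `answer` field stores only the option letter. If you need the option texts separately, they can be pulled out with a small regex; `parse_options` below is a hypothetical helper (not part of the dataset), assuming options follow the `a) ... b) ...` pattern seen above:

```python
import re

def parse_options(question: str) -> dict:
    """Extract the lettered options embedded in a riddle's question
    string, mapping each letter (a-d) to its option text."""
    # Match a letter followed by ')', then capture text lazily up to
    # the next option marker or the end of the string.
    pattern = r"([a-d])\)\s*(.*?)(?=\s+[a-d]\)|$)"
    return {letter: text.strip() for letter, text in re.findall(pattern, question)}

question = ("If 'CARING' is coded as 'EDVGKC', and 'SHARES' is coded as "
            "'UKEPBO', then how will 'CASKET' be coded as in the same code? "
            "a) EDXIBP c) EDWPAI b) EDWIAP d) EDWIBP")
print(parse_options(question))
```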
## Loading the Dataset
You can load the dataset directly from Hugging Face:
```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/RiddleBench")
print(dataset)

# Example access
print(dataset["train"][0])
```
### Loading Riddles by Type
```python
from datasets import load_dataset

# Load the train split
dataset = load_dataset("ai4bharat/RiddleBench")["train"]

def get_riddles_by_type(riddle_type: str, n: int = 5):
    """Return up to n riddles of the given type.

    Args:
        riddle_type (str): The type of riddles to filter (e.g., 'sequence task').
        n (int): Maximum number of riddles to return (default 5).
    """
    filtered = [ex for ex in dataset if ex["type"].lower() == riddle_type.lower()]
    return filtered[:n]

# Example usage
coding_riddles = get_riddles_by_type("coding and decoding sum", n=3)
seating_riddles = get_riddles_by_type("seating task", n=3)

print("Coding & Decoding Sum Examples:")
for r in coding_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")

print("\nSeating Task Examples:")
for r in seating_riddles:
    print(f"Q: {r['question']} | A: {r['answer']}")
```
## Intended Use
RiddleBench is designed solely as a benchmark to evaluate model reasoning abilities. It should not be used for training or fine-tuning models intended for deployment in real-world applications.
## Citation
If you use RiddleBench in your research, please cite the following:
```bibtex
@misc{riddlebench2025,
  title        = {RiddleBench: A New General Inference and Reasoning Assessment in LLMs},
  author       = {Halder, Deepon and Saji, Alan and Dabre, Raj and Puduppully, Ratish and Kunchukuttan, Anoop},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/ai4bharat/RiddleBench}}
}
```
## License
This dataset is released under the CC0 license, placing it in the public domain. You are free to use, modify, and distribute it for any purpose; attribution is appreciated but not required.
## Contact
For questions, collaborations, or contributions, please reach out to the maintainers:
- Email 1 : [email protected]
- Email 2 : [email protected]