---
license: mit
configs:
- config_name: default
  data_files:
  - split: Epic
    path: epic.json
  - split: Ego4D
    path: ego4d.json
task_categories:
- visual-question-answering
- video-text-to-text
language:
- en
pretty_name: dave
size_categories:
- 1K<n<10K
---

# DAVE: Diagnostic benchmark for Audio Visual Evaluation

## Dataset Structure

The DAVE dataset consists of two main splits, `ego4d` and `epic`, corresponding to curated samples from the Ego4D and EPIC-KITCHENS datasets, respectively. Every example is structured to facilitate diagnostic evaluation of audio-visual models across multiple axes: visual, audio, temporal, and multimodal reasoning.

### Data Fields

Each example contains the following fields:

* **compressed_video_path**: Path to a compressed version of the raw video: the unedited video containing 4 events with the original audio.
* **overlayed_event_index**: Index of the event overlaid with an unrelated audio sound (0-indexed, corresponds to the position in the `events` list).
* **events**: List of dictionaries containing metadata about the events in the video:
  * `start`, `end`: Timestamps in the format `"HH:MM:SS.ffffff"`.
  * `duration`: Duration in seconds (float).
  * `narration`: Natural language description of the action.
  * `action`: Structured action annotation.
  * `raw_narration`: Original narration text.
* **event_video_path**: Path to the clip extracted from the overlayed event.
* **audio_class**: The audio class overlaid in this instance (e.g., `"crow"`, `"dog"`, `"car horn"`).
* **video_with_overlayed_audio_path**: Path to the video with audio overlayed on the specified event.
* **silent_video_path**: Path to the video without any audio.
* **overlayed_audio_path**: Path to the standalone audio clip extracted from the video with the overlayed audio.
* **video_id**: Identifier for the video.
* **participant_id**: Identifier for the subject or participant (present in the EPIC-KITCHENS split, `None` in the Ego4D split).
* **type**: Video type or category (e.g., `"regular"`, `"none_of_the_above_incorrect_audio"`, `"none_of_the_above_no_sound"`), indicating the type of sample.
* **choice_metadata**: Dictionary containing multiple-choice evaluation tasks with the following structure (see the access sketch below):
  * **audio_visual_alignment**: Audio-visual synchronization task
    - `choices`: List of 5 action descriptions (4 events + "none of the above")
    - `ground_truth`: Integer index of the correct choice (0-4)
  * **visual_only**: Visual-only action recognition task
    - `choices`: List of 5 action descriptions (4 events + "none of the above")
    - `ground_truth`: Integer index of the correct choice (0-4)
  * **audio_only**: Audio-only action recognition task
    - `choices`: List of 5 action descriptions (4 events + "none of the above")
    - `ground_truth`: Integer index of the correct choice (0-4)
  * **text_only**: Text-based reasoning task
    - `choices`: List of 5 action descriptions (4 events + "none of the above")
    - `ground_truth`: Integer index of the correct choice (0-4)
  * **temporal_ordering**: Temporal ordering of events task
    - `choices`: List of 4 action descriptions
    - `ground_truth`: List of letter strings representing the correct order (e.g., `['(D)', '(A)', '(B)', '(C)']`)
  * **action_recognition**: Single-label action recognition task
    - `choices`: List of 4 action descriptions
    - `ground_truth`: Single-element list containing the index of the correct action (e.g., `[3]`)
  * **audio_classification**: Audio classification task
    - `choices`: List of 4 audio class labels
    - `ground_truth`: Integer index of the correct audio class (0-3)
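The snippet below is a minimal sketch of how these fields might be accessed with the `datasets` library. The repo id is a placeholder, the split names follow the `configs` section above, and the timestamp helper assumes the `"HH:MM:SS.ffffff"` format documented for `events`.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-org/dave", split="Epic")
example = ds[0]

def to_seconds(ts: str) -> float:
    """Convert an "HH:MM:SS.ffffff" timestamp into seconds."""
    hours, minutes, seconds = ts.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Each example describes 4 events; durations can be recomputed from the timestamps.
for event in example["events"]:
    span = to_seconds(event["end"]) - to_seconds(event["start"])
    print(f'{event["narration"]}: {span:.2f}s')

# Note the differing ground_truth formats across the multiple-choice tasks.
meta = example["choice_metadata"]
print(meta["audio_visual_alignment"]["ground_truth"])  # integer index, 0-4
print(meta["temporal_ordering"]["ground_truth"])       # ordered letters, e.g. ['(D)', '(A)', '(B)', '(C)']
print(meta["action_recognition"]["ground_truth"])      # single-element list, e.g. [3]
```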
### Splits

* **epic**: Samples sourced and annotated from EPIC-KITCHENS.
* **ego4d**: Samples sourced and annotated from Ego4D.

Each split is structured identically in terms of fields, allowing for consistent benchmarking across domains.

## Bias, Risks, and Limitations

Since our dataset is built on top of EPIC-KITCHENS and Ego4D, we inherit all risks associated with these two datasets.

## Citation

```bibtex
@inproceedings{radevski2025dave,
  title={{DAVE}: Diagnostic benchmark for Audio Visual Evaluation},
  author={Gorjan Radevski and Teodora Popordanoska and Matthew B. Blaschko and Tinne Tuytelaars},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2025},
  url={https://openreview.net/forum?id=4ZAX1NT0ms}
}
```

## Contact

Reach out to either Gorjan at firstname.lastname@gmail.com or Teodora at firstname.lastname@kuleuven.be.