gorjanradevski committed · Commit fd4d94a · verified · 1 Parent(s): e9737ad

Update README.md

Files changed (1):
  1. README.md +157 -4

README.md CHANGED
@@ -64,7 +64,159 @@ Answer only with the letter corresponding to the choice."""
 # Load the video and perform inference with any model that can process audio and video input
 # or, if you want to visually see the video and the prompt that could be provided to the model:
 # print(prompt)
-# display(Video(sample["video_with_overlayed_audio_path"], embed=True))
 ```
 
 ## Dataset Details
@@ -83,6 +235,7 @@ DAVE (Diagnostic Audio-Visual Evaluation) is a benchmark dataset designed to sys
 
 - **Repository:** https://github.com/gorjanradevski/dave
 - **Paper:** https://arxiv.org/abs/2503.09321
 
 ## Uses
 
@@ -134,9 +287,9 @@ Each example contains the following fields:
 * **temporal_ordering**: Temporal ordering of events task
   - `choices`: List of 4 action descriptions
   - `ground_truth`: List of letter strings representing correct order (e.g., `['(D)', '(A)', '(B)', '(C)']`)
-* **action_recognition**: Multi-label action recognition task
   - `choices`: List of 4 action descriptions
-  - `ground_truth`: List of integer indices for correct actions (e.g., `[3]` or `[0, 2, 3]`)
 * **audio_classification**: Audio classification task
   - `choices`: List of 4 audio class labels
   - `ground_truth`: Integer index of the correct audio class (0-3)
@@ -169,6 +322,6 @@ url={https://openreview.net/forum?id=4ZAX1NT0ms}
 }
 ```
 
-## Dataset Card Contact
 
 Reach out to either Gorjan at firstname.lastname@gmail.com or Teodora at: firstname.lastname@kuleuven.be
 # Load the video and perform inference with any model that can process audio and video input
 # or, if you want to visually see the video and the prompt that could be provided to the model:
 # print(prompt)
+# display(Video(sample["video_with_overlayed_audio_path"], embed=True))
+```
+
+### Working with All Evaluation Tasks
+
+DAVE provides 7 different evaluation tasks to diagnose model capabilities across different modalities. Here's how to access all tasks:
+
+```python
+from datasets import load_dataset
+import random
+
+epic_dataset = load_dataset("gorjanradevski/dave", split="epic", keep_in_memory=True, trust_remote_code=True)
+sample = random.choice(epic_dataset)
+
+# Common information across all tasks
+sound_effect = sample["audio_class"]
+video_with_audio = sample["video_with_overlayed_audio_path"]
+silent_video = sample["silent_video_path"]
+audio_path = sample["overlayed_audio_path"]
+choice_metadata = sample["choice_metadata"]
+
+# ====== Task 1: Audio-Visual Alignment ======
+# Find which action matches when the sound is heard
+task = "audio_visual_alignment"
+options = choice_metadata[task]["choices"]
+ground_truth_idx = choice_metadata[task]["ground_truth"]
+
+prompt = f"""What is the person in the video doing when {sound_effect} is heard?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+(E) {options[4]}
+
+Answer only with the letter."""
+
+# ====== Task 2: Visual Only ======
+# Identify the action during the overlayed event using only visual information
+task = "visual_only"
+options = choice_metadata[task]["choices"]
+ground_truth_idx = choice_metadata[task]["ground_truth"]
+
+prompt = f"""What action is happening during the highlighted segment?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+(E) {options[4]}
+
+Answer only with the letter."""
+# Use silent_video for this task
+
+# ====== Task 3: Audio Only ======
+# Identify the action using only audio information
+task = "audio_only"
+options = choice_metadata[task]["choices"]
+ground_truth_idx = choice_metadata[task]["ground_truth"]
+
+prompt = f"""What action is associated with the sound you hear?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+(E) {options[4]}
+
+Answer only with the letter."""
+# Use audio_path for this task
+
+# ====== Task 4: Text Only ======
+# Identify the action using only textual descriptions
+task = "text_only"
+options = choice_metadata[task]["choices"]
+ground_truth_idx = choice_metadata[task]["ground_truth"]
+
+# Get all event descriptions for context
+event_descriptions = [event["narration"] for event in sample["events"]]
+
+prompt = f"""Given these event descriptions: {', '.join(event_descriptions)}
+Which action is most likely associated with the sound '{sound_effect}'?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+(E) {options[4]}
+
+Answer only with the letter."""
+
+# ====== Task 5: Temporal Ordering ======
+# Order the events chronologically
+task = "temporal_ordering"
+options = choice_metadata[task]["choices"]
+ground_truth_order = choice_metadata[task]["ground_truth"]  # e.g., ['(D)', '(A)', '(B)', '(C)']
+
+prompt = f"""Order these events chronologically:
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+
+Provide the correct temporal order as a list like ['(A)', '(B)', '(C)', '(D)']."""
+
+# ====== Task 6: Action Recognition ======
+# Identify which action occurs during the overlayed segment
+task = "action_recognition"
+options = choice_metadata[task]["choices"]
+ground_truth_indices = choice_metadata[task]["ground_truth"]  # Single-element list, e.g., [3]
+ground_truth_idx = ground_truth_indices[0]
+
+prompt = f"""Which action occurs during the overlayed audio segment?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+
+Answer only with the letter."""
+
+# ====== Task 7: Audio Classification ======
+# Identify the overlayed sound
+task = "audio_classification"
+options = choice_metadata[task]["choices"]
+ground_truth_idx = choice_metadata[task]["ground_truth"]
+
+prompt = f"""What sound is overlayed on the video?
+
+(A) {options[0]}
+(B) {options[1]}
+(C) {options[2]}
+(D) {options[3]}
+
+Answer only with the letter."""
+
+# ====== Iterate through all tasks programmatically ======
+for task_name, task_data in choice_metadata.items():
+    print(f"\nTask: {task_name}")
+    print(f"Choices: {task_data['choices']}")
+    print(f"Ground Truth: {task_data['ground_truth']}")
+
+    # Handle different ground truth formats
+    if task_name == "temporal_ordering":
+        print(f"Correct order: {task_data['ground_truth']}")
+    elif task_name == "action_recognition":
+        answer_idx = task_data['ground_truth'][0]
+        print(f"Answer: {task_data['choices'][answer_idx]}")
+    else:
+        # Single choice tasks (audio_visual_alignment, visual_only, audio_only, text_only, audio_classification)
+        answer = task_data['choices'][task_data['ground_truth']]
+        print(f"Answer: {answer}")
 ```
 
 ## Dataset Details
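
The prompts added above all end with "Answer only with the letter." Model replies are rarely that clean, so here is a minimal sketch of mapping a free-form reply back to a 0-based choice index. The helper name and regex are illustrative, not part of the dataset or its repository:

```python
import re

def parse_letter_answer(reply, num_choices=5):
    """Illustrative helper: map a reply like '(C) because...' or
    'Answer: B' to a 0-based choice index, or None if no letter is found."""
    letters = "ABCDE"[:num_choices]
    # Prefer a parenthesized letter; otherwise accept a standalone capital letter
    match = re.search(rf"\(([{letters}])\)|\b([{letters}])\b", reply)
    if match is None:
        return None
    letter = match.group(1) or match.group(2)
    return letters.index(letter)

print(parse_letter_answer("(C) because the person is chopping"))  # 2
print(parse_letter_answer("Answer: B"))  # 1
```

The returned index can be compared directly against the `ground_truth` integer stored in `choice_metadata` for the single-choice tasks.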
 
 - **Repository:** https://github.com/gorjanradevski/dave
 - **Paper:** https://arxiv.org/abs/2503.09321
+- **Paper (PDF):** https://openreview.net/pdf/c7b7b8187b0925a4100a65f483ef3dff0215ab34.pdf
 
 ## Uses
 
 
 * **temporal_ordering**: Temporal ordering of events task
   - `choices`: List of 4 action descriptions
   - `ground_truth`: List of letter strings representing correct order (e.g., `['(D)', '(A)', '(B)', '(C)']`)
+* **action_recognition**: Single-label action recognition task
   - `choices`: List of 4 action descriptions
+  - `ground_truth`: Single-element list containing the index of the correct action (e.g., `[3]`)
 * **audio_classification**: Audio classification task
   - `choices`: List of 4 audio class labels
   - `ground_truth`: Integer index of the correct audio class (0-3)
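
The field descriptions above document three `ground_truth` shapes: a list of letter strings (temporal ordering), a single-element list of indices (action recognition), and a bare integer index (the remaining tasks). A small sketch, with an illustrative helper name, of normalizing all three to answer letters:

```python
def ground_truth_letters(task_name, ground_truth):
    """Illustrative helper: normalize the three ground_truth formats
    described in the README into a list of letter strings like '(A)'."""
    letters = ["(A)", "(B)", "(C)", "(D)", "(E)"]
    if task_name == "temporal_ordering":
        # Already a list of letter strings in the correct order
        return list(ground_truth)
    if task_name == "action_recognition":
        # Single-element list of integer indices, e.g. [3]
        return [letters[i] for i in ground_truth]
    # All remaining tasks store a single integer index
    return [letters[ground_truth]]

print(ground_truth_letters("action_recognition", [3]))  # ['(D)']
print(ground_truth_letters("audio_classification", 2))  # ['(C)']
```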
 
 }
 ```
 
+## Contact
 
 Reach out to either Gorjan at firstname.lastname@gmail.com or Teodora at: firstname.lastname@kuleuven.be