The study provides a benchmark for understanding the neural mechanisms of visual event understanding, bridging the gap between static image perception and long-form movie analysis.
To help you find more specific details, are you looking for the technical specifications of the video clips (like frame rate or resolution), or the fMRI processing pipeline used in the paper?
Human-written sentence descriptions of the videos correlate more strongly with brain activity than simple labels like "object" or "action".
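A common way to quantify this kind of comparison is representational similarity analysis (RSA): build a clip-by-clip dissimilarity matrix from the brain responses and from embeddings of each annotation type, then rank-correlate the matrices. The sketch below uses only synthetic NumPy arrays (the variable names, shapes, and the fact that the sentence embeddings share structure with the "brain" data are all illustrative assumptions, not the paper's actual pipeline or data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 clips x 100 voxels of "brain" responses,
# and 64-dim "annotation embeddings" for the same 50 clips.
n_clips, n_voxels, n_dims = 50, 100, 64
brain = rng.standard_normal((n_clips, n_voxels))
# Sentence embeddings are built to share structure with the brain data;
# single-label embeddings are independent noise (illustrative only).
sentence_emb = brain @ rng.standard_normal((n_voxels, n_dims)) * 0.5 \
    + rng.standard_normal((n_clips, n_dims))
label_emb = rng.standard_normal((n_clips, n_dims))

def rdm(x):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(x)

def rsa(a, b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(len(a), k=1)
    ranks_a = np.argsort(np.argsort(a[iu]))
    ranks_b = np.argsort(np.argsort(b[iu]))
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

brain_rdm = rdm(brain)
print("sentence vs brain:", rsa(rdm(sentence_emb), brain_rdm))
print("label    vs brain:", rsa(rdm(label_emb), brain_rdm))
```

With these synthetic inputs the sentence-embedding RDM correlates far more strongly with the brain RDM than the label RDM does, mirroring the qualitative pattern the study reports.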
Video-evoked responses are reliably mapped across occipital, temporal, and parietal cortices.
The study identifies specific brain regions in the parietal and high-level visual cortex that correlate with how memorable a video clip is.

🎥 Related Resources
Read the full paper on Nature Communications.
Data and pre-trained models (like the TSM ResNet50 used in the study) are available on GitHub.
The dataset contains 1,102 three-second naturalistic videos sampled from the Moments in Time (MiT) and Memento10k datasets.
