C7
Prof. Dr. Võ
The genesis of object-context associations in hierarchically structured real-world environments
Object categorization cannot be understood without understanding an object’s context. Project C7 therefore aims to gain deeper insight into the genesis of hierarchical object-context associations in real-world scenes, combining eye tracking in VR, EEG, and computational modeling. Human observers will learn artificial grammars in VR environments while we record their eye movements and EEG. Using recent advances in computer vision, we will probe the genesis of scene grammar as deep neural networks (DNNs) learn to synthesize increasingly realistic images of real-world scene categories. Finally, we will investigate the development of object-context associations across the lifespan and beyond the perceptual domain.
New project-related publications
Beitner, J., Helbing, J., Draschkow, D., David, E. J., & Võ, M. L.-H. (2023). Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes. Journal of Eye Movement Research, 15(3).
David, E., Gutiérrez, J., Võ, M. L.-H., Coutrot, A., Perreira Da Silva, M., & Le Callet, P. (2024). The Salient360! toolbox: Handling gaze data in 3D made easy. Computers & Graphics, 103890.
Draschkow, D., David, E. J., & Võ, M. L.-H. (2023). Using XR (extended reality) for behavioral, clinical, and learning sciences requires updates in infrastructure and funding. Policy Insights from the Behavioral and Brain Sciences, 10(2), 317-323.
Gregorová, K., Turini, J., Gagl, B., & Võ, M. L.-H. (2023). Access to meaning from visual input: Object and word frequency effects in categorization behavior. Journal of Experimental Psychology: General.
Helbing, J., Draschkow*, D., & Võ*, M. L.-H. (2022). Auxiliary scene context information provided by anchor objects guides attention and locomotion in natural search behavior. Psychological Science, 33(9), 1463–1476.
Kallmayer, A., Võ, M. L.-H., & Draschkow, D. (in press). Viewpoint-dependence and scene context effects generalize to depth rotated 3D objects. Journal of Vision.
Klever, L., Islam, J., Võ, M. L.-H., & Billino, J. (2023). Aging attenuates the memory advantage for unexpected objects in real-world scenes. Heliyon, 9.
Krugliak, A., Draschkow, D., Võ, M. L.-H., & Clarke, A. (2023). Semantic object processing is modulated by prior scene context.
Turini, J., & Võ, M. L.-H. (2022). Hierarchical organization of objects in scenes is reflected in mental representations of objects. Scientific Reports, 12, 20068.
Former project-related publications
Beitner, J., Helbing, J., Draschkow, D., & Võ, M. L.-H. (2021). Get your guidance going: Investigating the activation of spatial priors for efficient search in virtual reality. Brain Sciences, 11, 44, 1-17.
Boettcher, S. E. P., Draschkow, D., Dienhart, E., & Võ, M. L.-H. (2018). Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. Journal of Vision, 18(13), 11.
Cornelissen, T. H. W., & Võ, M. L.-H. (2016). Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior. Attention, Perception & Psychophysics, 79(1), 154-168.
David, E., Beitner, J., & Võ, M. L.-H. (2020). Effects of transient loss of vision on head and eye movements during visual search in a virtual environment. Brain Sciences, 10(11), 841.
David, E., Beitner, J., & Võ, M. L.-H. (2021). The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. Journal of Vision, 21(7), 3.
Draschkow, D., & Võ, M. L.-H. (2016). Of “what” and “where” in a natural search task: Active object handling supports object location memory beyond the object’s identity. Attention, Perception & Psychophysics, 78, 1574-1584.
Draschkow, D., Wolfe, J. M., & Võ, M. L.-H. (2014). Seek and you shall remember: Scene semantics interact with visual search to build better memories. Journal of Vision, 14(8), 10, 1–18.
Gregorová*, K., Turini*, J., Gagl, B., & Võ, M. L.-H. (2021). Access to meaning from visual input: Object and word frequency effects in categorization behavior. PsyArXiv. [*equal contribution]
Helbing, J., Draschkow, D., & Võ, M. L.-H. (2020). Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition, 196, 104147.
Josephs, E. L., Draschkow, D., Wolfe, J. M., & Võ, M. L.-H. (2016). Gist in time: Scene semantics and structure enhance recall of searched objects. Acta Psychologica, 169, 100–108.
Lauer, T., & Võ, M. L.-H. (2021). The ingredients of scenes that affect object search and perception. In B. Ionescu, W. A. Bainbridge, & N. Murray (Eds.), Human perception of visual information: Psychological and computational perspectives. Springer, in press. (Available at: https://bit.ly/sfb135)
Lauer, T., Cornelissen, T. H., Draschkow, D., Willenbockel, V., & Võ, M. L.-H. (2018). The role of scene summary statistics in object recognition. Scientific Reports, 8(1), 1-12.
Lauer, T., Schmidt, F., & Võ, M. L.-H. (2021). The role of contextual materials in object recognition. Scientific Reports, 11(1), 1-12.
Lauer, T., Willenbockel, V., Maffongelli, L., & Võ, M. L.-H. (2020). The influence of scene and object orientation on the scene consistency effect. Behavioural Brain Research, 112812.
Öhlschläger, S., & Võ, M. L.-H. (2016). SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes. Behavior Research Methods.
Öhlschläger, S., & Võ, M. L.-H. (2020). Development of scene knowledge: Evidence from explicit and implicit scene knowledge measures. Journal of Experimental Child Psychology, 194, 104782.
Võ, M. L.-H., & Wolfe, J. M. (2015). The role of memory for visual search in scenes. Annals of the New York Academy of Sciences, 1339, 72–81.
Võ, M. L.-H. (2021). The meaning and structure of scenes. Vision Research, 181, 10-20.
Võ, M. L.-H., Aizenman, A. M., & Wolfe, J. M. (2016). You think you know where you looked? You better look again. Journal of Experimental Psychology: Human Perception and Performance, 42(10), 1477-1481.
Võ, M. L.-H., Boettcher, S. E., & Draschkow, D. (2019). Reading scenes: How scene grammar guides attention and aids perception in real-world environments. Current Opinion in Psychology, 29, 205-210.
Wiesmann, S. L., Caplette, L., Willenbockel, V., Gosselin, F., & Võ, M. L.-H. (2021). Flexible time course of spatial frequency use during scene categorization. Scientific Reports, 11(1), 1-13.