Spatial and Temporal Visual Integration, 2021

Rushton, Simon and Martin, Nick and Bossard, Martin (2024). Spatial and Temporal Visual Integration, 2021. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-855535

Every time we move, the image of the world at the back of the eye changes. Despite this, we perceive an unchanging world. How does the brain translate a continually changing image into a percept of a stable, stationary, rigid world? Does the brain use a map of the external environment (an "allocentric map"), built up over time, together with the position of the observer within it, to underpin the perception of stability? Does the brain continually update a map of where scene objects are relative to the observer (an "egocentric map"; e.g. there is an object straight ahead of me, so if I walk forward I should expect it to get closer)? Or does the brain create no map at all, and simply divide the image motion into the part that is likely due to movement of the observer (and can consequently be ignored) and the part that is likely due to objects moving within the scene (which become a focus of attention)? The hypothesis underpinning this research project is that no single one of these mechanisms supports perceptual stability; rather, all of them contribute, with their relative contributions depending on the task being performed by the observer. In some cases the task will require a fast estimate to support an ongoing action, which might favour one mechanism; in another task, where timing is not so critical, a slower but more accurate mechanism might be more appropriate. This collaborative project, which combines complementary expertise in Psychology, Movement Sciences, and Computing from Germany, the Netherlands, and the United Kingdom, and, importantly, brings together researchers who start from different theoretical perspectives, will test this hypothesis. We will study a diverse series of tasks that present a range of challenges to the moving observer, using innovative experimental paradigms that exploit recent technological advances such as virtual reality combined with simultaneous motion tracking. Understanding where and how different mechanisms of perceptual stability play a role not only advances scientific understanding, but also has the potential to inform industry and medicine about the circumstances in which disorientation or nausea in real or virtual environments can be minimised.

Data description (abstract)

We present four psychophysics experiments investigating spatiotemporal summation in various visual contexts. The experiments are 4AFC detection tasks in which a target is briefly presented in one of four known locations. Stimuli consist of targets with varying spatial (0 to 0.9 dva) or temporal (0 to 100 ms) properties. Staircase procedures are used to identify the luminance thresholds at which the targets are detected with 75% accuracy; lower thresholds are taken as indicative of greater summation. In the first experiment, the target stimuli consist of two probes presented with varying spatial (0 to 0.9 dva) and temporal (0 to 100 ms) separation. We find an interaction between spatial and temporal integration: as spatial separation increases, the temporal separation at which the probes are most easily detected also increases. That is, probes with moderate spatial separation are more easily detected when they are also temporally separated than when the two probes are presented simultaneously. In a second experiment, targets consist of a single probe presented for various durations (8 to 100 ms), with the aim of identifying the critical period during which complete temporal summation occurs (i.e., Bloch's critical duration). Similarly, a third experiment uses targets presented for 8 ms with varying lengths of 0.01 to 0.9 dva to identify the spatial area of complete summation (i.e., Riccò's area). A fourth experiment builds on the first, but rather than a single target appearing in one of four locations, three targets are presented and participants identify the location that did NOT contain a target. Again, we find that in conditions with spatial separation, participants had lower thresholds when the probes were also temporally separated. Experiments 1 and 4 provide robust evidence that detection thresholds can be lower for temporally separated targets than for concurrently presented targets. The results from experiments 1 and 4 do not align with Riccò's area or Bloch's critical duration; however, they can be interpreted in terms of a facilitating effect from motion detectors. Experiment 4 suggests that this effect is the product of multiple local mechanisms rather than a single global motion process.
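The record states that staircase procedures were used to find the luminance at which targets are detected with 75% accuracy in a 4AFC task, but the exact staircase rule is not specified here. The following is a minimal illustrative sketch, assuming a weighted (Kaernbach-style) up/down rule whose step-size ratio converges on 75% correct, and using a hypothetical Weibull observer in place of a real participant; all function names and parameter values are illustrative and not taken from the dataset.

```python
# Sketch of a weighted up/down staircase for a 4AFC luminance-detection task.
# With step-up = 3 x step-down, the track converges where p(correct) = 0.75
# (Kaernbach-style weighted up/down rule). Observer model is hypothetical.

import math
import random

def p_correct(luminance, threshold=0.5, slope=6.0, guess=0.25):
    """Hypothetical Weibull psychometric function for a 4AFC task
    (guess rate 0.25; lapses ignored for simplicity)."""
    return guess + (1.0 - guess) * (1.0 - math.exp(-(luminance / threshold) ** slope))

def run_staircase(step_down=0.02, n_reversals=12, start=1.0):
    """Run one staircase and return a threshold estimate from the last reversals."""
    step_up = 3.0 * step_down            # 3:1 ratio targets 75% correct
    luminance = start
    reversals = []
    last_direction = None
    while len(reversals) < n_reversals:
        correct = random.random() < p_correct(luminance)
        direction = -1 if correct else +1          # down after a correct response, up after an error
        if last_direction is not None and direction != last_direction:
            reversals.append(luminance)            # record the intensity at each reversal
        last_direction = direction
        luminance += -step_down if correct else step_up
        luminance = max(luminance, 0.0)            # luminance cannot be negative
    return sum(reversals[-8:]) / len(reversals[-8:])

if __name__ == "__main__":
    random.seed(1)
    est = run_staircase()
    print(f"Estimated 75%-correct luminance threshold: {est:.3f}")
```

In practice the same logic would be driven by real participant responses rather than the simulated observer above, with separate interleaved staircases per spatial/temporal separation condition.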

Data creators:
Rushton, Simon (Cardiff University)
Martin, Nick (Cardiff University), ORCID: https://orcid.org/0000-0001-7205-6984
Bossard, Martin
Sponsors: Economic and Social Research Council
Grant reference: ES/S015272/1
Topic classification: Psychology
Keywords: PSYCHOLOGY, PSYCHOLOGICAL EFFECTS, PSYCHOLOGICAL RESEARCH, HUMAN BEHAVIOUR
Project title: ORA (Round 5): The active observer
Alternative title: ORA Round 5: The Active Observer, Exp1-integration, 2021
Grant holders: Simon Keith Rushton
Project dates: 1 March 2019 to 31 December 2023
Date published: 05 Jan 2024 12:22
Last modified: 05 Jan 2024 12:22

Available Files

Data and documentation bundle


Website

ORA (Round 5): The active observer
