Study Description:

In this study we asked normally sighted young adults to locate objects in virtual reality under simulated visual impairment in the periphery of their visual field. Different magnitudes of blur were applied to each eye to assess the effect of an asymmetrical impairment on visual search. The main aim was to test whether participants' search times are determined solely by vision in the better eye, or whether impairments to the worse eye also have a measurable impact on everyday visually guided actions. We also explored whether people compensate for their vision loss by making more eye and head movements.

Details on data collection and design:

Sample: We recruited 13 young adult participants with normal or corrected-to-normal vision and no known neural abnormalities via the university's online study registration platform. Participants signed up for the study themselves and were mainly university students, so this was an opportunity sample.

Design and procedure: Participants wore a VR headset with a built-in eye tracker, calibrated to each participant's ocular properties. They were "spawned" at a particular location in a room of a virtual house built in the Unity game engine and asked to find a mobile phone located somewhere in that room. Once they had located the phone they reported this to the experimenter, who asked them to look at the phone before the trial was terminated. We recorded the participant's gaze direction to verify that the phone had indeed been located correctly. Custom software blurred the periphery of the images of the room rendered on the VR display at 5 levels, ranging from no blur to maximum blur. Blur was manipulated separately for the left and right eye (giving a 5 x 5 design), and we measured the effect of this asymmetry on visual search time. Each participant completed 250 phone-search trials (5 blur levels for the left eye x 5 blur levels for the right eye = 25 conditions, with 10 trials each); a sketch of this trial structure is given below.

Data collection and processing: The experiment was presented, and data were collected, using a combination of Unity and Tobii eye-tracking software in C++. Individual measures of interest per experimental trial (250 in total) have been exported and stored as .csv files.
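As an illustration of the design described above, the following minimal sketch (Python; not part of the original study materials) builds the 5 x 5 grid of left/right blur levels and repeats each combination 10 times to give the 250 trials per participant. The exact spacing of the intermediate blur levels and the randomisation of trial order are assumptions made here for illustration only.

    import itertools
    import random

    # Five blur levels per eye; 0 = no blur, 1 = maximum blur.
    # The spacing of the intermediate levels is assumed for illustration.
    blur_levels = [0.0, 0.25, 0.5, 0.75, 1.0]
    trials_per_condition = 10

    # 5 x 5 = 25 left/right blur combinations, 10 repeats each = 250 trials.
    conditions = list(itertools.product(blur_levels, blur_levels))
    trial_list = conditions * trials_per_condition
    random.shuffle(trial_list)  # assumed randomised order; not stated in the description

    print(len(trial_list))   # 250
    print(trial_list[0])     # e.g. (0.5, 0.0) -> (left-eye blur, right-eye blur)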
Description of .csv individual participant data file content:

Version num = study version
Playername = initials_condition
playerDOB = participant's date of birth
trialN = trial number
leftEyeVIindex = left-eye visual impairment index (amount of blur; 0 = none, 1 = maximum)
rightEyeVIindex = right-eye visual impairment index (amount of blur; 0 = none, 1 = maximum)
AsymmetryPercent = difference between the two impairment indices (leftEyeVIindex - rightEyeVIindex)
trialTimer = trial duration in seconds
playerLocation_x = x-coordinate of the location where the observer was spawned in the virtual space, in cm
playerLocation_y = y-coordinate of the location where the observer was spawned in the virtual space, in cm
playerLocation_z = z-coordinate of the location where the observer was spawned in the virtual space, in cm
targPosition_x = target location x-coordinate, in Unity metres
targPosition_y = target location y-coordinate, in Unity metres
targPosition_z = target location z-coordinate, in Unity metres
angleErrorAtStart_headPose_deg = angle (deg) between the head-orientation direction and the line from the head to the phone, at trial start
angleErrorAtStart_eyeGazeLeft_deg = angle (deg) between the left-eye gaze direction and the line from the head to the phone, at trial start
angleErrorAtStart_eyeGazeRight_deg = angle (deg) between the right-eye gaze direction and the line from the head to the phone, at trial start
angleErrorAtEnd_headPose_deg = angle (deg) between the head-orientation direction and the line from the head to the phone, at trial end
angleErrorAtEnd_eyeGazeLeft_deg = angle (deg) between the left-eye gaze direction and the line from the head to the phone, at trial end
angleErrorAtEnd_eyeGazeRight_deg = angle (deg) between the right-eye gaze direction and the line from the head to the phone, at trial end
wasAborted = whether the trial was aborted; trials were aborted if the participant failed to respond within 45 seconds, or if the maximum impairment was presented to both eyes in the uniform condition (piloting showed that all participants were at floor in that case)
fieldLossType = type of simulated field loss: uniform or periphery
roomName = name of the room the participant was in
renderArea = the area of the 3D space included when rendering the 3D scene onto the 2D VR display
locIdx = index of the spawning location (there were 14 rooms, with 3 spawning locations per room)
phoneNum = index of the target location (there were 20 target locations per room)

** Note: we also collected eye-gaze and head-movement data during each trial for exploratory analyses. These data are not provided here but are available on request.
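As an example of how these files might be used, here is a minimal sketch (Python/pandas; not part of the original materials) that loads one participant's .csv file, drops aborted trials, and summarises median search time as a function of better-eye and worse-eye blur, which bears on the main question of whether the worse eye matters beyond the better eye. The file name and the numeric coding of wasAborted are assumptions; the column names are taken from the dictionary above.

    import pandas as pd

    # Hypothetical file name; the naming scheme of the exported .csv files is not specified above.
    df = pd.read_csv("participant_01.csv")

    # Keep only completed trials (assuming wasAborted == 0 means the trial ran to completion).
    df = df[df["wasAborted"] == 0]

    # Better-eye blur = the smaller of the two impairment indices; worse-eye blur = the larger.
    df["betterEyeBlur"] = df[["leftEyeVIindex", "rightEyeVIindex"]].min(axis=1)
    df["worseEyeBlur"] = df[["leftEyeVIindex", "rightEyeVIindex"]].max(axis=1)

    # Median search time (trialTimer, seconds) per better-eye / worse-eye blur combination.
    # If search time depends only on the better eye, each row of this table should be
    # roughly flat across worse-eye blur levels.
    summary = (
        df.groupby(["betterEyeBlur", "worseEyeBlur"])["trialTimer"]
          .median()
          .unstack("worseEyeBlur")
    )
    print(summary)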