
PROCEEDINGS of the Seventh International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design

VIDEO TEST TO EVALUATE DETECTION PERFORMANCE OF DRIVERS WITH HEMIANOPIA AND QUADRANOPIA: PRELIMINARY RESULTS

Alex R. Bowers1,2, Karen Jeng3, Eli Peli1,2, Laura Werner2 & Amy Doherty1
1 Schepens Eye Research Institute, Mass Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
2 New England College of Optometry, Boston, Massachusetts, USA
3 UMDNJ-Robert Wood Johnson Medical School, Piscataway, New Jersey, USA
Email: alex [email protected]

Summary: The ability of individuals with hemianopia to compensate for their vision impairment by eye/head scanning to detect hazards in their non-seeing (blind) hemifield varies widely in both simulator and on-road tests. Conventional visual field tests do not reflect this variability, while simulator and on-road tests are time-consuming and expensive. We therefore developed a simple, 15-minute video-based pedestrian detection test suitable for implementation on a desktop computer and monitor. The test was found to be sensitive to detection deficits in both hemianopia and quadranopia, and predictive of detection performance in a driving simulator. Our preliminary findings suggest that the test provides a simple method of measuring detection ability relevant to driving which may be useful both as a screening test and as an evaluation tool for rehabilitation devices and training.

BACKGROUND AND OBJECTIVES

Hemianopia is the loss of half of the visual field on the same side in both eyes that commonly occurs due to stroke and traumatic brain injury. Twenty-two states prohibit people with hemianopia from driving because they do not meet the minimum field extent requirement (e.g., 120° in Massachusetts) (Peli, 2002). However, in other states they may be permitted to drive, and in some countries (e.g., Belgium, Netherlands, Switzerland, UK, Canada) persons with hemianopia may be licensed after taking a specialized road test (Dow, 2011; DVLA Drivers Medical Group, 2011; Yazdan-Ashoori & ten Hove, 2010).

Visual field extent is usually measured under simple visual conditions: detection of a white light on a plain uniform background while looking at a central point. In driving, however, detection of objects occurs against complex backgrounds and the driver is free to make eye and head movements. People with hemianopia may be able to compensate for the field loss by eye and head scanning; however, conventional visual field tests cannot reflect such ability. Although individuals with complete hemianopia may appear to have identical amounts of vision loss on a visual field test, their ability to use their remaining vision in more complex tasks, such as detection of objects when walking and driving, varies widely (Bowers et al., 2009; Iorizzo et al., 2011; Papageorgiou et al., 2012). For example, in a driving simulator study, detection rates for pedestrians that appeared on the side of the blind hemifield varied from as little as 6% to as high as 100%, yet all the participants had complete hemianopia, no significant cognitive impairment and no spatial neglect (Bowers et al., 2009).

Thus there is a need for a test that measures an individual's ability to use remaining vision for a task that is relevant to driving and uses more real-world conditions than conventional visual field tests, but is simple enough to be used during routine screening or as an outcome measure in multicenter clinic-based studies. One possibility would be to use a driving simulator (Bowers et al., 2009; Bronstad et al., 2011; Papageorgiou et al., 2012); however, use of a simulator is expensive and time-consuming, and requires specialized equipment and expertise. We developed a flexible, video-based detection test. Rather than using videos of real-world driving scenes, in which there is little control of when and where hazards appear, we based our test on a pedestrian detection task we previously developed for our driving simulator, which was shown to be sensitive to detection deficits in hemianopia, partial hemianopia, and central visual field loss (Bowers et al., 2009; Bronstad et al., 2013; Bronstad et al., 2011).

In this preliminary study, we evaluated whether a prototype version of the new video test would provide a robust measure of detection performance for people with hemianopia and quadranopia (loss of the same quarter of the field of vision in both eyes). We predicted that detection performance would be worse (lower detection rates and longer reaction times) for pedestrians appearing on the blind side than the seeing side, and that blind-side detection performance of hemianopes would be worse than that of quadranopes. In addition, we evaluated test-retest repeatability for the video test and examined preliminary validity by comparing detection performance on the video test to detection performance on the same task in the driving simulator. We expected that video test performance would be predictive of driving simulator performance, but that detection rates would be lower and reaction times longer for the more complex, more interactive and cognitively demanding driving simulator task.

METHODS

Participants and Procedures

Participants. Participants were recruited from the pool of visually impaired volunteers who had previously participated in studies at Schepens. Inclusion criteria were: hemianopia or quadranopia on conventional visual field testing (Goldmann perimeter); binocular visual acuity of 20/40 or better with habitual correction; no significant cognitive decline (Mini-Mental State Examination score ≥ 24); and no visual neglect (Bells Test and Schenkenberg Line Bisection Test). Data are reported for 13 participants (7 hemianopes, 6 quadranopes) with a median age of 59 years (range 19 to 83). Stroke was the main cause of the field loss (9 patients). Participants each completed one driving simulator session and two video detection test sessions. The study was approved by the institutional review board at Schepens.

Pedestrian detection task. The same pedestrian detection task was used for the video test and the driving simulator test. In brief, life-size pedestrian figures (1.8 m tall, wearing a white shirt and blue trousers; Figure 1) appeared at random intervals and walked or ran (exhibiting salient biological motion) toward the roadway, as if to cross the road (Bronstad et al., 2013).
The pedestrians were programmed such that there would have been a collision if the virtual vehicle had continued at the posted speed limit and the virtual pedestrian had entered the travel lane. However, pedestrians stopped at the edge of the travel lane. A total of 60 pedestrians were presented (30 on the right and 30 on the left), appearing with equal frequency at small (4°) and large (14°) eccentricities. Participants pressed a response button (video test) or the horn (driving simulator) as soon as they detected a pedestrian. They were allowed to make natural head and eye movements throughout the task, but were not given any instructions about specific head or eye movement scanning strategies.
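To make the collision-contingent timing concrete, the sketch below shows one way a pedestrian onset could be scheduled so that, at the posted speed limit, vehicle and pedestrian would reach the lane edge at the same moment. This is a minimal illustration only: the walking speed, lateral offset, and the function itself are our assumptions and are not taken from the study's actual scenario scripts.

```python
import math

def pedestrian_trigger(car_speed_mps, ped_speed_mps, ped_offset_m):
    """Collision-contingent pedestrian timing (illustrative values only).

    The pedestrian starts `ped_offset_m` to the side of the travel lane and
    walks toward it at `ped_speed_mps`.  For a collision to be possible if the
    car keeps the posted speed, the pedestrian must appear when the car is
    `trigger_distance_m` away, so that both reach the same point together.
    """
    time_to_lane_s = ped_offset_m / ped_speed_mps          # pedestrian travel time
    trigger_distance_m = car_speed_mps * time_to_lane_s    # car travel in that time
    # Eccentricity of the pedestrian from the driver's heading at onset.
    eccentricity_deg = math.degrees(math.atan2(ped_offset_m, trigger_distance_m))
    return trigger_distance_m, time_to_lane_s, eccentricity_deg

# Example: car at 13.4 m/s (~30 mph), pedestrian walking at 1.4 m/s from 5 m away.
print(pedestrian_trigger(13.4, 1.4, 5.0))
```

With these illustrative numbers the pedestrian would be triggered when the car is roughly 48 m away, corresponding to an onset eccentricity of about 6°; the actual 4° and 14° onsets in the study would follow from whatever offsets and speeds the scenarios used.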

Figure 1. Screen shot of the pedestrian detection task

Driving simulator test. Participants completed five drives (3 in the city and 2 on the highway), each about 10 minutes, in a FAAC PP-1000 driving simulator (with a 225° horizontal field of view, a motion seat, and controls typical of an automatic transmission car). They were instructed to follow the rules of the road and control the vehicle speed and steering (in addition to pressing the horn when a pedestrian was seen). Eye and head movements were recorded with a 6-camera remote infrared Smart Eye Pro 5 tracking system (Göteborg, Sweden).

Creating the video test. Videos were recorded directly from the central screen of the driving simulator while an investigator, familiar with the simulator and scenarios, drove a series of city and highway drives at the posted speed limit and in the center of the lane to ensure that there were no cues as to when pedestrians appeared and that they appeared at the expected eccentricities. Each recording was about 10 minutes in duration with 12 pedestrian appearances. While 10 minutes was appropriate for each simulator test drive, it was too long for the video detection test. Segments were therefore extracted from the recordings to create five (3 city and 2 highway) shorter, 3-minute videos using VideoPad Video Editor (NCH Software, Canberra, Australia, v 2.41). Stops and turns were excluded to reduce the potential for motion sickness. Each short video included 12 segments with pedestrians and 5 to 9 segments without pedestrians. The time between pedestrian appearances ranged from 7 to 30 seconds. Within each video, there were equal numbers of pedestrians on each side (right/left) and at each eccentricity (small/large).

Administering the video test. Participants sat 1.3 m from a large rear-projection screen. A custom program played each 3-minute video continuously and recorded the time at which the response button was pressed. At this distance the screen subtended the same visual angle as the central screen of the driving simulator (65° horizontal by 40° vertical). Eye and head movements were not recorded.

Data Analysis

Responses recorded during the experimental sessions were used to determine detection rates and reaction times (the difference between pedestrian appearance time and participant response time). Nonparametric statistics (Wilcoxon signed-rank and Mann-Whitney U tests) were used to examine the effects of visual field loss and pedestrian eccentricity on detection rates and reaction times. Test-retest repeatability was quantified in terms of the difference in detection performance between the two video test sessions. The coefficient of repeatability is usually given as twice the standard deviation of the differences, i.e., the 95% confidence limit (Bland & Altman, 1986). However, as the difference data were not normally distributed, we expressed the coefficient of repeatability as half of the range between the 5th and 95th percentiles. Spearman's rho was used to quantify the relationship between video test performance and driving simulator performance.
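As a rough sketch of how such an analysis could be implemented, the code below scores a session from logged pedestrian appearance times and button-press times and then applies the nonparametric statistics named above. The 3-second response window, the variable names, and the example numbers are illustrative assumptions rather than details reported in the paper, and SciPy's wilcoxon, mannwhitneyu, and spearmanr are used only as stand-ins for whatever statistics software was actually employed.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, spearmanr

# Hypothetical scoring rule: a button press within this window after a pedestrian
# appears counts as a detection of that pedestrian.
RESPONSE_WINDOW_S = 3.0

def score_session(appearance_times, press_times, window=RESPONSE_WINDOW_S):
    """Return (detection_rate, reaction_times) for one video or simulator session."""
    presses = np.sort(np.asarray(press_times, dtype=float))
    reaction_times = []
    for t in appearance_times:
        hits = presses[(presses >= t) & (presses <= t + window)]
        if hits.size:                      # earliest press inside the window
            reaction_times.append(hits[0] - t)
    return len(reaction_times) / len(appearance_times), np.array(reaction_times)

# Invented per-participant blind-side and seeing-side detection rates (proportions).
blind_rates  = np.array([0.55, 0.80, 0.95, 0.70, 1.00, 0.60, 0.85])
seeing_rates = np.array([1.00, 1.00, 1.00, 0.95, 1.00, 1.00, 1.00])

# Paired within-subject comparison (blind vs. seeing side): Wilcoxon signed-rank test.
print(wilcoxon(blind_rates, seeing_rates))

# Between-group comparison (e.g., hemianopes vs. quadranopes): Mann-Whitney U test.
print(mannwhitneyu(blind_rates[:4], blind_rates[4:], alternative="two-sided"))

# Association between video test and simulator performance: Spearman's rho.
sim_rates = np.array([0.30, 0.85, 0.90, 0.75, 1.00, 0.45, 0.80])
print(spearmanr(blind_rates, sim_rates))
```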

RESULTS

Video Test: Effect of Visual Field Loss

Detection rates were significantly lower on the blind than the seeing side (medians 77% and 100%, respectively; z = 2.5, p = 0.012; Figure 2), and reaction times were significantly longer (medians 0.93 s and 0.79 s; z = 2.8, p = 0.005; Figure 2). This analysis was for data from all participants pooled across small and large eccentricities. In addition, we examined the effects of eccentricity (small or large) and amount of field loss (hemianopia or quadranopia) on blind-side detection performance. There was a trend for hemianopes to have more impaired blind-side detection performance than the quadranopes (lower detection rates and longer reaction times; Figure 2); however, there was a wide range of performance in both groups, especially at large eccentricities. Both hemianopes and quadranopes had lower detection rates and longer reaction times at the large than the small eccentricity on the blind side (Figure 3); however, the detection rate differences were noticeably greater for the hemianopes than the quadranopes (Figure 3).

Figure 2. Median detection rates and reaction times on blind and seeing sides
Detection rates were lower and reaction times longer on the blind than the seeing side. Data are collapsed across small and large eccentricities. Open circles are outliers.

Figure 3. Blind-side detection rates and reaction times for small and large eccentricities
Detection rates were lower and reaction times longer at the large eccentricity.

Video Test-Retest Repeatability

Overall, there were no significant between-session differences in detection rates for either the blind side (median difference 0%, z = 1.05, p = 0.293; Figure 4, left) or the seeing side (constant at 100% for both sessions). The coefficient of repeatability was 10% for blind-side detection rates; only 4 participants had an absolute between-session difference of more than 5%. There was a trend for blind-side reaction times to be slightly faster at the second session (median difference 0.08 s, z = 1.77, p = 0.077; Figure 4, middle), but there was no between-session difference for seeing-side reaction times (median difference 0.03 s, z = 0.83, p = 0.409; Figure 4, right). The coefficient of repeatability was 0.62 s for blind-side reaction times and 0.28 s for seeing-side reaction times. For eight participants, blind-side reaction times at the second session were within 0.4 s of those at the first session; however, there were four notable outliers with between-session differences > 0.5 s (Figure 4, middle).

Figure 4. Differences in detection performance between the two video test sessions
The solid line is the median of the differences. The dashed lines are the 5th and 95th percentiles.
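The nonparametric coefficient of repeatability quoted above can be computed directly from the per-participant between-session differences. A minimal sketch is given below; the function name and the example values are ours, invented for illustration, and not the study's data or software.

```python
import numpy as np

def coefficient_of_repeatability(session1, session2):
    """Half the range between the 5th and 95th percentiles of the between-session
    differences: the paper's nonparametric substitute for the usual 2 x SD rule."""
    diffs = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    p5, p95 = np.percentile(diffs, [5, 95])
    return (p95 - p5) / 2.0

# Invented blind-side reaction times (seconds) for two video test sessions.
rt_session1 = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0, 1.2]
rt_session2 = [0.7, 1.0, 1.0, 0.9, 0.7, 1.1, 1.8]
print(coefficient_of_repeatability(rt_session1, rt_session2))
```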

Comparison to Driving Simulator

For the comparison of video test and driving simulator performance, detection rates and reaction times were averaged across the two video test sessions. Higher blind-side detection rates on the video test were associated with higher blind-side detection rates in the driving simulator (rho = 0.78, p = 0.002; Figure 5, left). However, there was wide between-subject variability, with some participants having better detection rates in the video test and others having better detection rates in the driving simulator. Overall, there was no significant difference in detection rates between the video and simulator tests (medians 77% and 85%, respectively; z = 0.3, p = 0.767). Similarly, longer reaction times in the video test were associated with longer reaction times in the driving simulator (rho = 0.60, p = 0.03; Figure 5, right). However, as expected, reaction times were shorter on the video test than in the driving simulator (medians 0.93 s and 2.67 s, respectively; z = 3.2, p = 0.001). Again, there was wide between-subject variability.

Figure 5. Blind-side detection performance for the video and driving simulator tests for each participant
The diagonal dashed line (y = x) is for reference purposes only.

DISCUSSION

Our video detection test was sensitive to detection deficits in both hemianopia and quadranopia. Detection rates were lower and reaction times longer on the blind than the seeing side, with the deficits being greater at large than small eccentricities. In agreement with prior studies using more complex simulated driving tasks (Bowers et al., 2009; Iorizzo et al., 2011; Papageorgiou et al., 2012), there was a wide range of performance within both groups. This suggests that, even though the video test involved only watching videos without the higher cognitive load or interactivity of a driving simulator test, it was still sensitive enough to capture between-subject differences in the ability to compensate for hemianopic and quadranopic visual field loss, a variability that is not captured by conventional perimetry.

In general, detection rate repeatability was good. However, four participants did demonstrate relatively large (> 0.5 s) between-session differences in blind-side reaction times (Figure 4, middle). One had slower reaction times at the second session, suggesting fatigue effects, while three had faster reactions, suggesting that they might have been using a more efficient scanning strategy at the second session. As this was a pilot study, eye movements were not recorded during the video test sessions and we could not formally test this hypothesis.

Participants with better blind-side detection rates and faster reaction times on the video test tended to have better blind-side detection rates and faster reaction times in the driving simulator, providing preliminary evidence in support of the validity of the video test. As expected, reaction times were significantly faster for the video test than the driving simulator test, as participants did not have to steer the vehicle or perform any other driving tasks. Although median detection rates were not significantly different for the two tests, there were two participants with low driving simulator detection rates (about 30%) who had much higher detection rates (70-80%) in the video test (Figure 5, left). These participants demonstrated little blind-side scanning in the driving simulator. It is likely that they scanned more frequently to the blind side during the video test, as it was much less demanding than driving in the simulator.

In summary, these preliminary data suggest our video test provides a simple method of measuring detection ability relevant to driving in people with hemianopia and quadranopia. It takes only 15 minutes to administer and, in the future, could be implemented on a desktop computer and large TV monitor. Furthermore, it did not cause any participant discomfort (unlike driving simulator tests, which may be uncomfortable for up to 25% of participants). We are continuing to develop and refine the test, including adding a secondary task that will simulate the additional cognitive demands and divided attention conditions of real-world driving.

ACKNOWLEDGEMENTS

Funded in part by the TATRC Military Vision Research Program, Proposal 11066002 (subtask 10), and NIH grants EY018680, T35EY007149, EY12890, and P30EY003790. The authors thank Alex Hwang, Henry Apfelbaum, Jih-Ping Chern, and Robert Goldstein for technical assistance with programming, video editing, and development of analysis software.

REFERENCES

Bland, J. M., & Altman, D. G. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, 1(8476), 307-310.

Bowers, A. R., Mandel, A. J., Goldstein, R. B., & Peli, E. (2009). Driving with hemianopia: 1. Detection performance in a simulator. Investigative Ophthalmology and Visual Science, 50, 5137-5147.

Bronstad, P. M., Bowers, A. R., Albu, A., Goldstein, R. B., & Peli, E. (2013). Driving with central visual field loss I: Impact of central scotoma on response to hazards. JAMA Ophthalmology, published online January 17, 2013.

Bronstad, P. M., Bowers, A. R., Albu, A., Goldstein, R. B., & Peli, E. (2011). Hazard detection by drivers with paracentral homonymous field loss: A small case series. Journal of Clinical & Experimental Ophthalmology, S5:001. doi: 10.4172/2155-9570.S5-001.

Dow, J. (2011). Visual field defects may not affect safe driving. Traffic Injury Prevention, 12(5), 483-490.

DVLA Drivers Medical Group. (2011). For medical practitioners: At a glance guide to the current medical standards of fitness to drive. Swansea, UK: Driver Vehicle Licensing Authority.

Iorizzo, D. B., Riley, M. E., Hayhoe, M., & Huxlin, K. R. (2011). Differential impact of partial cortical blindness on gaze strategies when sitting and walking - An immersive virtual reality study. Vision Research, 51(10), 1173-1184.

Papageorgiou, E., Hardiess, G., Ackermann, H., Wiethoelter, H., Dietz, K., Mallot, H. A., & Schiefer, U. (2012). Collision avoidance in persons with homonymous visual field defects under virtual reality conditions. Vision Research, 52(1), 20-30.

Peli, E. (2002). Low vision driving in the USA: who, where, when and why. CE Optometry, 5(2), 54-58.

Yazdan-Ashoori, P., & ten Hove, M. (2010). Vision and driving: Canada. Journal of Neuro-Ophthalmology, 30(2), 177-185.
