Reading images is a common task: radiologists look at X-rays, airport screeners scan suitcases, and astronomers inspect images from telescopes. In many of these visual search tasks, the outcome is important. We don’t want the airport screener to miss a weapon, or the radiologist to miss a lesion. In a paper we presented at the recent Eye Tracking Research and Applications Symposium (ETRA 2010), we looked into how information about where people have looked can be used to guide them to parts of images they have not yet examined.
In the paper we describe a two-phase visual inspection method: in the first phase, the inspector views the image freely, and in the second phase the inspector is given feedback to guide attention to previously-skipped portions of the image. During the first phase, gaze information is collected using an eye tracker, and in the second phase parts of the image that the user had looked at are masked or blocked.
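The masking idea in the second phase can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the function names (`coverage_mask`, `apply_mask`), the circular footprint around each fixation, and the `FOVEA_RADIUS` value are all assumptions made for the sake of the example.

```python
import numpy as np

# Assumed footprint: pixels within this radius of a fixation count as "viewed".
FOVEA_RADIUS = 2

def coverage_mask(shape, fixations, radius=FOVEA_RADIUS):
    """Boolean mask that is True where the image was already viewed.

    `fixations` is a list of (x, y) fixation coordinates from phase one,
    as an eye tracker might report them (hypothetical format).
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for fx, fy in fixations:
        mask |= (xx - fx) ** 2 + (yy - fy) ** 2 <= radius ** 2
    return mask

def apply_mask(image, fixations, fill=0):
    """Block out viewed regions so only unexamined areas remain visible."""
    masked = image.copy()
    masked[coverage_mask(image.shape, fixations)] = fill
    return masked

# Toy 6x6 "image" and two phase-one fixations.
image = np.arange(36, dtype=float).reshape(6, 6)
phase1_fixations = [(1, 1), (4, 4)]
second_phase_view = apply_mask(image, phase1_fixations)
```

In a real system the mask would of course be rendered over the displayed image, and the footprint around each fixation would be calibrated to the viewer's useful field of view rather than a fixed pixel radius.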
In our study, we found that blocking out the parts of the image where people had looked during the first phase reduced subjective mental workload. In other words, people experienced the visual search task as less demanding when the system indicated where they had already looked. This is an important finding, since image analysts may view images for long periods, and the chance of errors increases with fatigue. If the mental effort can be reduced, it should take longer for the analyst to tire, and their performance should stay more stable.
We also found an increase in the number of targets identified. In particular, the number of targets that a person had not viewed during the first phase but identified during the second phase was significantly higher when the already-viewed regions were masked during the second phase.
The paper also got a nice writeup in the New Scientist.