royhessels

The new dual eye-tracking setup

Over the past years, we have used a dual eye-tracking setup in our lab, self-built by Tim Cornelissen, to investigate eye movements during dyadic interaction. Through a recent grant, we have had an updated version professionally constructed. The setup is back in a nice black metal look:

My colleague Gijs Holleman with the new dual eye-tracking setup.

Here is a short example of the videos and eye-tracking data recorded using this setup:

In order to map eye-tracking data recorded on one side of the dual eye-tracking setup onto the video recorded on the other side, we use an automatic Area-of-Interest (AOI) construction method based on Voronoi tessellation and OpenFace facial landmark detection. It is freely available on the Open Science Framework.
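The core idea of Voronoi-based AOIs is that each facial landmark seeds a cell, and a gaze sample belongs to the AOI of the nearest seed. A minimal sketch in Python of that assignment step (the seed names and coordinates below are made-up illustrative values, not actual OpenFace landmark output):

```python
import math

def assign_aoi(gaze, seeds):
    """Assign a gaze sample (x, y) to the Voronoi cell of the nearest seed.

    seeds: dict mapping AOI name -> (x, y) seed point (e.g. a facial landmark).
    Returns the AOI name whose seed is closest to the gaze sample.
    """
    return min(seeds, key=lambda name: math.dist(gaze, seeds[name]))

# Hypothetical seed points for a face in pixel coordinates
seeds = {
    "left eye": (120.0, 90.0),
    "right eye": (180.0, 90.0),
    "nose": (150.0, 130.0),
    "mouth": (150.0, 170.0),
}

for sample in [(125.0, 95.0), (155.0, 165.0)]:
    print(sample, "->", assign_aoi(sample, seeds))
```

Because a Voronoi cell is by definition the set of points closer to its seed than to any other seed, no explicit cell polygons are needed for the lookup; a nearest-neighbour search over the landmarks is equivalent.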

The method is validated in:
Hessels, R. S., Benjamins, J. S., Cornelissen, T. H. W., & Hooge, I. T. C. (2018). A Validation of Automatically-Generated Areas-of-Interest in Videos of a Face for Eye-Tracking Research. Frontiers in Psychology, 9(1367), 1–8. http://doi.org/10.3389/fpsyg.2018.01367



Analysing low-quality eye-tracking data

I’ve been involved in many eye-tracking studies with infants and young children as the participant group. While eye tracking can provide valuable insights into (cognitive) development, eye-tracking data obtained from infants and children are generally of lower quality than eye-tracking data from adults. This is in part because infants are difficult to keep from moving. I’ve been involved in two eye-tracker tests in which we compared eye trackers on their robustness to movement (view the first and the second here). Moreover, I’ve developed a fixation-classification algorithm that is specifically built for eye-tracking data of low quality. The software is freely available from GitHub.