Can relevance of images be inferred from eye movements?
Searching for images in a large collection is a difficult task for automated algorithms. Many current techniques rely on items that have been manually 'tagged' with descriptors. This situation is not ideal, as it is difficult both to formulate the initial query and to navigate the large number of hits returned. In order to present relevant images to the user, many systems rely on an explicit feedback mechanism: a machine learning algorithm can then present a new set of relevant images to the user, thus increasing hit rates. In this work we use eye movements to assist a user performing such a task, and ask a basic question: "Is it possible to replace or complement scarce explicit feedback with implicit feedback inferred from various sensors not specifically designed for the task?" We give initial results on a range of tasks and experiments that extend those presented at the Multimedia Information Retrieval conference (MIR'08). In reasonably controlled setups, fairly simple eye-movement features, in conjunction with machine learning techniques, are capable of judging the relevance of an image from eye movements alone, without using any explicit feedback, and can therefore potentially assist the user in the task.
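The approach described above, training a classifier to predict an image's relevance from implicit eye-movement features, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature set (fixation count, total fixation time, mean fixation duration), the synthetic data, and the choice of plain logistic regression are all assumptions made for the example.

```python
# Hypothetical sketch: predicting image relevance from simple eye-movement
# features with logistic regression. Features and data are illustrative
# assumptions, not the paper's actual feature set or experimental data.
import math
import random

def sigmoid(z):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """True if the viewing pattern is classified as 'relevant'."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Synthetic per-image features: [num_fixations, total_fixation_time_s,
# mean_fixation_duration_s]. The assumption encoded here is that relevant
# images attract more and longer fixations than irrelevant ones.
random.seed(0)
relevant = [[random.gauss(8, 1), random.gauss(2.5, 0.3), random.gauss(0.30, 0.03)]
            for _ in range(30)]
irrelevant = [[random.gauss(3, 1), random.gauss(0.8, 0.3), random.gauss(0.20, 0.03)]
              for _ in range(30)]
X = relevant + irrelevant
y = [1] * 30 + [0] * 30

w, b = train_logreg(X, y)
# A long, fixation-heavy viewing pattern should score as relevant,
# a brief glance as irrelevant.
print(predict(w, b, [9, 2.6, 0.31]))   # expected: True
print(predict(w, b, [2, 0.6, 0.18]))   # expected: False
```

In a real system the labels for training would come from the scarce explicit feedback, and the trained model would then rank unlabelled images from gaze data alone.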