Abstract
We move our eyes around the visual world in order to fixate objects or regions of interest. According to one theory of integration across saccadic eye movements, prior to the saccade, critical locating features of the saccade target are encoded and then retained across the saccade in order to facilitate locating the target upon the next fixation (McConkie & Currie, 1996). The critical locating features would be maximally informative if a) multiple features are retained, thereby allowing verification; and b) features are retained that differentiate the actual saccade target from its neighboring items, including contextual cues. The current paper reports two experiments that test these ideas. Specifically, Experiment 1 uses a transsaccadic version of the Luck and Vogel (1997) paradigm to show that multiple features can be encoded across an eye movement without cost. Experiment 2 uses a transsaccadic version of the Jiang, Olson, and Chun (2000) contextual cueing paradigm to show that objects are encoded in relation to one another. Together, these findings suggest that different types of features can serve as critical locating features, with such features prioritized by the allocation of attention to the saccade target prior to the saccade.
