Abstract

When making a turn at a familiar intersection, we know what items and landmarks will come into view. These perceptual expectations, or predictions, come from our knowledge of the context; however, it is unclear how memory and perceptual systems interact to support the prediction and reactivation of sensory details in cortex. To address this, human participants learned the spatial layout of animals positioned in a cross maze. During fMRI, participants navigated between animals to reach a target, and in the process saw a predictable sequence of five animal images. Critically, to isolate activity patterns related to item predictions, rather than bottom-up inputs, one quarter of trials ended early, with a blank screen presented instead. Using multivariate pattern similarity analysis, we show that activity patterns in early visual cortex, posterior medial regions, and the posterior hippocampus were more similar when participants viewed the same item than when they viewed different items. Further, item effects in the posterior hippocampus were specific to the sequence context. Importantly, activity patterns associated with seeing an item in visual cortex and posterior medial cortex were also related to activity patterns when that item was expected but omitted, suggesting that sequence predictions were reinstated in these regions. Finally, multivariate connectivity analyses showed that patterns in the posterior hippocampus at one position in the sequence were related to patterns in early visual cortex and posterior medial cortex at a later position. Together, our results support the idea that hippocampal representations facilitate sensory processing by modulating visual cortical activity in anticipation of expected items.