Previous research has shown that exposure to within-person variability facilitates face learning. A different body of work has examined potential benefits of providing multiple images in face matching tasks. In these tasks, viewers judge whether a target face matches a single face image (as when checking photo-ID) or any of multiple face images of the same person. The evidence here is less clear, with some studies finding a small multiple-image benefit and others finding no advantage. In four experiments, we address this discrepancy between the multiple-image benefits observed in learning and in matching studies. We show that multiple-image arrays facilitate face matching only when the array precedes the target. Unlike simultaneous face matching tasks, sequential matching and learning tasks involve memory and require abstraction of a stable representation of the face from the array, for subsequent comparison with a target. Our results show that benefits from multiple-image arrays occur only when this abstraction is required, and not when array and target images are available at once. These studies reconcile apparent differences between face learning and face matching and provide a theoretical framework for the study of within-person variability in face perception.