Significant advances have been made in understanding human face recognition. However, a fundamental aspect of this process, namely how faces are located in the visual environment, remains poorly understood and little studied. Here we examine the role of color in human face detection. We demonstrate that detection performance declines when color information is removed from faces, regardless of whether the surrounding scene context is rendered in color. Furthermore, faces rendered in unnatural colors are difficult to detect, suggesting that color plays a role beyond simple figure-ground segmentation. When faces are presented with half the surface colored appropriately and half unnaturally, performance also declines, indicating that observers are not simply using the presence of skin-colored "patches" to detect faces. Rather, our data suggest that detection operates via a face template that combines diagnostic color and face-shape information. These findings are consistent with the color-template approaches used in some computer-based face detection systems.
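The color-template idea referenced in the final sentence can be illustrated with a minimal sketch. The RGB thresholds below are a commonly cited skin-classification heuristic from the computer-vision literature, not the method used in this study; the patch-scoring function is a hypothetical illustration of combining a color cue with a spatial template.

```python
def is_skin(rgb):
    """Classify a single RGB pixel as skin-like using a common
    rule-of-thumb heuristic (illustrative only, not the authors' method)."""
    r, g, b = rgb
    return (r > 95 and g > 40 and b > 20
            and (max(rgb) - min(rgb)) > 15   # sufficient color variation
            and abs(r - g) > 15              # red dominates green clearly
            and r > g and r > b)             # red is the strongest channel

def skin_fraction(patch):
    """Score a candidate face region by the fraction of skin-like pixels.
    `patch` is a list of (r, g, b) tuples; a simple color-template detector
    would accept regions whose score exceeds a threshold AND whose shape
    matches a face-like spatial template."""
    if not patch:
        return 0.0
    return sum(is_skin(px) for px in patch) / len(patch)
```

A detector built this way fails on grayscale or unnaturally colored faces, which parallels the behavioral declines reported above.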