Overview
UMDFaces is a face dataset divided into two parts:
Part 1 - Still Images
The dataset contains 367,888 face annotations for 8,277 subjects, divided into three batches. We provide human-curated bounding boxes for faces, along with the estimated head pose (yaw, pitch, and roll), the locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
In addition, we also release a new face verification test protocol based on batch 3.
Part 2 - Video Frames
The second part contains 3,735,476 annotated video frames extracted from a total of 22,075 videos of 3,107 subjects. As with part 1, we provide the estimated head pose (yaw, pitch, and roll), the locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
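As a rough illustration of how per-face annotations like these might be consumed, here is a minimal parsing sketch. The column names below are illustrative assumptions, not the official schema; consult the release document and Readme for the exact annotation format.

```python
import csv
import io

# Hypothetical annotation layout (column names are assumptions, not the
# official UMDFaces schema -- see the release document for the real format):
# subject_id, image_path, face_x, face_y, face_width, face_height, yaw, pitch, roll
SAMPLE = """subject_id,image_path,face_x,face_y,face_width,face_height,yaw,pitch,roll
123,batch1/subj123/img001.jpg,40,52,120,150,-12.5,3.1,0.8
"""

def load_annotations(text):
    """Parse annotation rows into dicts with numeric fields converted."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append({
            "subject_id": int(row["subject_id"]),
            "image_path": row["image_path"],
            # Bounding box as (x, y, width, height) in pixels
            "bbox": (float(row["face_x"]), float(row["face_y"]),
                     float(row["face_width"]), float(row["face_height"])),
            # Head pose in degrees
            "pose": (float(row["yaw"]), float(row["pitch"]), float(row["roll"])),
        })
    return rows

anns = load_annotations(SAMPLE)
print(anns[0]["bbox"])  # -> (40.0, 52.0, 120.0, 150.0)
```

The same pattern would extend to the twenty-one keypoint coordinates and the gender field, whatever their actual column names turn out to be in the released files.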
Download
Before proceeding to download, please read the license carefully. Complete download details and instructions can be found in the release document. Please also read the Readme.
Specifically, part 1 of the dataset can be downloaded from the following links. (See the release document and our paper for more details.)
We also provide a Caffe model and demo code for fiducial keypoint detection.
The video frames for part 2 of the dataset can be downloaded from here (195GB). The corresponding bounding box annotations, keypoint locations, pose, and gender information can be found in this file.
If you want to download the corresponding videos (1.2TB), please contact Ankan Bansal.
Errata
Please note that this latest release of the dataset corrects some mistakes in the older version (subjects repeated under different names). For continuity, the older version of the dataset can still be found at the original download link.
References
If you use our dataset or model, please cite our papers:
Last Modified: May 23rd, 2017. Please direct comments to Ankan Bansal.