Photo Print Attacks Dataset
Check samples on Kaggle

Introduction
The Photo Print Attacks Dataset offers a specialized resource for enhancing Presentation Attack Detection (PAD) models, especially suited for assessing liveness detection. With more than 3,000 unique individuals, this dataset is an invaluable asset for AI developers aiming to improve anti-spoofing capabilities. Used by both iBeta and NIST FATE, it is structured to support advanced AI model training focused on detecting photo print attacks.
Dataset summary
This dataset includes more than 7,000 photo print attacks, featuring diverse participants with a balanced representation of gender and ethnicity. Each attack is captured in a 10-20 second video that meets liveness detection standards, including high-quality imagery and realistic color to simulate authentic conditions.
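For orientation, the sketch below shows one way the 10-20 second attack clips could be broken into frames for model training, using OpenCV. The file path and sampling rate are illustrative assumptions, not part of the dataset specification.

```python
# Minimal sketch: sample frames from a print-attack clip for PAD training.
# The path "print_attacks/0001.mp4" and the one-frame-per-second rate are
# illustrative assumptions, not part of the dataset documentation.
import cv2

def sample_frames(video_path: str, every_n_seconds: float = 1.0):
    """Yield BGR frames taken roughly every `every_n_seconds` of video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield frame
        index += 1
    cap.release()

if __name__ == "__main__":
    frames = list(sample_frames("print_attacks/0001.mp4"))
    print(f"Sampled {len(frames)} frames")
```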
Source and collection methodology
The data collection process involved a large group of participants and carefully staged photo print attacks. Each attack video employs a zoom-in effect, as specified by NIST FATE, enhancing the AI's ability to distinguish print attacks from live subjects. Flat photos were used to ensure accuracy, with no bending or skewing, providing a consistent, straight-on presentation to the camera.
Use cases and applications
This dataset is ideal for developing and refining liveness detection models that must reliably differentiate between genuine selfies and photo print attacks. It is particularly beneficial for organizations working on facial recognition and biometric authentication that aim to improve the accuracy of spoof detection in PAD systems.
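As a rough illustration of this use case, the following sketch fine-tunes a torchvision ResNet-18 as a binary live-vs-attack classifier on frames exported from the dataset. The folder layout (`data/train/live`, `data/train/print_attack`), image size, and hyperparameters are assumptions chosen for the example rather than anything prescribed by the dataset.

```python
# Illustrative sketch of a binary PAD classifier (live vs. print attack).
# Folder names, image size, and hyperparameters are assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects frames exported to class folders, e.g. data/train/live, data/train/print_attack.
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: live, attack

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in train_loader:  # one pass shown; loop over epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```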
Dataset features
- 3,000+ Participants: Engaged in the project
- Diverse Representation: Balanced mix of genders and ethnicities
- 3,000+ Photo Print Attacks: staged using photos of the participants
Photo print attack description
- Each attack comprises a 10-20 second video with a zoom-in effect
- High-quality photos with realistic colors
- No visible image borders during the zoom-in phase
- Paper attacks conducted with flat photos facing the camera straight on (not bent or skewed)
Download information
A sample version of this dataset is available on Kaggle. Leave a request for additional samples in the form below.
Dataset details
This dataset is specifically designed for assessing liveness detection algorithms, as utilized by iBeta and NIST FATE. It is curated to train AI models to recognize photo print attacks targeting individuals. These attacks include zoom-in effects, as recommended by NIST FATE to enhance AI training outcomes.
- 1,000+ people
- 15-20 second print attack videos
- High-quality photos with realistic colors
- Various capturing devices were used
Best used for:
- Liveness detection
- Anti-spoofing attack detection (see the evaluation sketch below)
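PAD evaluations such as iBeta's follow ISO/IEC 30107-3, which reports APCER (attack presentation classification error rate) and BPCER (bona fide presentation classification error rate). The snippet below computes both from per-sample scores; the score convention and the 0.5 threshold are assumptions for illustration, not part of the dataset.

```python
# Sketch: ISO/IEC 30107-3 style metrics from per-sample scores.
# `scores` are hypothetical model outputs (higher = more likely an attack);
# `is_attack` marks ground truth. The 0.5 threshold is an assumption.

def apcer_bpcer(scores, is_attack, threshold=0.5):
    attacks = [s for s, a in zip(scores, is_attack) if a]
    bona_fide = [s for s, a in zip(scores, is_attack) if not a]
    # APCER: attack presentations wrongly accepted as bona fide (score below threshold).
    apcer = sum(s < threshold for s in attacks) / len(attacks)
    # BPCER: bona fide presentations wrongly rejected as attacks.
    bpcer = sum(s >= threshold for s in bona_fide) / len(bona_fide)
    return apcer, bpcer

if __name__ == "__main__":
    apcer, bpcer = apcer_bpcer(
        scores=[0.9, 0.8, 0.2, 0.1, 0.05],
        is_attack=[True, True, True, False, False],
    )
    print(f"APCER={apcer:.2f}, BPCER={bpcer:.2f}")
```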
Contact us
Tell us about yourself and get access to free samples of the dataset.