The iBeta Level 1 Paper Attacks Dataset provides a comprehensive collection of paper-based spoofing attacks specifically designed for Presentation Attack Detection (PAD) testing at iBeta Level 1. The dataset is tailored for researchers and developers working on liveness detection, offering a wide range of paper mask attack variations for training anti-spoofing AI models.
The dataset includes over 18,000 paper mask attacks performed by 40+ participants, with balanced representation across gender and ethnicity (Caucasian, Black, and Asian). Each attack sequence is recorded on both iOS and Android devices, providing varied perspectives and multi-frame, 10-second videos to support active liveness detection.
The data collection process involved real-life selfies and videos from participants, followed by multiple paper attack types, such as print, cutout, cylinder, and 3D mask attacks. Each video includes zoom-in and zoom-out phases to enhance the dataset’s application in active liveness detection, simulating realistic spoofing attempts.
This dataset is ideal for teams focused on liveness detection and PAD model training. It’s especially valuable for developers preparing their models for iBeta certification, as it includes a comprehensive set of spoofing scenarios required for Level 1 testing.
One of our partners tested our dataset and a competitor’s dataset using their own liveness detection model while preparing for iBeta Level 1 certification. The results show a clear difference in difficulty between the two datasets. Both datasets were tested on a sample of approximately 200 attack attempts each, ensuring a fair comparison.
• Our dataset presents a greater challenge for liveness detection models: the model frequently misclassified attack images as real (score 0), indicating that our spoofing techniques are more advanced and harder to detect.
• The competitor’s dataset, by contrast, was mostly detected as attacks (score 1), except for a single attack type on which the model showed some uncertainty.
This demonstrates that our dataset provides more value for training robust liveness detection models, as it exposes them to more deceptive and realistic attacks.
Understanding the scores:
• Score = 1 → Attack detected (label 1, red dots)
• Score = 0 → Model thinks it’s a real person (label 0, green dots)
By training on a more challenging dataset, models can significantly improve their spoof detection capabilities, making them more resilient against real-world threats.
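The scoring convention above maps directly onto a standard PAD metric: the fraction of attack presentations that slip through as "real" is the APCER defined in ISO/IEC 30107-3. Below is a minimal sketch of that calculation; the function name, the 0.5 threshold, and the score values are illustrative assumptions, not figures from the actual evaluation.

```python
def apcer(attack_scores, threshold=0.5):
    """Attack Presentation Classification Error Rate for one attack type.

    Scores follow the convention above: 1 = attack detected, 0 = real.
    A score below the threshold means the model accepted the attack as real.
    """
    misses = sum(1 for s in attack_scores if s < threshold)
    return misses / len(attack_scores)

# Hypothetical model scores on attack-only samples (illustrative values only):
ours = [0.1, 0.4, 0.2, 0.9, 0.3]         # many attacks scored as "real"
competitor = [0.95, 0.99, 0.9, 0.97, 0.6]  # almost all flagged as attacks

print(f"APCER (ours): {apcer(ours):.2f}")              # 0.80
print(f"APCER (competitor): {apcer(competitor):.2f}")  # 0.00
```

A higher APCER on a dataset means the model misses more of its attacks, which is exactly the gap the comparison above describes.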
Some of the spoof attacks in our dataset were tested on Doubango, a leading 3D liveness detection framework.
Doubango performs advanced 3D liveness checks using a single 2D image and claims to outperform market leaders like FaceTEC, BioID, Onfido, and Huawei in both speed and accuracy.
During testing, our attack images bypassed Doubango’s security checks, with the system generating green bounding boxes around the faces (indicating acceptance as “live” users). This confirms that the attacks were not flagged as spoofs, demonstrating their ability to trick even high-performance systems.
These results highlight the quality of our dataset for training robust anti-spoofing models capable of defending against evolving threats in real-world scenarios.
1. Real-life selfies & videos from participants
Genuine facial data collected under various lighting conditions and angles to ensure robust system evaluation.
2. Print and cutout paper attacks
Attackers use printed photos or cutout masks with eye and mouth holes to trick recognition systems.
3. Cylinder attack to create volume effect
A printed face is wrapped around a cylindrical object to simulate a 3D structure. This method is effective in deceiving simple 2D detection algorithms.
4. Paper attacks on an actor with head/eye variations
A paper face is placed over a real person’s head to mimic real facial movement. Variations include blinking, head tilts, and expressions to test system resilience.
5. 3D paper masks with volume-based elements such as a nose
High-quality 3D masks incorporate raised features such as a nose to enhance realism, making them more challenging for liveness detection algorithms.
A sample version of this dataset is available on Kaggle. Leave a request for additional samples in the form below.
This dataset is specifically designed for assessing liveness detection algorithms, as utilized by iBeta and NIST FATE. It is curated to train AI models in recognizing photo print attacks targeting individuals. These attacks incorporate zoom effects, as recommended by NIST FATE, to enhance AI training outcomes.
Tell us about yourself, and get access to free samples of the dataset
© 2022 – 2024 Copyright protected.