OPEN SCIENCE

EgoBlur: a set of AI models designed to preserve privacy by detecting and blurring PII from images

INTRODUCING EGOBLUR GEN 2

EgoBlur Gen 2 protects user privacy by detecting and blurring PII from images on egocentric devices.


EgoBlur Gen 2 is Meta’s next-generation open-source solution designed to preserve privacy in egocentric video and imagery. Building on the success of EgoBlur, this updated suite gives researchers and developers the tools to automatically detect and obscure personally identifiable information (PII), such as faces and license plates, captured by Project Aria Gen 2 devices and other head-mounted cameras.

How is EgoBlur Gen 2 Evaluated?

To evaluate performance and reduce bias, EgoBlur is benchmarked against the Aria Gen 2 Pilot Dataset and CCV2 Dataset.

Self-reported ‘responsible AI labels’ from the CCV2 dataset are used to evaluate EgoBlur across a number of attributes, such as skin tone, self-identified gender, age, and country. This helps ensure EgoBlur works consistently for everybody.


Key Enhancements with EgoBlur Gen 2

EgoBlur Gen 2 is tuned for egocentric data and trained using data captured on Aria Gen 2 hardware. It also adds native support for high-resolution VRS, PNG, JPEG, and MP4 files.

Unlike previous versions, the Gen 2 command-line tool automatically preserves the input video’s original FPS, ensuring the blurred output remains perfectly synced with the source material.

Read the accompanying EgoBlur Research Paper

For more information about the EgoBlur model, read our paper on arXiv.


BibTex Citation

If you use the EgoBlur Model in your research, please cite the following:

@misc{raina2023aria,
  title={EgoBlur Model},
  author={Nikhil Raina and Guruprasad Somasundaram and Kang Zheng and Sagar Miglani and Steve Saarinen and Jeff Meissner and Mark Schwesinger and Luis Pesqueira and Ishita Prasad and Edward Miller and Prince Gupta and Mingfei Yan and Richard Newcombe and Carl Ren and Omkar Parkhi},
  year={2023},
  eprint={2308.13093},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

If you use the dataset or tools on GitHub, please consider starring the repo.

By submitting your email and accessing the EgoBlur model, you agree to abide by the model license agreement and to receive emails in relation to the model.

Download EgoBlur Gen 1 and Gen 2

If you are an AI or ML researcher, access the EgoBlur models and accompanying tools here.

Frequently Asked Questions

Can the EgoBlur models be used commercially?

Yes. Both the face and license plate models are licensed under Apache 2.0, meaning they are available for both research and industry applications.

How large are the EgoBlur models?

Both the EgoBlur face and license plate models are approximately 400 MB, with ~104 million parameters.
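The two figures are consistent with each other if the weights are stored as 32-bit floats, a quick sanity check (the 4 bytes/parameter figure assumes fp32 storage):

```python
# Rough size check: ~104M parameters stored as fp32 (4 bytes each).
params = 104_000_000
bytes_per_param = 4  # fp32; fp16 checkpoints would be half this
size_mb = params * bytes_per_param / 1e6
print(f"{size_mb:.0f} MB")  # prints "416 MB", close to the reported ~400 MB
```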

How long do the models take to train?

The EgoBlur face model takes approximately 7 days to train on 4 machines with 8 NVIDIA V100 GPUs. The license plate model takes about a day to train on a similar configuration.

Do the EgoBlur models identify or track people?

No. Like other open-source models, such as RetinaFace, the EgoBlur models are trained only to locate the positions of faces and vehicle license plates within color or greyscale images. The models are not used to track or identify individual faces or license plates.

Does EgoBlur work on non-egocentric data?

Yes. In addition to egocentric data from Project Aria, the EgoBlur models are trained on non-egocentric data from the CCV2 dataset, and should perform comparably to state-of-the-art models like RetinaFace on such data.

What architecture does EgoBlur use?

EgoBlur is based on the Faster R-CNN model with a ResNeXt backbone. The models are trained using Meta’s publicly available Detectron2 and D2Go libraries.
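As a sketch of what that setup looks like in Detectron2's config system, a Faster R-CNN with a ResNeXt-101 32x8d FPN backbone resembles the model zoo's faster_rcnn_X_101_32x8d_FPN_3x.yaml (illustrative only; this is not the actual EgoBlur training config):

```yaml
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  MASK_ON: False   # detection only, no segmentation heads
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  RESNETS:
    STRIDE_IN_1X1: False  # ResNeXt places the stride in the 3x3 conv
    NUM_GROUPS: 32        # grouped convolutions make it a ResNeXt
    WIDTH_PER_GROUP: 8
    DEPTH: 101
```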

How fast do the models run?

For a 2 MP RGB image, both the EgoBlur face detection and license plate models run in approximately 0.5 seconds on a GPU and 8 seconds on a CPU. For a 0.3 MP greyscale image, both models run in approximately 0.3 seconds on a GPU and 1.8 seconds on a CPU.

Do the models output segmentation masks or identity labels?

No. The EgoBlur face and license plate models are for detection only: they output rectangular bounding boxes, not masks or labels.
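Since the models emit only boxes, the blurring itself is a separate post-processing step. A minimal NumPy sketch of that step (the box format here, (x1, y1, x2, y2) pixel coordinates, is an assumption for illustration, and a simple mean filter stands in for a production-quality Gaussian blur):

```python
import numpy as np

def blur_boxes(image: np.ndarray, boxes, k: int = 9) -> np.ndarray:
    """Blur each detected region of an HxW (or HxWxC) image; boxes are (x1, y1, x2, y2)."""
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        region = out[y1:y2, x1:x2].astype(float)
        # Separable mean filter: convolve rows, then columns.
        for axis in (0, 1):
            kernel = np.ones(min(k, region.shape[axis]))
            kernel /= kernel.size
            region = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, region)
        out[y1:y2, x1:x2] = region.astype(image.dtype)
    return out

# Example: blur one detection in a random greyscale frame.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
blurred = blur_boxes(frame, [(100, 50, 220, 170)])
```

Pixels outside the boxes are left untouched, so the rest of the frame stays pixel-identical to the input.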

Does EgoBlur work on videos as well as images?

Yes, the EgoBlur models work on images, videos, and VRS files.

Where can I report bugs or ask questions?

Please email projectaria@meta.com to report any bugs or if you have further queries about the EgoBlur models.

Subscribe to Project Aria Updates

Stay in the loop with the latest news from Project Aria.

By providing your email, you agree to receive marketing related electronic communications from Meta, including news, events, updates, and promotional emails related to Project Aria. You may withdraw your consent and unsubscribe from these at any time, for example, by clicking the unsubscribe link included on our emails. For more information about how Meta handles your data please read our Data Policy.