Media forensic applications span multiple tasks, for example manipulation detection and localization, Generative Adversarial Network (GAN) detection, image splice detection and localization, event verification, camera verification, and provenance history analysis.
OpenMFC initially focuses on manipulation detection and deepfake tasks; future challenges may be expanded based on community interest. OpenMFC 2022 has the following three task categories: Manipulation Detection (MD), Deepfake Detection (DD), and Steganography Detection (StegD).
A brief summary of each category and its tasks is given below. In these summaries, the evaluation media are described as follows: a 'base' is original media with high provenance, a 'probe' is a test media item, and a 'donor' is media whose content was donated into the base to generate the probe. For a full description of the evaluation tasks, please refer to the OpenMFC 2022 Evaluation Plan [Download Link].
The objective of Manipulation Detection (MD) is to detect whether a probe has been manipulated and, if so, to spatially localize the edits. Manipulation in this context is defined as deliberate modification of media (e.g., splicing or cloning); localization is encouraged but not required for OpenMFC.
The MD category includes three tasks, namely:
The Image Manipulation Detection (IMD) task is to detect whether an image has been manipulated and then to spatially localize the manipulated region. For detection, the IMD system provides a confidence score for each probe (i.e., a test image), with higher scores indicating the image is more likely to have been manipulated. Target probes (i.e., probes that should be detected as manipulated) may contain any image manipulation, while non-target probes (i.e., probes containing no manipulation) are high-provenance images known to be original. Systems are required to process and report a confidence score for every probe.
For the localization part of the task, the system provides an image bit-plane mask (either binary or greyscale) indicating the manipulated pixels. Only local manipulations (e.g., clone) require a mask output; global manipulations (e.g., blur) that affect the entire image do not.
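As a rough illustration of producing such a mask, here is a minimal sketch in Python using NumPy and Pillow. The score array, file name, and polarity convention (black = manipulated) are all assumptions for illustration, not the official output format; consult the evaluation plan for the required convention.

```python
import numpy as np
from PIL import Image

# Hypothetical per-pixel manipulation scores in [0, 1], higher = more
# likely manipulated (e.g., the output of a localization model).
scores = np.random.rand(480, 640)

# Map scores to an 8-bit greyscale mask. We assume black (0) marks
# manipulated pixels here; the evaluation plan defines the real polarity.
mask = ((1.0 - scores) * 255).astype(np.uint8)

Image.fromarray(mask).save("probe_0001_mask.png")
```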
The Image Splice Manipulation Detection (ISMD) task is new in OpenMFC 2022 and is intended to support entry-level public participants. ISMD covers the 'splice' manipulation operation only. Its test set is small (2K images), containing either original images without any manipulation or spliced images. The ISMD task is to detect whether a probe image has been spliced.
The Video Manipulation Detection (VMD) task is to detect whether a video has been manipulated; localization of spatial or temporal-spatial manipulated regions is not addressed in this task. For detection, the VMD system provides a confidence score for each probe (i.e., a test video), with higher scores indicating the video is more likely to have been manipulated. Target probes (i.e., probes that should be detected as manipulated) may contain any video manipulation, while non-target probes (i.e., probes containing no manipulation) are high-provenance videos known to be original. Systems are required to process and report a confidence score for every probe.
With recent advances in deepfake techniques and Generative Adversarial Networks (GANs), imagery producers can generate realistic fake objects in media. The objective of Deepfake Detection (DD) is to detect whether a probe has been manipulated by deepfake or GAN techniques.
The DD category includes two tasks, distinguished by the type of test media.
All probes must be processed independently of each other within a given task and across all tasks; content extracted from one probe must not affect the processing of another probe.
For the OpenMFC 2022 evaluation, all tasks should run under the following conditions:
For the image tasks, the system may use only the pixel-based content of the image as input. No image header or other information may be used.
For the video tasks, the system may use only the pixel-based content of the video and its audio (if audio exists) as input. No video header or other information may be used.
For detection performance assessment, system performance is measured by the Area Under the Curve (AUC), the primary metric, and by the Correct Detection Rate at a False Alarm Rate of 5% (CDR@FAR = 0.05), both derived from the Receiver Operating Characteristic (ROC) shown in Figure 1 below. This applies to both image and video tasks.
For image localization performance assessment, the Optimum Matthews Correlation Coefficient (MCC) is the primary metric. The optimum MCC is computed with an ideal, mask-specific threshold found by evaluating the metric at every pixel threshold. Figure 2 below shows a visualization of the different mask regions used in mask image evaluations.
Figure 1. Detection System Performance Metrics: Receiver Operating Characteristic (ROC) and Area Under the Curve (AUC)
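For a concrete sense of the detection metrics, the following minimal sketch (using scikit-learn, with toy labels and scores; this is not the official scoring tool) computes AUC and CDR@FAR = 0.05 from per-probe confidence scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy ground truth (1 = manipulated target, 0 = high-provenance
# non-target) and toy system confidence scores.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1])

fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC:", auc(fpr, tpr))

# CDR@FAR = 0.05: the highest correct detection rate achieved while
# the false alarm rate stays at or below 5%.
print("CDR@FAR=0.05:", tpr[fpr <= 0.05].max())
```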
Figure 2. Localization System Performance Metrics: Optimum Matthews Correlation Coefficient (MCC)
Figure 3. An Example of Localization System Evaluation Report
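The optimum-MCC computation can be sketched as follows, using toy random masks and scikit-learn's matthews_corrcoef. Note this simplification assumes higher mask values mark manipulated pixels and ignores the no-score regions and weighting shown in Figure 2, which the official scorer handles.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Toy masks: 'ref' is the binary reference (1 = manipulated pixel);
# 'sys_mask' is the system's greyscale mask output in [0, 255].
ref = (np.random.rand(64, 64) > 0.8).astype(int).ravel()
sys_mask = np.random.randint(0, 256, size=64 * 64)

# Optimum MCC: sweep all pixel thresholds and keep the best score.
best = max(
    matthews_corrcoef(ref, (sys_mask >= t).astype(int))
    for t in range(256)
)
print("Optimum MCC:", best)
```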
Registered participants will get access to datasets created by the DARPA Media Forensics (MediFor) Program [Website Link]. Registrants will receive data access credentials during the registration process.
There will be both development data sets (which include reference material) and evaluation data sets (which consist only of probes for testing systems). Each data set is structured as described in the “MFC Data Set Structure Summary” section below.
The NIST OpenMFC datasets are designed for and used in the OpenMFC evaluations. They include the following items:
The index files are pipe-separated CSV files. The index file for the Manipulation task contains the columns specified in the OpenMFC 2022 Evaluation Plan.
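A sketch of reading such an index file is shown below. The file name and column name are hypothetical placeholders; the authoritative column list is given in the evaluation plan.

```python
import csv

# Read a pipe-separated index file and iterate over its probes.
with open("image_manipulation_index.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter="|"):
        probe_file = row["ProbeFileName"]  # hypothetical column name
        print(probe_file)  # placeholder: run the detection system here
```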
Date | Event |
---|---|
Nov. 14-15, 2023 | OpenMFC 2023 workshop |
Oct. 25, 2023 | OpenMFC 2023 submission deadline |
Dec. 6-7, 2022 | OpenMFC 2022 workshop |
Nov. 15, 2022 | OpenMFC 2022 submission deadline |
Aug. 1 - Aug. 30, 2022 | OpenMFC 2022 participant pre-challenge phase (QC testing) |
July 29, 2022 | OpenMFC STEG challenge dataset available |
July 28, 2022 | OpenMFC 2022 Leaderboard open for the next evaluation cycle |
Jul. 26, 2022 | (New) OpenMFC dataset resource website |
Mar. 03, 2022 | OpenMFC 2022 Eval Plan available |
Feb. 15, 2022 | OpenMFC 2021 Workshop Talks and Slides available |
Dec. 7- 10, 2021 | OpenMFC/TRECVID 2021 Virtual Workshop |
Nov. 1, 2021 | OpenMFC 2021 Virtual Workshop agenda finalization |
Oct. 30, 2021 | OpenMFC 2020-2021 submission deadline |
May 15, 2021 | OpenMFC 2020-2021 submission open |
April 23, 2021 - May 09, 2021 | |
August 31, 2020 | OpenMFC evaluation GAN image and video dataset available |
August 21, 2020 | OpenMFC evaluation image and video dataset available |
August 17, 2020 | OpenMFC development datasets resource available |
Documenting each system is vital to interpreting evaluation results. Each submitted system, identified by a unique submission identifier, must therefore be accompanied by: Submission Identifier(s), System Description, OptOut Criteria, System Hardware Description and Runtime Computation, Training Data and Knowledge Sources, and References.
Using the <SubID> (a team-defined label for the system submission, without spaces or special characters), all system output submissions must be formatted according to the following directory structure:
<SubID>/
    <SubID>.txt             The system description file, described in Appendix A-a
    <SubID>.csv             The system output file
    /mask                   The system output mask directory
        {MaskFileName1}.png The system output mask file
As an example, if the team is submitting <SubID> baseline_3, their directory would be:
baseline_3/
    baseline_3.txt
    baseline_3.csv
    /mask
Next, build a zip or tar file of your submission and post the file on a web-accessible URL that does not require user/password credentials.
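A minimal packaging sketch follows. The CSV columns and file contents here are placeholders, not the official submission format; the required header and fields are defined in the evaluation plan.

```python
import csv
import os
import shutil

sub_id = "baseline_3"
os.makedirs(f"{sub_id}/mask", exist_ok=True)

# System description file (contents per Appendix A-a of the eval plan).
with open(f"{sub_id}/{sub_id}.txt", "w") as f:
    f.write("System description goes here.\n")

# System output file; columns below are hypothetical placeholders.
with open(f"{sub_id}/{sub_id}.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerow(["ProbeFileID", "ConfidenceScore"])  # hypothetical
    writer.writerow(["probe_0001", "0.87"])

# Build the archive to post at a web-accessible URL.
shutil.make_archive(sub_id, "zip", root_dir=".", base_dir=sub_id)
```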
Make your submission using the OpenMFC website. To do so, follow these steps: