Media forensic applications span multiple tasks, for example manipulation detection and localization, Generative Adversarial Network (GAN) detection, image splice detection and localization, event verification, camera verification, and provenance history analysis.
OpenMFC initially focuses on manipulation detection and deepfake detection; the challenges may be expanded in the future based on community interest. OpenMFC 2022 has the following three task categories: Manipulation Detection (MD), Deepfakes Detection (DD), and Steganography Detection (StegD).
A brief summary of each category and its tasks is given below. In the summaries, the evaluation media are described as follows: a 'base' is original media with high provenance, a 'probe' is a test media item, and a 'donor' is another media item whose content was donated into the base to generate the probe. For a full description of the evaluation tasks, please refer to the OpenMFC 2022 Evaluation Plan [Download Link].
The objective of Manipulation Detection (MD) is to detect whether a probe has been manipulated and, if so, to spatially localize the edits. Manipulation in this context is defined as a deliberate modification of media (e.g., splicing or cloning); localization is encouraged but not required for OpenMFC.
The MD category includes three tasks: Image Manipulation Detection (IMD), Image Splice Manipulation Detection (ISMD), and Video Manipulation Detection (VMD).
The Image Manipulation Detection (IMD) task is to detect whether an image has been manipulated and then to spatially localize the manipulated region. For detection, the IMD system provides a confidence score for every probe (i.e., a test image), with higher numbers indicating that the image is more likely to have been manipulated. Target probes (i.e., probes that should be detected as manipulated) potentially include any image manipulation, while non-target probes (i.e., probes not containing image manipulations) include only high-provenance images that are known to be original. Systems are required to process and report a confidence score for every probe.
For the localization part of the task, the system provides an image bit-plane mask (either binary or grayscale) indicating the manipulated pixels. Only local manipulations (e.g., clone) require a mask output; global manipulations (e.g., blur) that affect the entire image do not.
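For illustration only, here is a minimal sketch of writing a grayscale localization mask with NumPy and Pillow. The polarity convention used here (darker pixels mean higher manipulation confidence), the dimensions, and the output file name are assumptions; the required mask conventions are defined in the evaluation plan.

```python
# Minimal sketch: writing a grayscale localization mask as a PNG.
# Assumes darker pixel values indicate higher confidence of manipulation
# (verify the required polarity and file naming in the evaluation plan).
import os
import numpy as np
from PIL import Image

height, width = 480, 640                    # should match the probe image dimensions
confidence = np.random.rand(height, width)  # hypothetical per-pixel manipulation confidence in [0, 1]

# Map confidence to an 8-bit grayscale mask: high confidence -> dark pixel.
mask = (255 * (1.0 - confidence)).astype(np.uint8)

os.makedirs("mask", exist_ok=True)
Image.fromarray(mask, mode="L").save("mask/probe_0001_mask.png")
```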
A new task, Image Splice Manipulation Detection (ISMD), has been added in OpenMFC 2022 to support entry-level public participants. ISMD is designed for the 'splice' manipulation operation only. The test dataset is small (2K images) and contains either original images without any manipulation or spliced images. The ISMD task is to detect whether a probe image has been spliced.
The Video Manipulation Detection (VMD) task is to detect whether a video has been manipulated; localization of spatially or spatio-temporally manipulated regions is not addressed in this task. For detection, the VMD system provides a confidence score for every probe (i.e., a test video), with higher numbers indicating that the video is more likely to have been manipulated. Target probes (i.e., probes that should be detected as manipulated) potentially include any video manipulation, while non-target probes (i.e., probes not containing video manipulations) include only high-provenance videos that are known to be original. Systems are required to process and report a confidence score for every probe.
With recent advances in deepfake and Generative Adversarial Network (GAN) techniques, imagery producers are able to generate realistic fake objects in media. The objective of Deepfakes Detection (DD) is to detect whether a probe has been manipulated with deepfake or GAN techniques.
The DD category includes two tasks based on the test media type: one for images and one for videos.
All probes must be processed independently of each other within a given task and across all tasks; that is, content extracted from one probe must not affect the processing of any other probe.
For the OpenMFC 2022 evaluation, all tasks should run under the following conditions:
For the image tasks, the system may use only the pixel-based content of each image as input; no image header or other metadata may be used.
For the video tasks, the system may use only the pixel-based content of each video, plus its audio track (if one exists), as input; no video header or other metadata may be used. Both conditions are illustrated in the sketch below.
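A minimal sketch of the pixel-only input condition, assuming Pillow is used for image decoding and OpenCV for video decoding; the file names are placeholders. Only decoded pixel (and, for video, audio) samples are handed to the system, and no header, EXIF, or container metadata is read.

```python
# Minimal sketch: pixel-only inputs for the image and video tasks.
import numpy as np
from PIL import Image
import cv2

# Image task: decode to an RGB pixel array; header and EXIF fields are ignored.
pixels = np.asarray(Image.open("probe_image.jpg").convert("RGB"))

# Video task: decode frame by frame; the audio track (if present) may also be
# decoded, but container metadata is not consulted.
capture = cv2.VideoCapture("probe_video.mp4")
frames = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    frames.append(frame)  # BGR pixel array for one frame
capture.release()
```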
For detection performance assessment, system performance is measured by the Area Under the Curve (AUC), which is the primary metric, and by the Correct Detection Rate at a False Alarm Rate of 5% (CDR@FAR = 0.05), both derived from the Receiver Operating Characteristic (ROC) shown in Figure 1 below. This applies to both image and video tasks.
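For illustration, a minimal sketch of computing these two detection metrics with scikit-learn, assuming per-probe confidence scores and reference labels are available; the labels and scores below are hypothetical, and the official scorer may differ in details such as interpolation and tie handling.

```python
# Minimal sketch: AUC and CDR@FAR=0.05 from per-probe scores and labels.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # hypothetical reference labels (1 = manipulated)
scores = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1])    # hypothetical system confidence scores

fpr, tpr, _ = roc_curve(y_true, scores)
roc_auc = auc(fpr, tpr)  # primary metric: area under the ROC curve

# CDR@FAR=0.05: the correct detection (true positive) rate read off the ROC
# at a false alarm rate of 5%, here by linear interpolation.
cdr_at_far05 = float(np.interp(0.05, fpr, tpr))

print(f"AUC = {roc_auc:.3f}, CDR@FAR=0.05 = {cdr_at_far05:.3f}")
```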
For image localization performance assessment, the Optimum Matthews Correlation Coefficient (MCC) is the primary metric. The optimum MCC is calculated using an ideal, mask-specific threshold found by computing the metric over all pixel thresholds. Figure 2 below shows a visualization of the different mask regions used for mask image evaluations.
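For illustration, a minimal sketch of the optimum-MCC computation, assuming a grayscale system mask, a binary reference mask, and the polarity convention that darker system-mask pixels indicate manipulation; the official scorer also accounts for the different mask regions shown in Figure 2.

```python
# Minimal sketch: optimum MCC over all pixel thresholds of a grayscale system mask.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def optimum_mcc(system_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Maximum MCC over all pixel thresholds of the system mask.

    Assumes reference_mask is binary (1 = manipulated pixel) and that darker
    system-mask values indicate manipulation (check the evaluation plan for
    the required polarity).
    """
    ref = reference_mask.ravel().astype(int)
    best = -1.0
    for threshold in np.unique(system_mask):
        pred = (system_mask.ravel() <= threshold).astype(int)
        best = max(best, matthews_corrcoef(ref, pred))
    return best

# Hypothetical example masks (64x64).
reference = (np.random.rand(64, 64) > 0.9).astype(int)
system = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(f"Optimum MCC = {optimum_mcc(system, reference):.3f}")
```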
Figure 1. Detection System Performance Metrics: Receiver Operating Characteristic (ROC) and Area Under the Curve (AUC)
Figure 2. Localization System Performance Metrics: Optimum Matthews Correlation Coefficient (MCC)
Figure 3. An Example of Localization System Evaluation Report
Registered participants will get access to datasets created by the DARPA Media Forensics (MediFor) Program [Website Link]. Data access credentials are provided during the registration process.
There are both development datasets (which include reference material) and evaluation datasets (which consist of only probe images to test systems). Each dataset is structured similarly, as described in the “MFC Data Set Structure Summary” section below.
The NIST OpenMFC datasets are designed and used for the NIST OpenMFC evaluations. The datasets include the following items:
The index files are pipe-separated CSV-formatted files; the index file for the Manipulation task lists the probes to be processed, with its columns specified in the evaluation plan.
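For illustration, a minimal sketch of reading a pipe-separated index file with pandas; the file path is a placeholder, and the columns are read from the distributed index file itself rather than assumed here.

```python
# Minimal sketch: reading an OpenMFC pipe-separated index file.
import pandas as pd

index_df = pd.read_csv("indexes/openmfc_image_index.csv", sep="|")

# Inspect the columns actually present rather than assuming them.
print(index_df.columns.tolist())
print(f"{len(index_df)} probes listed")

# Each row identifies one probe the system must process and score.
for _, row in index_df.iterrows():
    pass  # run the detection system on the probe referenced by this row
```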
Date | Event |
---|---|
Nov. 14-15, 2023 | OpenMFC 2023 workshop |
Oct. 25, 2023 | OpenMFC 2023 submission deadline |
Dec. 6-7, 2022 | OpenMFC 2022 workshop |
Nov. 15, 2022 | OpenMFC 2022 submission deadline |
Aug. 1 - Aug. 30, 2022 | OpenMFC 2022 participant pre-challenge phase (QC testing) |
July 29, 2022 | OpenMFC STEG challenge dataset available |
July 28, 2022 | OpenMFC 2022 Leaderboard open for the next evaluation cycle |
Jul. 26, 2022 | (New) OpenMFC dataset resource website |
Mar. 03, 2022 | OpenMFC 2022 Evaluation Plan available
Feb. 15, 2022 | OpenMFC 2021 Workshop Talks and Slides available
Dec. 7- 10, 2021 | OpenMFC/TRECVID 2021 Virtual Workshop |
Nov. 1, 2021 | OpenMFC 2021 Virtual Workshop agenda finalization |
Oct. 30, 2021 | OpenMFC 2020-2021 submission deadline |
May 15, 2021 | OpenMFC 2020-2021 submission open |
April 23, 2021 - May 09, 2021 |
August 31, 2020 | OpenMFC evaluation GAN image and video dataset available |
August 21, 2020 | OpenMFC evaluation image and video dataset available |
August 17, 2020 | OpenMFC development datasets resource available |
RANK | SUBMISSION ID | SUBMISSION DATE | TEAM NAME | SYSTEM NAME | AUC | CDR@0.05FAR | ROC CURVE | AVERAGE OPTIMAL MCC |
---|---|---|---|---|---|---|---|---|
1 | 63 | 2021-06-07 11:26:58 | Mayachitra | test1june6 | 0.993707 | 0.972 | ||
2 | 10 | 2020-11-05 21:53:02 | UIIA | naive-efficient | 0.616186 | 0.071351 | ||
3 | 67 | 2021-06-08 00:51:16 | UIIA | testIMDL | 0.5 | 0.05 | ||
4 | 81 | 2021-06-26 00:31:16 | UIIA | testIMDL | 0.0553688699305928 |
RANK | SUBMISSION ID | SUBMISSION DATE | TEAM NAME | SYSTEM NAME | AUC | CDR@0.05FAR | ROC CURVE | AVERAGE OPTIMAL MCC |
---|---|---|---|---|---|---|---|---|
1 | 90 | 2021-07-10 09:56:14 | UIIA | test | 0.689716 | 0.207018 | ||
2 | 93 | 2021-07-30 18:13:11 | UIIA | test | 0.683956 | 0.187135 | ||
3 | 138 | 2024-06-13 15:23:04 | DeepFake-Test | System-Test | 0.578435 | 0.059649 | ||
4 | 137 | 2024-06-13 15:22:57 | DeepFake-Test | System-Test | 0.578435 | 0.059649 | ||
5 | 136 | 2024-06-13 15:22:50 | DeepFake-Test | System-Test | 0.578435 | 0.059649 | ||
6 | 75 | 2021-06-14 04:12:30 | UBMDFL_IGMD | dry-run | 0.554261 | 0.012865 | ||
7 | 52 | 2021-06-01 15:41:20 | UBMDFL_IGMD | dry-run | 0.547125 | 0.009357 | ||
8 | 82 | 2021-06-23 09:21:26 | UIIA | ptchatt | 0.500033 | 0.051077 | ||
9 | 36 | 2021-05-06 07:32:03 | UIIA | test | 0.5 | 0.05 | ||
10 | 91 | 2021-07-21 15:53:47 | UIIA | test | 0.478445 | 0.009357 | ||
11 | 86 | 2021-07-07 06:59:01 | UIIA | test | 0.41193 | 0.00117 | ||
12 | 87 | 2021-07-07 07:04:14 | UIIA | test | 0.403957 | 0.004678 | ||
13 | 89 | 2021-07-08 04:20:47 | UIIA | test | 0.398674 | 0.003509 | ||
14 | 83 | 2021-06-23 11:08:50 | UIIA | ptchatt | 0.393886 | 0.025039 | ||
15 | 85 | 2021-07-05 16:16:58 | UIIA | test | 0.38569 | 0.0 | ||
16 | 84 | 2021-06-23 11:22:44 | UIIA | ptchatt | 0.377613 | 0.023884 | ||
17 | 78 | 2021-06-17 04:27:03 | UIIA | ptchatt | 0.366392 | 0.004678 | ||
18 | 92 | 2021-07-29 02:18:42 | UIIA | test | 0.363539 | 0.00117 | ||
19 | 88 | 2021-07-07 17:41:19 | UIIA | test | 0.359211 | 0.026901 |
RANK | SUBMISSION ID | SUBMISSION DATE | TEAM NAME | SYSTEM NAME | AUC | CDR@0.05FAR | ROC CURVE | AVERAGE OPTIMAL MCC |
---|---|---|---|---|---|---|---|---|
1 | 133 | 2024-06-13 11:44:37 | CERTH-ITI-MEVER | video_df_gan_detection_final | 0.817059 | 0.6 | ||
2 | 132 | 2024-06-13 11:23:09 | CERTH-ITI-MEVER | video_df_gan_detection | 0.813382 | 0.6 | ||
3 | 131 | 2022-11-15 16:45:18 | UTC 2022 | utcDDv1 | 0.512647 | 0.04 |
RANK | SUBMISSION ID | SUBMISSION DATE | TEAM NAME | SYSTEM NAME | AUC | CDR@0.05FAR | ROC CURVE | AVERAGE OPTIMAL MCC |
---|---|---|---|---|---|---|---|---|
1 | 129 | 2022-09-04 18:20:22 | UIIA | Testing | 0.525687 | 0.0175 | ||
2 | 128 | 2022-09-03 16:56:24 | UIIA | Testing | 0.524344 | 0.01 | ||
3 | 127 | 2022-09-02 17:01:31 | UIIA | Testing | 0.507594 | 0.0675 | ||
4 | 130 | 2022-09-05 04:51:33 | UIIA | Testing | 0.486156 | 0.05 |
Documenting each system is vital to interpreting evaluation results. As such, each submitted system, identified by a unique submission identifier, must be accompanied by the Submission Identifier(s), System Description, OptOut Criteria, System Hardware Description and Runtime Computation, Training Data and Knowledge Sources, and References.
Using the <SubID> (a team-defined label for the system submission, without spaces or special characters), all system output submissions must be formatted according to the following directory structure:
<SubID>/
  <SubID>.txt              The system description file, described in Appendix A-a
  <SubID>.csv              The system output file
  /mask                    The system output mask directory
    {MaskFileName1}.png    The system output mask file
As an example, if the team is submitting <SubID> baseline_3, their directory would be:
baseline_3/
  baseline_3.txt
  baseline_3.csv
  /mask
Next, build a zip or tar file of your submission and post the file on a web-accessible URL that does not require user/password credentials.
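For illustration, a minimal sketch of packaging the example baseline_3 submission with the Python standard library, assuming the description file, output CSV, and mask directory have already been produced in the structure shown above; the resulting archive would then be posted at a web-accessible URL.

```python
# Minimal sketch: packaging a submission directory into a tar.gz archive.
import shutil
from pathlib import Path

sub_id = "baseline_3"
root = Path(sub_id)

# Verify the expected directory structure before archiving.
assert (root / f"{sub_id}.txt").is_file(), "missing system description file"
assert (root / f"{sub_id}.csv").is_file(), "missing system output file"
assert (root / "mask").is_dir(), "missing mask directory"

# Create baseline_3.tar.gz containing the baseline_3/ directory
# (use format="zip" for a .zip archive instead).
archive_path = shutil.make_archive(sub_id, "gztar", root_dir=".", base_dir=sub_id)
print(f"Submission archive written to {archive_path}")
```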
Make your submission using the OpenMFC website. To do so, follow these steps: