The first OpenMFC workshop (OpenMFC 2021) is held in conjunction with TRECVID 2021. The TRECVID/OpenMFC workshop is a virtual event organized by teams in the Information Access Division (IAD), Information Technology Laboratory (ITL), National Institute of Standards and Technology (NIST) on Dec. 7-10, 2021.
The rapid advance of artificial intelligence (AI) has given rise to emerging technologies such as deepfakes, CGI, and anti-forensics techniques, which significantly threaten the trustworthiness of media content. To detect inadvertent misinformation or deliberate deception via disinformation, and to ensure digital content trust and authentication, the NIST OpenMFC team provides public researchers with a comprehensive evaluation platform for developing technologies that detect inauthentic imagery and retrieve digital content provenance. OpenMFC designed a set of evaluation tasks and released a series of media forensics datasets to support the OpenMFC evaluation as well as other related research. The evaluation examines systems' accuracy and robustness over diverse datasets. Participants can visualize their system performance on an online leaderboard evaluation platform. NIST will give an overview of this year's OpenMFC results and the plans for next year's OpenMFC evaluation.
All times/dates below are in Eastern Standard Time (EST).
|Workshop Date|Dec. 7-10, 2021|
|Slide Submission Deadline|Dec. 31, 2021|
|Website Update Notification to Authors|Jan. 31, 2022|
|Website Finalization|Feb. 28, 2022|
The OpenMFC 2021 Workshop will be held in conjunction with TRECVID 2021 this year. The TRECVID 2021 workshop (registration link, website link) will be held in the first session, starting at 7:00 am EST, and the OpenMFC workshop will be held in the second session, ending no later than 2:00 pm EST. Please refer to the TRECVID 2021 agenda and the OpenMFC 2021 agenda for detailed information. We would like to invite you to both workshops with a single registration process.
We solicit presentations and/or papers for OpenMFC 2021. Presentations are by invitation only. If you would like to present your insightful vision or exciting achievements in the media forensics field to OpenMFC participants and the public research community at this workshop, please contact us. Peer-reviewed, accepted papers are planned for publication on the OpenMFC website.
Open Media Forensics Challenge Evaluation (OpenMFC) (https://mfc.nist.gov/) is an open evaluation series organized by the National Institute of Standards and Technology (NIST) to assess and measure the capability of media forensic technology. Our primary goal is to support public researchers with benchmark datasets and a web-based leaderboard platform to promote media forensics research worldwide.
This workshop is organized under the OpenMFC program, which aims to bring together all stakeholders in the media forensics field to help advance technologies in media manipulation detection, GAN/deepfake detection, steganographic image detection and analysis, and related areas.
We are inviting the OpenMFC participants and the public research community to give a presentation and/or submit research papers on the following topics:
Although the workshop is designed for active participant teams in the OpenMFC program, it is open to the public. We highly encourage any researcher in academia, industry, or government to attend.
Please register for the TRECVID 2021 and OpenMFC 2021 workshops using the external link (a single registration covers both workshops): OpenMFC/TRECVID Register
Abstract: Detecting images that have been digitally manipulated is a challenge. We must rely both on methods that target specific manipulations and on statistical and holistic methods that do not target a specific type of manipulation. This talk will address both of these categories and show how these methods can be utilized in detecting doctored images.
Speaker Profile: Dr. Lakshmanan Nataraj is a Principal R&D Engineer at Trimble Inc., in Chennai, India, working in the areas of Computer Vision and Artificial Intelligence. Prior to joining Trimble, Dr. Nataraj was a Senior Research Scientist at Mayachitra Inc., Santa Barbara, where he led R&D projects in Media Forensics and Cyber Security. He was a Co-Principal Investigator (Co-PI) in DARPA's Media Forensics (MediFor) program as part of Mayachitra's team. During the course of the MediFor program, Dr. Nataraj co-authored several breakthrough publications in Media Forensics and led the team in obtaining top scores during the annual NIST Media Forensics Challenge (MFC) evaluations. Prior to joining Mayachitra, Dr. Nataraj obtained his Ph.D. from the University of California, Santa Barbara in 2015.
Abstract: High-quality AI-created digital impersonations, known as deepfakes, have become a serious problem since 2017 and could irreparably damage public trust in video content. Several publicly available generation tools and datasets have been proposed, promoting a variety of detection methods. Beyond detection, it is also important to determine the specific generation model behind a fake video, which can help attribute it to its source for forensic investigation. Dr. Jia will focus on this topic by investigating whether, and to what extent, different manipulation tools leave distinguishable traces in deepfake videos.
Speaker Profile: Dr. Shan Jia is currently a postdoctoral researcher at the Department of Computer Science and Engineering of the University at Buffalo, State University of New York, working with Professor Siwei Lyu. Before joining UB, she was a visiting scholar at West Virginia University, working with Professors Xin Li and Guodong Guo. She received her Ph.D. degree in Communication and Information System from Wuhan University in 2021. Dr. Jia's research focuses mainly on multimedia forensics, biometrics, and computer vision.
Abstract: Recent years have witnessed an unexpected and astonishing rise of AI-synthesized fake media, thanks to the rapid advancement of technology and the omnipresence of social media. Together with other forms of online disinformation, the AI-synthesized fake media are eroding our trust in online information and have already caused real damage. It is thus important to develop countermeasures to limit the negative impacts of AI-synthesized fake media. In this presentation, Dr. Lyu will highlight recent technical developments to fight AI-synthesized fake media, and discuss the future of AI-synthesized fake media and their counter technology.
Speaker Profile: Siwei Lyu is an Empire Innovation Professor at the Department of Computer Science and Engineering and the founding Director of the UB Media Forensic Lab (UB MDFL) of the University at Buffalo, State University of New York. Before joining UB, Dr. Lyu was an Assistant Professor from 2008 to 2014, a tenured Associate Professor from 2014 to 2019, and a Full Professor from 2019 to 2020, at the Department of Computer Science, University at Albany, State University of New York. From 2005 to 2008, he was a Post-Doctoral Research Associate at the Howard Hughes Medical Institute and the Center for Neural Science of New York University. He was an Assistant Researcher at Microsoft Research Asia (then Microsoft Research China) in 2001. Dr. Lyu received his Ph.D. degree in Computer Science from Dartmouth College in 2005, and his M.S. degree in Computer Science (2000) and B.S. degree in Information Science (1997) from Peking University, China. Dr. Lyu's research interests include digital media forensics, computer vision, and machine learning. Dr. Lyu has published over 150 refereed journal and conference papers. He is the recipient of the IEEE Signal Processing Society Best Paper Award (2011), the National Science Foundation CAREER Award (2010), SUNY Albany's Presidential Award for Excellence in Research and Creative Activities (2017), the SUNY Chancellor's Award for Excellence in Research and Creative Activities (2018), and a Google Faculty Research Award (2019).
Speaker Profile: Wendy Dinova-Wimmer, Sr. Digital Media Architect. Wendy works in Adobe’s Office of the Public Sector CTO Office supporting government customers with all things digital media. Prior to joining Adobe, Wendy spent thirty years in the government, first as a graphic designer and then visualizing scientific analysis with 3D animation. After 9/11, Wendy dove into multimedia forensic analysis specializing in media authentication. Wendy contributes to forensic image standards with the Organization of Scientific Area Committees for Forensic Sciences (OSAC) and ASTM standards organization. Wendy works as a Trusted Advisor with the Content Authenticity Initiative (CAI).
Speaker Profile: Dr. Matthew C. Stamm is an Associate Professor in the Department of Electrical and Computer Engineering at Drexel University. He leads the Multimedia and Information Security Lab.
Dr. Stamm's research focuses on an emerging area of information security known as information forensics. Additionally, he develops and studies anti-forensic countermeasures that an information attacker can use to disguise their forgeries. His research has been funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), and the Defense Forensics and Biometrics Agency (DFBA).
Dr. Stamm is the recipient of a 2016 National Science Foundation CAREER Award and the 2017 Drexel University College of Engineering Outstanding Early-Career Research Achievement Award. He was the General Chair of the 2017 ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec) and the lead organizer of the IEEE Signal Processing Society's 2018 Signal Processing Cup competition. He serves as an elected member of the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society, as a member of the Editorial Board of SigPort (the IEEE Signal Processing Society's online repository of manuscripts, technical white papers, databases, and supporting materials), and regularly serves as a reviewer or technical program committee member of several major journals and conferences in signal processing and multimedia security.
Dr. Stamm earned his B.S., M.S., and Ph.D. degrees from the University of Maryland, College Park. He was named the first-place winner of the Dean's Doctoral Research Award from the A. James Clark School of Engineering. While at the University of Maryland, he was also the recipient of the Ann G. Wylie Dissertation Fellowship, a Clark School of Engineering Future Faculty Fellowship, and a Distinguished Teaching Assistant Award. Prior to beginning his graduate studies, he worked as an engineer at the Johns Hopkins University Applied Physics Lab.
Abstract: In this talk, I will briefly review the recent development of deep generative models and their applications to deepfake generation. In addition, I will cover the recent development of countermeasures to deepfakes, including various detection algorithms, digital watermarking, and adversarial perturbation. Finally, to conclude the talk, I will introduce my group's recent work on image forgery detection and on adversarial attack and defense for object detection.
Speaker Profile: Jun-Cheng Chen is an assistant research fellow at the Research Center for Information Technology Innovation, Academia Sinica. He received his Ph.D. degree in Computer Science from the University of Maryland, College Park, USA, in 2016, advised by Prof. Rama Chellappa. From 2017 to 2019, he was a postdoctoral research fellow at the University of Maryland Institute for Advanced Computer Studies. His research interests include computer vision, machine learning, deep learning, and their applications to biometrics, such as face recognition/facial analytics and activity recognition/detection in the visual surveillance domain. He was a recipient of the ACM Multimedia Best Technical Full Paper Award in 2006.
Abstract: Humans have sent secret messages for millennia. A cousin to cryptography, steganography is the art and science of sending a secret message in the open by camouflaging the message carefully. Steganography can take many shapes, and its digital form often uses a digital image or video as a cover to hide the message. With a smartphone app, image steganography is easy to use, requires no expert knowledge of the science, and can be difficult to detect. To study mobile steganography properly, one must have a suitable database. This talk presents StegoAppDB, a database of digital photographs expressly created for studying mobile steganography, which will be used in NIST's Open Media Forensics Challenge.
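To illustrate the basic idea of hiding a message in an image cover (a minimal sketch only; this is not StegoAppDB's method, and real stego apps use far more sophisticated, detection-resistant embedding schemes), a classic least-significant-bit (LSB) embedding over 8-bit pixel values might look like:

```python
# Minimal LSB steganography sketch: hide message bytes in the lowest
# bit of each pixel value. Illustrative only; changing only the LSB
# alters each pixel by at most 1, which is visually imperceptible
# but statistically detectable by steganalysis.

def embed(pixels, message):
    """Return a copy of pixels with message bits written into the LSBs."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite the lowest bit
    return stego

def extract(pixels, n_bytes):
    """Recover n_bytes previously hidden by embed()."""
    bits = [p & 1 for p in pixels[: 8 * n_bytes]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )

cover = [128] * 64            # toy 8x8 grayscale "image"
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

In practice the pixel list would come from a real image (e.g. via an imaging library), and modern embedding schemes spread and encode the payload to resist exactly the kind of statistical steganalysis that datasets like StegoAppDB are built to support.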
Speaker Profile: Prof. Jennifer L. Newman is the Scott Hanna Faculty Fellow in Mathematics in the Department of Mathematics, Iowa State University. Her general research interests are in applying discrete mathematics to image processing problems. Her current research is in forensic steganalysis, developing a standardized steganalysis dataset. Her past research has included the use of image algebra, genetic algorithms, artificial neural networks, stochastic processes, and optimization algorithms in areas such as image texture modeling for synthesis and classification, and image analysis (boundary detection, object recognition, and creating steganalysis feature sets). Her work is multidisciplinary, covering a broad range of topics from computer science, electrical engineering, mathematics, statistics, and machine learning. She has taught many courses in mathematics, signal processing, image processing, digital image forensics, and related areas.
Speaker Profile: Dr. Li Lin received his B.S. degree in Mathematics from Capital Normal University, Beijing, China, and his Ph.D. degree in Applied Mathematics from Iowa State University. He has been working as a Postdoctoral Research Associate at the Center for Statistics and Applications in Forensic Evidence (CSAFE) on the StegoAppDB project. His other research interests include statistical image forensics, steganalysis, and statistical learning.