Important Dates

Timeline

ArtofRobust Workshop Schedule

Event                             Start time   End time
Opening Remarks                   8:50         9:00
Invited talk: Yang Liu            9:00         9:30
Invited talk: Quanshi Zhang       9:30         10:00
Invited talk: Baoyuan Wu          10:00        10:30
Invited talk: Aleksander Mądry    10:30        11:00
Invited talk: Bo Li               11:00        11:30
Poster Session                    11:30        12:30
Lunch                             12:30        13:30
Oral Session                      13:30        14:10
Challenge Session                 14:10        14:30
Invited talk: Nicholas Carlini    14:30        15:00
Invited talk: Judy Hoffman        15:00        15:30
Invited talk: Alan Yuille         15:30        16:00
Invited talk: Ludwig Schmidt      16:00        16:30
Invited talk: Cihang Xie          16:30        17:00
Join our workshop on June 19, 2022 (9:00-17:30), New Orleans time zone (UTC/GMT -5).

Call for Papers

The recent success of deep learning in AI has attracted great attention from academia. However, research shows that deep models raise serious security and safety concerns in real-world applications. This workshop focuses on discovering and harnessing both the positive and negative aspects of adversarial machine learning, especially in computer vision. We welcome research contributions related to, but not limited to, the following topics:
  • Adversarial attacks against computer vision tasks
  • Improving the robustness of deep learning systems
  • Interpreting and understanding model robustness
  • Adversarial attacks for social good
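
For readers new to the area, the short sketch below illustrates the first topic with the classic Fast Gradient Sign Method (FGSM). It is a minimal, hypothetical PyTorch example for illustration only; the model, inputs, labels, and epsilon value are assumed placeholders, not workshop material.

    # Minimal FGSM sketch (illustrative only; model, images, labels, and
    # epsilon = 8/255 are placeholder assumptions, not workshop code).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=8 / 255):
        # Copy the inputs and track gradients with respect to the pixels.
        x_adv = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        loss.backward()
        # Take one signed gradient step, then clamp to the valid [0, 1] range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Iterative variants such as PGD repeat this signed step with a smaller step size, and defenses under the second topic are commonly evaluated against them.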

Paper Submission

Format: Submissions (.pdf format) must use the CVPR 2022 Author Kit for LaTeX/Word (zip file), be anonymized, and follow the CVPR 2022 author instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 6 pages excluding references; (2) Extended Abstract: limited to 4 pages including references.
Excellent papers will be invited to a Special Issue of the Pattern Recognition journal for publication consideration (TBD).

Submission Site: https://cmt3.research.microsoft.com/artofrobust2022/
Submission Due (deadline extended): March 22, 2022, Anywhere on Earth (AoE)


Accepted Long Papers

  • Privacy Leakage of Adversarial Training Models in Federated Learning Systems [Paper] oral presentation
    Jingyang Zhang (Duke University)*; Yiran Chen (Duke University); Hai Li (Duke University)
  • Increasing Confidence in Adversarial Robustness Evaluations [Paper] oral presentation
    Roland S. Zimmermann (University of Tuebingen)*; Wieland Brendel (University of Tübingen); Florian Tramer (Google); Nicholas Carlini (Google)
  • The Risk and Opportunity of Adversarial Example in Military Field [Paper]
    Yuwei Chen (Chinese Aeronautical Establishment)*
  • Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning [Paper]
    Jun Guo (Beihang University)*; Yonghong Chen (Yangzhou Collaborative Innovation Research Institute CO., LTD); Yihang Hao (Yangzhou Collaborative Innovation Research Institute CO., LTD); Zixin Yin (Beihang University); Yin Yu (No. 38 Research Institute of CETC, Hefei 230088, China); Simin Li (Beihang University)
  • Robustness and Adaptation to Hidden Factors of Variation [Paper]
    William Paul (JHU/APL)*; Philippe Burlina (JHU/APL/CS/SOM)
  • PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos [Paper]
    Nupur Thakur (Arizona State University)*; Baoxin Li (Arizona State University)
  • Adversarial Robustness through the Lens of Convolutional Filters [Paper]
    Paul Gavrikov (Offenburg University)*; Janis Keuper (Offenburg University)
  • Strengthening the Transferability of Adversarial Examples Using Advanced Looking Ahead and Self-CutMix [Paper][Supp]
    Donggon Jang (KAIST)*; Sanghyeok Son (KAIST); Daeshik Kim (KAIST)
  • AugLy: Data Augmentations for Adversarial Robustness [Paper]
    Zoë Papakipos (Meta AI)*; Joanna Bitton (Facebook AI)
  • RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [Paper]
    Umar Khalid (University of Central Florida)*; Ashkan Esmaeili (University of Central Florida); Nazmul Karim (University of Central Florida); Nazanin Rahnavard (University of Central Florida)
  • An Empirical Study of Data-Free Quantization's Tuning Robustness [Paper]
    Hong Chen (Beihang University); Yuxan Wen (Beihang University); Yifu Ding (Beihang University); Zhen Yang (Shanghai Aerospace Electronic Technology Institute); Yufei Guo (The Second Academy of China Aerospace Science and Industry Corporation); Haotong Qin (Beihang University)*
  • Exploring Robustness Connection between Artificial and Natural Adversarial Examples [Paper]
    Akshay Agarwal (University at Buffalo)*; Nalini Ratha (SUNY Buffalo); Mayank Vatsa (IIT Jodhpur); Richa Singh (IIT Jodhpur)
  • Generalizing Adversarial Explanations with Grad-CAM [Paper]
    Tanmay Chakraborty (EURECOM)*; Utkarsh Trehan (EURECOM); Khawla Mallat (Eurecom); Jean-Luc Dugelay (France)
  • CorrGAN: Input Transformation Technique Against Natural Corruptions [Paper]
    Mirazul Haque (University of Texas at Dallas)*; Christof J Budnik (Siemens); Wei Yang (University of Texas at Dallas)
  • Poisons that are learned faster are more effective [Paper]
    Pedro Sandoval-Segura (University of Maryland at College Park)*; Vasu Singla (University Of Maryland); Liam Fowl (University of Maryland); Jonas Geiping (University of Maryland); Micah Goldblum (New York University); David Jacobs (University of Maryland); Tom Goldstein (University of Maryland, College Park)
  • Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems [Paper]
    Furkan Mumcu (University of South Florida); Keval Doshi (University of South Florida)*; Yasin Yilmaz (University of South Florida)

Accepted Extended Abstracts

  • Gradient Obfuscation Checklist Test Gives a False Sense of Security [Paper] oral presentation
    Nikola Popović (ETH Zürich)*; Danda Pani Paudel (ETH Zürich); Thomas Probst (ETH Zurich); Luc Van Gool (ETH Zurich)
  • Test-time Adaptation of Residual Blocks against Poisoning and Backdoor Attacks [Paper][Supp] oral presentation
    Arnav Gudibande (UC Berkeley)*; Xinyun Chen (UC Berkeley); Yang Bai (Tsinghua); Jason Xiong (University of California, Berkeley); Dawn Song (UC Berkeley)
  • Understanding CLIP Robustness [Paper]
    Yuri Galindo (Federal University of Sao Paulo)*; Fabio Faria (Federal University of Sao Paulo)
  • On Fragile Features and Batch Normalization in Adversarial Training [Paper]
    Nils Philipp Walter (Max Planck Institute for Informatics)*; David Stutz (Max Planck Institute for Informatics); Bernt Schiele (MPI Informatics)
  • Sparse Visual Counterfactual Explanations in Image Space [Paper]
    Valentyn Boreiko (University of Tübingen)*; Maximilian Augustin (University of Tuebingen); Francesco Croce (University of Tübingen); Philipp Berens (University of Tübingen); Matthias Hein (University of Tübingen)
  • Efficient and Effective Augmentation Strategy for Adversarial Training [Paper]
    Sravanti Addepalli (Indian Institute of Science)*; Samyak Jain (Indian Institute of Technology (BHU), Varanasi); Venkatesh Babu RADHAKRISHNAN (Indian Institute of Science)
  • Towards Data-Free Model Stealing in a Hard Label Setting [Paper]
    Sunandini Sanyal (Indian Institute of Science, Bengaluru)*; Sravanti Addepalli (Indian Institute of Science); Venkatesh Babu RADHAKRISHNAN (Indian Institute of Science)
  • Transferability of ImageNet Robustness to Downstream Tasks [Paper]
    Yutaro Yamada (Yale University)*; Mayu Otani (CyberAgent)

Sponsors