Jiakai Wang (王嘉凯)

I am a Research Scientist at ZGC Lab, Beijing, China. I received my Ph.D. in 2022 from Beihang University (Summa Cum Laude), supervised by Prof. Wei Li and Prof. Xianglong Liu. Before that, I obtained my B.Sc. degree in 2018 from Beihang University (Summa Cum Laude).

Email: jk_buss_scse@buaa.edu.cn

GitHub / LinkedIn / Google Scholar

Research

My research interest is Trustworthy AI in Computer Vision, covering physical adversarial example generation, adversarial defense, and robustness evaluation. I believe that physical adversarial attacks and defenses can powerfully promote the development of secure and robust artificial intelligence, leading to a healthier future society.

Now my research mainly includes:
  • Adversarial example generation
  • Defending against adversarial attacks in the physical world
  • 3D adversarial attacks
  • Model robustness evaluation and testing

News

[2022.08] One paper accepted by ACM CCS 2022.

[2022.07] One paper accepted by ACM MM 2022.

[2022.06] I received my Ph.D. degree!

[2022.03] One paper accepted by CVPR 2022.

Selected Publication
MM2022

Generating Transferable Adversarial Examples against Vision Transformers
Yuxuan Wang, Jiakai Wang*, Xiao Bai, Xianglong Liu (* indicates corresponding author)

ACM Multimedia (MM), 2022
pdf / Project page

We propose an architecture-oriented transferable attack that can adversarially attack various Vision Transformers.

CVPR 2022

Defensive Patches for Robust Recognition in the Physical World
Jiakai Wang, Zixin Yin, Pengfei Hu, Renshuai Tao, Haotong Qin, Xianglong Liu, Dacheng Tao, Aishan Liu.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
pdf / Project page

We generate defensive patches that help build recognition systems robust to real-world noise, simply by sticking the patches on the target object.

TIP2021

Universal Adversarial Patch Attack for Automatic Checkout using Perceptual and Attentional Bias
Jiakai Wang*, Aishan Liu*, Xiao Bai, Xianglong Liu

IEEE Transactions on Image Processing (TIP), 2021 (IF=10.86)
pdf / Project page

We propose a bias-based framework to generate universal adversarial patches with strong generalization ability, which exploits the perceptual bias and attentional bias to improve the attacking ability.

CVPR 2021

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021 (Oral)
pdf / News: (机器之心) / Project page

We propose the Dual Attention Suppression (DAS) attack to generate visually-natural physical adversarial camouflages with strong transferability by suppressing both model and human attention.

ECCV 2020

Bias-based Universal Adversarial Patch Attack for Automatic Check-out
Aishan Liu*, Jiakai Wang*, Xianglong Liu, Bowen Cao, Chongzhi Zhang, Hang Yu.
European Conference on Computer Vision (ECCV), 2020
pdf / News: (新智元) / Project page

We propose a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability, which exploits both the perceptual and semantic bias of models.

AIView

人工智能机器学习模型及系统的质量要素和测试方法 (Quality Elements and Testing Methods for AI Machine Learning Models and Systems)
Jiakai Wang (王嘉凯), Aishan Liu (刘艾杉), Xianglong Liu (刘祥龙)
信息技术与标准化 (Information Technology & Standardization), 2020
pdf

Highlight Project

重明 (AISafety)

pdf / (News: TechWeb) / Project page

重明 is an open-source platform for evaluating model robustness and safety under various noises (e.g., adversarial examples, corruptions, etc.). The name comes from 重明鸟, a bird in Chinese mythology with great power that could fight off beasts and ward off disasters. We hope our platform will improve the robustness of deep learning systems and help them avoid safety-related problems. 重明 was awarded the 首届OpenI启智社区优秀开源项目 (First OpenI Community Excellent Open Source Project).


RobustART

pdf / (News: 机器之心) / Project page

RobustART is the first comprehensive Robustness investigation benchmark on the large-scale ImageNet dataset regarding ARchitecture design (49 human-designed off-the-shelf architectures and 1200+ networks obtained via neural architecture search) and Training techniques (10+ general ones, e.g., extra training data) under diverse noises (adversarial, natural, and system noise). Our benchmark (including an open-source toolkit, a pre-trained model zoo, datasets, and analyses): (1) presents an open-source platform for comprehensive evaluation across diverse robustness types; (2) provides a variety of pre-trained models trained with different techniques to facilitate robustness evaluation; (3) proposes a new view, backed by our analysis, for better understanding how to design robust DNN architectures. We will continuously contribute to building this ecosystem for the community.

Academic Services

[2022] Invited reviewer for NeurIPS, ACM MM, ECCV, and CVPR.

[2022] Co-organizer of the International Workshop on The Art of Robustness: Devil and Angel in Adversarial Machine Learning at CVPR 2022.

[2021] Invited reviewer for ACM MM, Pattern Recognition, and IET Image Processing.

[2021] Co-organizer of the Forum on Safety and Privacy for Multimedia Systems at ChinaMM 2021.

Main Awards

[2022.06]    Outstanding Graduates of Beijing Province.

[2022.01]    Beihang University Exploring Scholarship.

[2021.10]    Beihang University Guorui Scholarship.

[2021.10]    Beihang University Merit Student.

[2021.06]    Beihang University First Prize Scholarship.

[2021.06]    Beihang University Excellent Academic Paper Award.

[2020.10]    Beihang University First Prize Scholarship.

[2020.09]    China National Scholarship (Top2%).

[2020.09]    Beihang University Merit Student.

[2019.10]    Beihang University First Prize Scholarship.

[2018.09]    Beihang University Outstanding Freshman Scholarship (1/12).

[2018.06]    Outstanding Graduates of Beijing Province.