A Prompt-based Approach to Adversarial Example Generation and Robustness Enhancement
Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin and Feifei Ma
Frontiers of Computer Science 2024
Among the first to propose that prompts can be maliciously constructed to raise robustness issues in pre-trained language models; this later became a popular research topic (LLM alignment) with the rise of large models and prompt learning.
|
Quantifying Robustness to Adversarial Word Substitutions
Yuting Yang, Pei Huang, Feifei Ma, Juan Cao, Jintao Li and Jian Zhang
ECML-PKDD 2023
Proposed “weak robustness” to evaluate a DNN’s capability to resist perturbations and established the concept of being “sufficiently robust”. Proposed a formal framework for evaluating the word-level robustness of NLP models, including estimating bounds for robust regions and quantifying robustness outside them.
|
Word Level Robustness Enhancement: Fight Perturbation with Perturbation
Pei Huang*, Yuting Yang*, Fuqi Jia, Minghao Liu, Feifei Ma, and Jian Zhang (*Co-First Author)
AAAI 2022
Proposed a model-agnostic method for enhancing the word-level robustness of deep NLP models. Via input perturbation, the method significantly decreases the rate of successful adversarial perturbations while largely preserving generalization.
|
ε-weakened Robustness of Deep Neural Networks
Pei Huang*, Yuting Yang*, Minghao Liu, Fuqi Jia, Feifei Ma, and Jian Zhang (*Co-First Author)
ISSTA 2022
Introduced the notion of ε-weakened robustness for analyzing the reliability and related quality issues of deep neural networks.
|
Spring 2018
|
Teaching Assistant,
Multimedia Technology, UCAS
|
Reviewer/PC member
|
AAAI 2022-2023, ACL Rolling Review (2021-2022), WWW 2022, EAAI 2022, KDD 2023
|
Conference Volunteer
|
ICDM 2019
|
2020
|
Merit Student, UCAS
|
2017-2022
|
Academic Scholarship, UCAS
|
2014-2015
|
National Undergraduate Scholarship, Ministry of Education of China
|