Yuting Yang

📧 yuting.yang@childrens.harvard.edu

I am currently a postdoctoral research fellow in the Computational Health Informatics Program (CHIP) at Boston Children’s Hospital, Harvard Medical School (HMS), under the supervision of Prof. Timothy Miller and Prof. William La Cava. I received my PhD from the University of Chinese Academy of Sciences (UCAS), supervised by Prof. Juan Cao and Prof. Jintao Li. I also spent one year as a visiting research scholar at NExT++, National University of Singapore (NUS), supervised by Prof. Tat-Seng Chua.

During my PhD, my research mainly focused on Trustworthy AI and Dialogue Systems. Recently, I have been intrigued by the diverse applications of foundation models across various fields (e.g., the clinic), and I am working on multi-modal foundation models for medical records.

Curriculum Vitae    /    Google Scholar       

Education
2017-2023 University of Chinese Academy of Sciences (UCAS)
Ph.D. in Computer Science and Technology
Supervisor: Jintao Li
2021-2022 National University of Singapore (NUS)
Visiting Research Scholar in NExT++ Center
Supervisor: Tat-Seng Chua
2013-2017 Jilin University (JLU)
B.S. in Computer Science and Technology, School of Computer Science and Technology
Selected Publications
PAD: A Robustness Enhancement Ensemble Method via Promoting Attention Diversity
Yuting Yang, Pei Huang, Feifei Ma, Juan Cao and Jintao Li.
LREC-COLING 2024
Proposed attention-based diversity to promote model diversity in ensembles, which consistently improves general robustness against various types of adversarial perturbations and offers good interpretability.
Towards Efficient Verification of Quantized Neural Networks
Pei Huang, Haoze Wu, Yuting Yang, Ieva Daukantas, Min Wu, Yedi Zhang and Clark Barrett
AAAI 2024
Proposed an efficient framework for formally verifying quantized neural networks.
A Prompt-based Approach to Adversarial Example Generation and Robustness Enhancement
Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin and Feifei Ma
Frontiers of Computer Science 2024
Among the first to propose that prompts can be maliciously constructed to raise robustness issues in pre-trained language models; this later became a popular research topic (LLM alignment) with the rise of large models and prompt learning.
A Dual Prompt Learning Framework for Few-Shot Dialogue State Tracking
Yuting Yang, Wenqiang Lei, Pei Huang, Juan Cao, Jintao Li and Tat-Seng Chua
WWW 2023
Designed a dual prompt learning framework to improve the generation ability of pre-trained language models in few-shot scenarios.
Quantifying Robustness to Adversarial Word Substitutions
Yuting Yang, Pei Huang, Feifei Ma, Juan Cao, Jintao Li and Jian Zhang
ECML-PKDD 2023
Proposed “weak robustness” to evaluate a DNN’s capability to resist perturbations and established the concept of being “sufficiently robust”. Proposed a formal framework to evaluate the word-level robustness of NLP models, including estimating bounds of robust regions and quantifying robustness outside those regions.
Word Level Robustness Enhancement: Fight Perturbation with Perturbation
Pei Huang*, Yuting Yang*, Fuqi Jia, Minghao Liu, Feifei Ma, and Jian Zhang (*Co-First Author)
AAAI 2022
Proposed a model-agnostic method for enhancing the word-level robustness of deep NLP models. Via input perturbation, the method significantly decreases the rate of successful adversarial perturbations while largely maintaining generalization.
ε-weakened Robustness of Deep Neural Networks
Pei Huang*, Yuting Yang*, Minghao Liu, Fuqi Jia, Feifei Ma, and Jian Zhang (*Co-First Author)
ISSTA 2022
Introduced a notion of ε-weakened robustness for analyzing the reliability and some related quality issues of deep neural networks.
Teaching Experience
2018 Spring Teaching Assistant, Multimedia Technology, UCAS
Academic Activities
Reviewer/PC member AAAI 2022-2023, ACL Rolling Review (2021-2022), WWW 2022, EAAI 2022, KDD 2023
Conference Volunteer ICDM 2019
Honors and Awards
2020 Merit Student, UCAS
2017-2022 Academic Scholarship, UCAS
2014-2015 National Undergraduate Scholarship, Ministry of Education of China