Yuting Yang, PhD

📧 yuting.yang@childrens.harvard.edu

I'm a postdoctoral research fellow in the Computational Health Informatics Program at Boston Children's Hospital (BCH), Harvard Medical School (HMS), under the guidance of Timothy Miller, PhD, and William La Cava, PhD. My current research focuses on the development of foundation models for health. Specifically, I am working on multi-modal foundation models for cardiology medical records. I'm also affiliated with the Congenital Heart Artificial Intelligence (CHAI) Lab at BCH, where I work closely with Joshua Mayourian, MD, PhD, on building AI tools specifically for pediatric and congenital heart care.

Harvard Website    /    Google Scholar       

Education
2017-2023 University of Chinese Academy of Sciences (UCAS)
PhD in Computer Science and Technology
Trustworthy AI, Dialogue Systems
Supervisors: Jintao Li and Juan Cao
2021-2022 National University of Singapore (NUS)
Visiting Research Scholar in NExT++ Research Center
Supervisor: Tat-Seng Chua
2013-2017 Jilin University (JLU)
BS in Computer Science and Technology
School of Computer Science and Technology
Selected Publications
Interpreting Deep Neural Networks via Relative Activation-Deactivation Abstractions
Zhen Zhang, Peng Wu, Yuting Yang and Xuran Li.
TOSEM 2025
Proposed a relative activation–deactivation abstraction approach to characterize the decision logic of a deep learning model.
PAD: A Robustness Enhancement Ensemble Method via Promoting Attention Diversity
Yuting Yang, Pei Huang, Feifei Ma, Juan Cao and Jintao Li.
LREC-COLING 2024
Proposed attention-based diversity to promote model diversity in ensembles; the method consistently improves general robustness against various types of adversarial perturbations and offers good interpretability.
Towards Efficient Verification of Quantized Neural Networks
Pei Huang, Haoze Wu, Yuting Yang, Ieva Daukantas, Min Wu, Yedi Zhang and Clark Barrett
AAAI 2024
Proposed an efficient framework for formally verifying quantized neural networks.
A Prompt-based Approach to Adversarial Example Generation and Robustness Enhancement
Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin and Feifei Ma
Frontiers of Computer Science 2024
Among the first to show that prompts can be maliciously constructed to raise robustness issues in pre-trained language models, a problem that later became a popular research topic (LLM alignment) with the rise of large models and prompt learning.
A Dual Prompt Learning Framework for Few-Shot Dialogue State Tracking
Yuting Yang, Wenqiang Lei, Pei Huang, Juan Cao, Jintao Li and Tat-Seng Chua
WWW 2023
Designed a dual prompt learning framework to improve the generation ability of pre-trained language models in few-shot scenarios.
Quantifying Robustness to Adversarial Word Substitutions
Yuting Yang, Pei Huang, Feifei Ma, Juan Cao, Jintao Li and Jian Zhang
ECML-PKDD 2023
Proposed "weak robustness" to evaluate a DNN's capability to resist perturbations and established the concept of "sufficiently robust". Proposed a formal framework for evaluating the word-level robustness of NLP models, including estimating bounds for robust regions and quantifying robustness outside those regions.
Word Level Robustness Enhancement: Fight Perturbation with Perturbation
Pei Huang*, Yuting Yang*, Fuqi Jia, Minghao Liu, Feifei Ma, and Jian Zhang (*Co-First Author)
AAAI 2022
Proposed a model-agnostic method for enhancing the word-level robustness of deep NLP models. Via input perturbation, the method significantly reduces the rate of successful adversarial perturbations while largely preserving generalization.
ε-weakened Robustness of Deep Neural Networks
Pei Huang*, Yuting Yang*, Minghao Liu, Fuqi Jia, Feifei Ma, and Jian Zhang (*Co-First Author)
ISSTA 2022
Introduced the notion of ε-weakened robustness for analyzing the reliability and related quality issues of deep neural networks.
Teaching Experience
2018 Spring Teaching Assistant, Multimedia Technology, UCAS
Academic Activities
Reviewer/PC member AAAI 2022-2023, ACL Rolling Review (2021-2022), WWW 2022, EAAI 2022, KDD 2023
Conference Volunteer ICDM 2019
Honors and Awards
2020 Merit Student, UCAS
2017-2023 Academic Scholarship, UCAS
2014-2015 National Undergraduate Scholarship, Ministry of Education of China