Ziyuan Zhong

I am a third-year PhD student in the Department of Computer Science at Columbia University, advised by Prof. Baishakhi Ray. Previously, I completed my undergraduate studies at Reed College and Columbia University.

Email  /  Google Scholar  /  GitHub


My current research focuses on testing and improving Autonomous Driving Systems (ADSs) and the robustness of deep learning models. In particular, one central theme is how to efficiently identify safety issues of ADSs before their mass deployment. Previously, I also worked on fairness in machine learning. Representative papers are highlighted.

Automatic Map Generation for Autonomous Driving System Testing
Yun Tang, Yuan Zhou, Kairui Yang, Ziyuan Zhong, Baishakhi Ray, Yang Liu, Ping Zhang, Junbo Chen
arXiv, 2022  
paper / bibtex

We propose a method that automatically generates a more concise map from a given complex map, reducing redundant test cases.

Repairing Group-Level Errors for DNNs Using Weighted Regularization
Ziyuan Zhong*, Yuchi Tian*, Conor J Sweeney, Vicente Ordonez, Baishakhi Ray
arXiv, 2022  
paper / code / bibtex

A series of methods based on weighted regularization for repairing group-level errors of DNNs.

Detecting Multi-Sensor Fusion Errors in Advanced Driver-Assistance Systems
Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Baishakhi Ray
ISSTA, 2022  
paper / code / bibtex

FusED can efficiently identify multi-sensor fusion errors in Advanced Driver-Assistance Systems.

A Survey on Scenario-Based Testing for Automated Driving Systems in High-Fidelity Simulation
Ziyuan Zhong, Yun Tang, Yuan Zhou, Vania de Oliveira Neves, Yang Liu, Baishakhi Ray
arXiv, 2021  
paper / bibtex

A comprehensive survey of scenario-based testing techniques for automated driving systems in high-fidelity simulation.

Neural Network Guided Evolutionary Fuzzing for Finding Traffic Violations of Autonomous Vehicles
Ziyuan Zhong, Gail Kaiser, Baishakhi Ray
Transactions on Software Engineering (TSE), 2022 (to appear)  
paper / code / bibtex

AutoFuzz uses a grammar-based, learning-guided fuzzing technique to efficiently find violations of Autonomous Driving Systems.

Understanding local robustness of deep neural networks under natural variations
Ziyuan Zhong, Yuchi Tian, Baishakhi Ray
FASE, 2021  
paper / code / bibtex

DeepRobust can identify the input images whose small variations may lead to erroneous DNN behaviors.

Testing DNN image classifiers for confusion & bias errors
Yuchi Tian*, Ziyuan Zhong*, Vicente Ordonez, Gail Kaiser, Baishakhi Ray
ICSE, 2020  
paper / code / bibtex

We developed a testing technique, DeepInspect, to automatically detect class-based confusion and bias errors in DNN-driven image classification software.

Metric learning for adversarial robustness
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
NeurIPS, 2019  
paper / code / bibtex

We propose to regularize the representation space under adversarial attack with metric learning to produce more robust classifiers.

Noise-tolerant fair classification
Alexandre Lamy*, Ziyuan Zhong*, Aditya Krishna Menon, Nakul Verma
NeurIPS, 2019  
paper / code / bibtex

We showed, both theoretically and empirically, that even under the very general mutually contaminated (MC) learning noise model on the sensitive feature, fairness can still be preserved by scaling the input unfairness tolerance parameter.

Phasepack: A phase retrieval library
Rohan Chandra, Ziyuan Zhong, Justin Hontz, Val McCulloch, Christoph Studer, Tom Goldstein
Asilomar Conference on Signals, Systems, and Computers, 2017  
paper / code / bibtex

PhasePack is a collection of sub-routines for solving classical phase retrieval problems, containing implementations of both classical and contemporary phase retrieval algorithms.


A Tutorial on Fairness in Machine Learning


Reviewer: NeurIPS 2020-2022, ICLR 2021-2022, ICML 2021-2022, TOSEM (external)

Teaching Assistant: COMS 4771 Machine Learning (Summer 2018, Spring 2019), ELEN 4903 Machine Learning (edX) (Spring 2018)

Design and source code from Jon Barron's website