Prof. Chen Qiao
Xi’an Jiaotong University
Chen Qiao received the B.S. degree in computational mathematics in 1999, the M.S. degree in applied mathematics in 2002, and the Ph.D. degree in applied mathematics in 2009, all from Xi’an Jiaotong University, China. From 2014 to 2015, she was a postdoctoral researcher in the Department of Biomedical Engineering, Tulane University, USA. From November to December 2019, she was a research fellow in the School of Computer Science and Engineering, Nanyang Technological University. She is currently a full professor in the School of Mathematics and Statistics, Xi’an Jiaotong University. She serves as an executive director of the Shaanxi Society for Industry and Applied Mathematics and as the director of the brain science laboratory of Xi’an Jiaotong University SuZhou Academy. She has published one monograph and more than 40 academic papers on machine learning, deep learning, neural networks, and brain science. Her research is funded by the National Natural Science Foundation of China, the National Key Research and Development Program of China, the Science and Technology Innovation Plan of Xi’an, and several enterprise cooperation projects. Her current research interests include the mathematical foundations of information technology, artificial intelligence, and neuroimaging.
Speech Title: "Towards Building Deep Learning Models with Explainability and Their Applications in Neuroimaging"
Abstract: Although many deep learning models are performance-driven, i.e., accuracy-oriented, their explainability is equally important. Building explainable deep learning models helps us understand which input features matter most for a given task: often we want not only to make a prediction but also to understand the basis for that prediction. Explainability is especially crucial in neuroimaging, where we are often interested in identifying biomarkers underlying brain development or disorders. In this talk, I will introduce new explainable deep learning models proposed by our group and show how they are applied in dynamic brain functional connectivity analysis to identify differences within and between functional brain networks across time scales during development.
More speakers to be announced.