Tue 10 Nov 2020 01:30 - 01:32 at Virtual room 1 - ML Testing 1

Inspired by the great success of using code coverage as guidance in software testing, many neural network coverage criteria have been proposed to guide the testing of neural network models with respect to model quality (e.g., model accuracy under adversarial attacks). However, while the monotonic relation between code coverage and software quality has been supported by many seminal studies in software engineering, it remains largely unclear whether a similar monotonicity exists between neural network model coverage and model quality. This paper sets out to answer this question. Specifically, it studies the correlation between DNN model quality and coverage criteria, the effects of coverage-guided adversarial example generation compared with gradient-descent-based methods, the effectiveness of coverage-based retraining compared with existing adversarial training, and the internal relationships among coverage criteria.
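For readers unfamiliar with the two families of techniques the study compares, the sketch below illustrates a simple neuron-coverage criterion and a gradient-descent-based adversarial example (FGSM) on a toy PyTorch model. The architecture, the 0.5 activation threshold, and the random inputs are placeholder assumptions for illustration only, not the setup used in the paper.

```python
# Illustrative sketch only: a toy feed-forward model, a simple neuron-coverage
# metric, and an FGSM-style adversarial example. The architecture, the 0.5
# activation threshold, and the random data are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

def neuron_coverage(net, inputs, threshold=0.5):
    """Fraction of hidden ReLU outputs that exceed `threshold` on `inputs`."""
    activations = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: activations.append(out))
             for m in net if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        net(inputs)
    for h in hooks:
        h.remove()
    acts = torch.cat([a.flatten(1) for a in activations], dim=1)
    covered = (acts > threshold).any(dim=0)  # a neuron counts as covered if any input activates it
    return covered.float().mean().item()

def fgsm(net, x, label, eps=0.1):
    """One-step gradient-based (FGSM) adversarial perturbation of `x`."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(net(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.rand(8, 28 * 28)            # placeholder batch of "images"
y = torch.randint(0, 10, (8,))        # placeholder labels
print("coverage on clean batch:", neuron_coverage(model, x))
print("coverage on FGSM batch: ", neuron_coverage(model, fgsm(model, x, y)))
```

In coverage-guided testing, inputs are selected or mutated to raise a coverage score such as the one above, whereas a gradient-descent-based method like FGSM perturbs each input directly along the loss gradient.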

Tue 10 Nov

Displayed time zone: (UTC) Coordinated Universal Time

01:30 - 02:00
01:30 (2m) Talk: Correlations between Deep Neural Network Model Coverage Criteria and Model Quality (Research Papers)
Shenao Yan (Rutgers University, USA), Guanhong Tao (Purdue University, USA), Xuwei Liu (Purdue University, USA), Juan Zhai (Rutgers University, USA), Shiqing Ma (Rutgers University, USA), Lei Xu (Nanjing University, China), Xiangyu Zhang (Purdue University)
01:33 (1m) Talk: Deep Learning Library Testing via Effective Model Generation (Research Papers; ACM SIGSOFT Distinguished Paper Award)
Zan Wang (Tianjin University, China), Ming Yan (Tianjin University, China), Junjie Chen (Tianjin University, China), Shuang Liu (Tianjin University, China), Dongdi Zhang (Tianjin University, China)
01:35 (1m) Talk: Detecting Numerical Bugs in Neural Network Architectures (Research Papers; ACM SIGSOFT Distinguished Paper Award)
Yuhao Zhang (Peking University), Luyao Ren (Peking University, China), Liqian Chen (National University of Defense Technology, China), Yingfei Xiong (Peking University), Shing-Chi Cheung (Hong Kong University of Science and Technology, China), Tao Xie (Peking University)
01:37 (1m) Talk: Dynamic Slicing for Deep Neural Networks (Research Papers)
Ziqi Zhang (Peking University, China), Yuanchun Li (Microsoft Research, China), Yao Guo (Peking University), Xiangqun Chen (Peking University), Yunxin Liu (Microsoft Research, China)
01:39 (1m) Talk: Grammar Based Directed Testing of Machine Learning Systems (Journal First)
Sakshi Udeshi (Singapore University of Technology and Design), Sudipta Chattopadhyay (Singapore University of Technology and Design)
01:41 (1m) Talk: Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks? (Research Papers)
Fabrice Harel-Canada (University of California at Los Angeles, USA), Lingxiao Wang (University of California at Los Angeles, USA), Muhammad Ali Gulzar (University of California at Los Angeles, USA), Quanquan Gu (University of California at Los Angeles, USA), Miryung Kim (University of California at Los Angeles, USA)
01:43 (1m) Talk: Operational Calibration: Debugging Confidence Errors for DNNs in the Field (Research Papers)
Zenan Li (Nanjing University, China), Xiaoxing Ma (Nanjing University, China), Chang Xu (Nanjing University, China), Jingwei Xu (Nanjing University, China), Chun Cao (Nanjing University, China), Jian Lu (Nanjing University, China)
01:45 (15m) Talk: Conversations on ML Testing 1 (Research Papers)
Fabrice Harel-Canada (University of California at Los Angeles, USA), Ming Yan (Tianjin University, China), Sakshi Udeshi (Singapore University of Technology and Design), Shenao Yan (Rutgers University, USA), Yuhao Zhang (Peking University), Zenan Li (Nanjing University, China), Ziqi Zhang (Peking University, China), M: Hamid Bagheri (University of Nebraska-Lincoln, USA)