The massive progress of machine learning has driven its adoption across a variety of domains in the past decade. But how do we develop a systematic, scalable, and modular strategy to validate machine-learning systems? We present, to the best of our knowledge, the first systematic test framework for machine-learning systems that accept grammar-based inputs. Our OGMA approach automatically discovers erroneous behaviours in classifiers and leverages these erroneous behaviours to improve the respective models. OGMA exploits the inherent robustness properties present in any well-trained machine-learning model to direct test generation, which makes the test-generation methodology scalable. To evaluate OGMA, we have tested it on three real-world natural language processing (NLP) classifiers and have found thousands of erroneous behaviours in these systems. We have also compared OGMA with a random test generation approach and observe that OGMA is more effective than random test generation by up to 489%.
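As a rough, self-contained illustration of what grammar-based, robustness-directed test generation can look like (not the authors' actual OGMA implementation), the Python sketch below derives sentences from a toy grammar, uses disagreement between two hypothetical classifier stubs as the error oracle, and, once an erroneous input is found, explores grammar-neighbouring derivations first on the assumption that nearby inputs are likely to expose related errors. The grammar, the classifier stubs, and the search budget are all illustrative assumptions.

```python
import random

# Toy grammar for sentences "<subj> <verb> <obj>"; purely illustrative. A real
# harness would use a grammar describing the input language of the classifiers
# under test.
SUBJ = ["the service", "the movie", "this product"]
VERB = ["was", "seemed", "felt"]
OBJ = ["great", "terrible", "fine", "disappointing"]

def render(choice):
    """Turn a derivation (one production index per non-terminal) into a sentence."""
    s, v, o = choice
    return f"{SUBJ[s]} {VERB[v]} {OBJ[o]}"

def neighbours(choice):
    """Derivations that differ from `choice` in exactly one production choice."""
    s, v, o = choice
    for i in range(len(SUBJ)):
        if i != s:
            yield (i, v, o)
    for i in range(len(VERB)):
        if i != v:
            yield (s, i, o)
    for i in range(len(OBJ)):
        if i != o:
            yield (s, v, i)

def classifier_a(text):
    """Stand-in for one classifier under test (hypothetical)."""
    return "positive" if ("great" in text or "fine" in text) else "negative"

def classifier_b(text):
    """Stand-in for a second classifier; disagreement is the error oracle here."""
    return "positive" if "great" in text else "negative"

def is_erroneous(choice):
    text = render(choice)
    return classifier_a(text) != classifier_b(text)

def directed_search(budget=50):
    """Sample random derivations; once an error is found, prioritise its grammar
    neighbours (the robustness-based direction of the search)."""
    errors, frontier = set(), []
    for _ in range(budget):
        choice = frontier.pop() if frontier else (
            random.randrange(len(SUBJ)),
            random.randrange(len(VERB)),
            random.randrange(len(OBJ)),
        )
        if is_erroneous(choice) and choice not in errors:
            errors.add(choice)
            frontier.extend(neighbours(choice))
    return [render(c) for c in errors]

if __name__ == "__main__":
    for sentence in directed_search():
        print("potentially erroneous input:", sentence)
```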
Tue 10 Nov. Displayed time zone: (UTC) Coordinated Universal Time.
01:30 - 02:00
01:30 2m Talk | Correlations between Deep Neural Network Model Coverage Criteria and Model Quality | Research Papers | Shenao Yan (Rutgers University, USA), Guanhong Tao (Purdue University, USA), Xuwei Liu (Purdue University, USA), Juan Zhai (Rutgers University, USA), Shiqing Ma (Rutgers University, USA), Lei Xu (Nanjing University, China), Xiangyu Zhang (Purdue University)
01:33 1m Talk | Deep Learning Library Testing via Effective Model Generation (ACM SIGSOFT Distinguished Paper Award) | Research Papers | Zan Wang (Tianjin University, China), Ming Yan (Tianjin University, China), Junjie Chen (Tianjin University, China), Shuang Liu (Tianjin University, China), Dongdi Zhang (Tianjin University, China)
01:35 1m Talk | Detecting Numerical Bugs in Neural Network Architectures (ACM SIGSOFT Distinguished Paper Award) | Research Papers | Yuhao Zhang (Peking University), Luyao Ren (Peking University, China), Liqian Chen (National University of Defense Technology, China), Yingfei Xiong (Peking University), Shing-Chi Cheung (Hong Kong University of Science and Technology, China), Tao Xie (Peking University)
01:37 1m Talk | Dynamic Slicing for Deep Neural Networks | Research Papers | Ziqi Zhang (Peking University, China), Yuanchun Li (Microsoft Research, China), Yao Guo (Peking University), Xiangqun Chen (Peking University), Yunxin Liu (Microsoft Research, China)
01:39 1m Talk | Grammar Based Directed Testing of Machine Learning Systems | Journal First | Sakshi Udeshi (Singapore University of Technology and Design), Sudipta Chattopadhyay (Singapore University of Technology and Design)
01:41 1m Talk | Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks? | Research Papers | Fabrice Harel-Canada (University of California at Los Angeles, USA), Lingxiao Wang (University of California at Los Angeles, USA), Muhammad Ali Gulzar (University of California at Los Angeles, USA), Quanquan Gu (University of California at Los Angeles, USA), Miryung Kim (University of California at Los Angeles, USA)
01:43 1m Talk | Operational Calibration: Debugging Confidence Errors for DNNs in the Field | Research Papers | Zenan Li (Nanjing University, China), Xiaoxing Ma (Nanjing University, China), Chang Xu (Nanjing University, China), Jingwei Xu (Nanjing University, China), Chun Cao (Nanjing University, China), Jian Lu (Nanjing University, China)
01:45 15m Talk | Conversations on ML Testing 1 | Research Papers | Fabrice Harel-Canada (University of California at Los Angeles, USA), Ming Yan (Tianjin University, China), Sakshi Udeshi (Singapore University of Technology and Design), Shenao Yan (Rutgers University, USA), Yuhao Zhang (Peking University), Zenan Li (Nanjing University, China), Ziqi Zhang (Peking University, China), Moderator: Hamid Bagheri (University of Nebraska-Lincoln, USA)