A Comprehensive Study on Challenges in Deploying Deep Learning Based Software
Deep learning (DL) is becoming increasingly pervasive and is used in a wide range of software applications. These applications, referred to as DL-based software (DL software for short), integrate DL models trained on a large data corpus with DL programs written on top of DL frameworks such as TensorFlow and Keras. A DL program encodes the network structure of a desired DL model and the process by which the model is trained on the training data. Enormous research effort in software engineering has been devoted to helping developers of DL software meet the new challenges posed by DL. Existing studies focus on the development of DL software and extensively analyze faults in DL programs. However, the deployment of DL software has not been comprehensively studied. To fill this knowledge gap, this paper presents a comprehensive study of the challenges in deploying DL software. We mine and analyze 3,023 relevant posts from Stack Overflow, a popular Q&A website for developers, and show the increasing popularity and high difficulty of DL software deployment among developers. Through manual inspection of 769 sampled posts, we build a taxonomy of the specific challenges encountered by developers during DL software deployment, and we report a series of actionable implications for researchers, developers, and DL framework vendors.
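To make the abstract's notion of a "DL program" concrete, the sketch below is a minimal, hypothetical Keras example (not taken from the paper): it encodes a network structure, trains the model on data, and exports the trained artifact that would subsequently need to be deployed. The dataset, architecture, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch of a DL program: network structure + training process,
# followed by exporting the trained model for later deployment.
import tensorflow as tf

# A small benchmark dataset standing in for the "large data corpus".
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Network structure of the desired DL model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Training process: optimizer, loss, and the fitting loop.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

# Export the trained model as a SavedModel directory; converting and serving
# this artifact on a target platform (server, mobile, browser) is the
# deployment stage whose challenges the paper studies.
model.save("mnist_model")
```

In this setting, the development-stage faults analyzed by prior work live in the code above, whereas the deployment challenges studied in this paper arise after `model.save`, when the exported model must be converted, integrated, and run on a target platform.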
Tue 10 Nov (displayed time zone: Coordinated Universal Time, UTC)
01:00 - 01:30
01:00 2m Talk | A Comprehensive Study on Challenges in Deploying Deep Learning Based Software (Research Papers) | Zhenpeng Chen (Peking University, China), Yanbin Cao (Peking University, China), Yuanqiang Liu (Peking University, China), Haoyu Wang (Beijing University of Posts and Telecommunications), Tao Xie (Peking University), Xuanzhe Liu (Peking University, China)
01:03 1m Talk | A First Look at the Integration of Machine Learning Models in Complex Autonomous Driving Systems: A Case Study on Apollo (Industry Papers) | Zi Peng (Concordia University, Canada), Jinqiu Yang (Concordia University, Montreal, Canada), Tse-Hsun (Peter) Chen (Concordia University), Lei Ma (Kyushu University)
01:05 1m Talk | Enhancing the Interoperability between Deep Learning Frameworks by Model Conversion (Industry Papers) | Yu David Liu (SUNY Binghamton, USA), Cheng Chen (ByteDance, China), Ru Zhang (Microsoft Research), Tingting Qin (Microsoft Research, China), Xiang Ji (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research)
01:07 1m Talk | Estimating GPU Memory Consumption of Deep Learning Models (Industry Papers) | Yanjie Gao (Microsoft Research, China), Yu David Liu (SUNY Binghamton, USA), Hongyu Zhang (University of Newcastle, Australia), Zhengxian Li (Microsoft Research, China), Yonghao Zhu (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research)
01:09 1m Talk | IntelliCode Compose: Code Generation using Transformer (Industry Papers) | Alexey Svyatkovskiy (Microsoft), Shao Kun Deng (Microsoft Corporation), Shengyu Fu (Microsoft, USA), Neel Sundaresan (Microsoft Corporation)
01:11 1m Talk | Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving (Industry Papers) | Jinhan Kim (KAIST), Jeongil Ju (Hyundai Motor Group, South Korea), Robert Feldt (Chalmers University of Technology, Sweden), Shin Yoo (Korea Advanced Institute of Science and Technology)
01:13 17m | Conversations on ML In Practice (Research Papers) | Sidong Feng (Australian National University, Australia), Tse-Hsun (Peter) Chen (Concordia University), Yanbin Cao (Peking University, China), Yanjie Gao (Microsoft Research, China), Zhenpeng Chen (Peking University, China), Moderator: Joshua Garcia (University of California, Irvine)