Enhancing the Interoperability between Deep Learning Frameworks by Model Conversion
Deep learning (DL) has become one of the most successful machine learning techniques. To achieve optimal development results, there is an emerging need for interoperability between DL frameworks, so that trained model files and training/serving programs can be reused across frameworks. Faithful model conversion, in which a source model is transformed into a semantically equivalent model in a target framework's format, is a promising technology for enhancing such interoperability. However, several major challenges must be addressed. First, there are apparent discrepancies between DL frameworks. Second, understanding the semantics of a source model can be difficult because of framework-specific representation schemes and optimizations. Lastly, there exist a large number of DL frameworks, so building a direct converter for every source-target pair would require significant engineering effort.
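One concrete instance of the framework discrepancies mentioned above is tensor data layout: some frameworks store image batches as NHWC while others use NCHW, so a faithful converter must permute dimensions when crossing formats. The following is a minimal, hypothetical sketch in pure Python (not MMdnn code); the function names are illustrative.

```python
# Hypothetical illustration of one discrepancy between DL frameworks:
# some frameworks lay out image batches as NHWC (batch, height, width,
# channel) while others use NCHW. A faithful converter must permute
# tensor dimensions accordingly. Pure-Python sketch, no real framework.

def nhwc_to_nchw_shape(shape):
    """Map an NHWC shape tuple to the equivalent NCHW shape."""
    n, h, w, c = shape
    return (n, c, h, w)

def transpose_nhwc_to_nchw(batch):
    """Permute a nested-list tensor from NHWC to NCHW layout."""
    return [[[[batch[n][h][w][c]
               for w in range(len(batch[0][0]))]
              for h in range(len(batch[0]))]
             for c in range(len(batch[0][0][0]))]
            for n in range(len(batch))]

# A tiny 1x2x2x3 "image batch": each value encodes its (h, w, c) position.
batch = [[[[100 * h + 10 * w + c for c in range(3)]
           for w in range(2)]
          for h in range(2)]]
out = transpose_nhwc_to_nchw(batch)
print(nhwc_to_nchw_shape((1, 2, 2, 3)))   # (1, 3, 2, 2)
print(out[0][2][1][0])                    # channel 2 at h=1, w=0 -> 102
```

Weight tensors need analogous permutations, which is one reason naive file-format translation alone does not yield a faithful conversion.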
In this paper, we propose MMdnn, an open-source, comprehensive, and faithful model conversion tool for popular DL frameworks. MMdnn adopts a novel unified intermediate representation (IR)-based methodology to systematically address the conversion challenges. A source model is first transformed into an intermediate computation graph expressed in MMdnn's simple graph-based IR, and then into the target framework's format; each framework thus needs only converters to and from the IR rather than to every other framework, which greatly reduces engineering complexity. Since the model structure expressed by developers may have been altered by DL frameworks (e.g., through graph optimization), MMdnn attempts to recover the original high-level neural network layers via a pattern-matching-based method for better semantic comprehension. Meanwhile, a piece of model construction code is generated to facilitate later retraining or serving. MMdnn implements an extensible conversion architecture from a compilation point of view, which eases community contributions to support new DL operators and frameworks. MMdnn has reached good maturity and quality, and has been applied to convert production models.
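The IR-based pipeline described above can be sketched as two small operator-mapping tables: with M source and N target frameworks, only M + N mappings are maintained instead of M x N direct converters. This is an illustrative toy, not MMdnn's actual API; the names `SOURCE_TO_IR`, `IR_TO_TARGET`, and the operator strings are assumptions for demonstration.

```python
# Minimal sketch of unified-IR model conversion (hypothetical, not MMdnn code).
# Step 1: parse source-framework ops into framework-neutral IR nodes.
# Step 2: emit target-framework ops from the IR graph.

from dataclasses import dataclass, field

@dataclass
class IRNode:
    op: str                                  # framework-neutral operator name
    inputs: list = field(default_factory=list)

# Per-framework operator tables (illustrative entries only).
SOURCE_TO_IR = {"tf.nn.conv2d": "Conv", "tf.nn.relu": "Relu"}
IR_TO_TARGET = {"Conv": "torch.nn.Conv2d", "Relu": "torch.nn.ReLU"}

def to_ir(source_ops):
    """Transform a source op sequence into an IR computation graph."""
    return [IRNode(SOURCE_TO_IR[op]) for op in source_ops]

def emit_target(ir_graph):
    """Emit the target framework's operators from the IR graph."""
    return [IR_TO_TARGET[node.op] for node in ir_graph]

ir = to_ir(["tf.nn.conv2d", "tf.nn.relu"])
print(emit_target(ir))   # ['torch.nn.Conv2d', 'torch.nn.ReLU']
```

Adding a new framework to such an architecture means contributing one parser (source-to-IR) or one emitter (IR-to-target), which is the extensibility property the abstract attributes to MMdnn's design.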
Tue 10 Nov (all times in UTC)

Session: 01:00 - 01:30

- 01:00 (2m talk) A Comprehensive Study on Challenges in Deploying Deep Learning Based Software. Research Papers. Zhenpeng Chen (Peking University, China), Yanbin Cao (Peking University, China), Yuanqiang Liu (Peking University, China), Haoyu Wang (Beijing University of Posts and Telecommunications), Tao Xie (Peking University), Xuanzhe Liu (Peking University, China). DOI, Pre-print.
- 01:03 (1m talk) A First Look at the Integration of Machine Learning Models in Complex Autonomous Driving Systems: A Case Study on Apollo. Industry Papers. pengzi (Concordia University, Canada), Jinqiu Yang (Concordia University, Montreal, Canada), Tse-Hsun (Peter) Chen (Concordia University), Lei Ma (Kyushu University). DOI.
- 01:05 (1m talk) Enhancing the Interoperability between Deep Learning Frameworks by Model Conversion. Industry Papers. Yu David Liu (SUNY Binghamton, USA), Cheng Chen (ByteDance, China), Ru Zhang (Microsoft Research), Tingting Qin (Microsoft Research, China), Xiang Ji (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research). DOI, Pre-print.
- 01:07 (1m talk) Estimating GPU Memory Consumption of Deep Learning Models. Industry Papers. Yanjie Gao (Microsoft Research, China), Yu David Liu (SUNY Binghamton, USA), Hongyu Zhang (University of Newcastle, Australia), lizhengxian (Microsoft Research, China), Yonghao Zhu (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research). DOI, Pre-print.
- 01:09 (1m talk) IntelliCode Compose: Code Generation using Transformer. Industry Papers. Alexey Svyatkovskiy (Microsoft), Shao Kun Deng (Microsoft Corporation), Shengyu Fu (Microsoft, USA), Neel Sundaresan (Microsoft Corporation). DOI, Pre-print.
- 01:11 (1m talk) Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving. Industry Papers. Jinhan Kim (KAIST), Jeongil Ju (Hyundai Motor Group, South Korea), Robert Feldt (Chalmers University of Technology, Sweden), Shin Yoo (Korea Advanced Institute of Science and Technology). DOI, Pre-print.
- 01:13 (17m) Conversations on ML In Practice. Research Papers. Sidong Feng (Australian National University, Australia), Tse-Hsun (Peter) Chen (Concordia University), Yanbin Cao (Peking University, China), Yanjie Gao (Microsoft Research, China), Zhenpeng Chen (Peking University, China); Moderator: Joshua Garcia (University of California, Irvine).