Tue 10 Nov 2020 01:05 - 01:06 at Virtual room 1 - ML In Practice

Deep learning (DL) has become one of the most successful machine learning techniques. To achieve optimal development results, there is an emerging requirement for interoperability between DL frameworks, so that trained model files and training/serving programs can be reused across them. Faithful model conversion, in which a source model is transformed into a semantically equivalent model in a target framework format, is a promising technique for enhancing framework interoperability. However, several major challenges need to be addressed. First, there are apparent discrepancies between DL frameworks. Second, understanding the semantics of a source model can be difficult due to framework-specific schemes and optimizations. Lastly, there exist a large number of DL frameworks, which potentially entails significant engineering effort.

In this paper, we propose MMdnn, an open-source, comprehensive, and faithful model conversion tool for popular DL frameworks.
MMdnn adopts a novel unified intermediate representation (IR)-based methodology to systematically handle the conversion challenges. The source model is first transformed into an intermediate computation graph expressed in MMdnn's simple graph-based IR, and then into the target framework format, which greatly reduces the engineering complexity. Since the model structure written by developers may have been altered by DL frameworks (e.g., through graph optimization), MMdnn tries to recover the original high-level neural network layers for better semantic comprehension via a pattern-matching-style method. At the same time, a piece of model construction code is generated to facilitate later retraining or serving. MMdnn implements an extensible conversion architecture from a compilation point of view, which eases community contributions to support new DL operators and frameworks. MMdnn has reached good maturity and quality, and has been applied to convert production models.
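To make the IR-based conversion flow described above more concrete, here is a minimal, illustrative Python sketch of the general idea: source-framework ops are mapped onto a simple graph IR, a pattern-matching pass recovers a high-level layer that the source framework had split apart (here, a Conv followed by a BiasAdd), and model construction code is emitted for the target framework. All class, field, and op names below are assumptions chosen for illustration; they do not reflect MMdnn's actual IR schema or API.

    # Illustrative sketch only: hypothetical names, not MMdnn's real IR or API.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class IRNode:
        """One node of a framework-neutral computation graph."""
        name: str
        op: str                       # e.g. "Conv", "BiasAdd", "Relu"
        inputs: List[str] = field(default_factory=list)
        attrs: Dict[str, object] = field(default_factory=dict)

    def to_ir(source_ops: List[dict]) -> List[IRNode]:
        """Front-end step: map source-framework ops onto IR nodes."""
        return [IRNode(o["name"], o["type"], o.get("inputs", []), o.get("attrs", {}))
                for o in source_ops]

    def recover_layers(graph: List[IRNode]) -> List[IRNode]:
        """Pattern-matching-style recovery: fuse Conv followed by BiasAdd
        back into a single high-level Conv layer with a bias term."""
        fused, skip = [], set()
        by_input = {n.inputs[0]: n for n in graph if n.op == "BiasAdd" and n.inputs}
        for node in graph:
            if node.name in skip:
                continue
            follower = by_input.get(node.name)
            if node.op == "Conv" and follower is not None:
                node.attrs["use_bias"] = True
                skip.add(follower.name)
            fused.append(node)
        return fused

    def emit_pytorch_code(graph: List[IRNode]) -> str:
        """Back-end step: emit model construction code for the target framework.
        Only Conv and Relu are handled, purely for illustration."""
        lines = ["import torch.nn as nn", "", "layers = []"]
        for n in graph:
            if n.op == "Conv":
                lines.append(
                    f"layers.append(nn.Conv2d({n.attrs['in']}, {n.attrs['out']}, "
                    f"kernel_size={n.attrs['k']}, bias={n.attrs.get('use_bias', False)}))")
            elif n.op == "Relu":
                lines.append("layers.append(nn.ReLU())")
        lines.append("model = nn.Sequential(*layers)")
        return "\n".join(lines)

    if __name__ == "__main__":
        source = [
            {"name": "conv1", "type": "Conv", "attrs": {"in": 3, "out": 16, "k": 3}},
            {"name": "bias1", "type": "BiasAdd", "inputs": ["conv1"]},
            {"name": "relu1", "type": "Relu", "inputs": ["bias1"]},
        ]
        ir = recover_layers(to_ir(source))
        print(emit_pytorch_code(ir))

The two-step structure is what reduces engineering complexity in an IR-based design: supporting a new framework means writing one new front end or back end against the shared IR, rather than pairwise converters for every framework combination.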

Tue 10 Nov
Times are displayed in time zone: (UTC) Coordinated Universal Time

01:00 - 01:02
Talk
Research Papers
Zhenpeng Chen (Peking University, China), Yanbin Cao (Peking University, China), Yuanqiang Liu (Peking University, China), Haoyu Wang (Beijing University of Posts and Telecommunications), Tao Xie (Peking University), Xuanzhe Liu (Peking University, China)
DOI Pre-print
01:03 - 01:04
Talk
Industry Papers
pengzi (Concordia University, Canada), Jinqiu Yang (Concordia University, Montreal, Canada), Tse-Hsun (Peter) Chen (Concordia University), Lei Ma (Kyushu University)
DOI
01:05 - 01:06
Talk
Industry Papers
Yu David Liu (SUNY Binghamton, USA), Cheng Chen (ByteDance, China), Ru Zhang (Microsoft Research), Tingting Qin (Microsoft Research, China), Xiang Ji (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research)
DOI
01:07 - 01:08
Talk
Industry Papers
Yanjie Gao (Microsoft Research, China), Yu David Liu (SUNY Binghamton, USA), Hongyu Zhang (University of Newcastle, Australia), lizhengxian (Microsoft Research, China), Yonghao Zhu (Microsoft Research, China), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research)
DOI
01:09 - 01:10
Talk
Industry Papers
Alexey Svyatkovskiy (Microsoft), Shao Kun Deng (Microsoft Corporation), Shengyu Fu (Microsoft, USA), Neel Sundaresan (Microsoft Corporation)
DOI Pre-print
01:11 - 01:12
Talk
Industry Papers
Jinhan Kim (KAIST), Jeongil Ju (Hyundai Motor Group, South Korea), Robert Feldt (Chalmers University of Technology, Sweden), Shin Yoo (Korea Advanced Institute of Science and Technology)
DOI Pre-print
01:13 - 01:30
Research Papers
Sidong Feng (Australian National University, Australia), Tse-Hsun (Peter) Chen (Concordia University), Yanbin Cao (Peking University, China), Yanjie Gao (Microsoft Research, China), Zhenpeng Chen (Peking University, China), M: Joshua Garcia (University of California, Irvine)