Papers were accepted by ASAP'19, JPDC and ICPP'19
The paper entitled Resilient Neural Network Training for Accelerators with Computing Errors was accepted as a short paper by the 30th IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP'19). ASAP is a CORE-A conference.
Abstract: With the advancements of neural networks, customized accelerators are increasingly adopted in massive AI applications. To gain higher energy efficiency or performance, many hardware design optimizations such as near-threshold logic or overclocking can be utilized. In these cases, computing errors may occur, and such errors are difficult to capture by conventional training on general-purpose processors (GPPs). Directly applying offline-trained neural network models to accelerators with errors may lead to considerable prediction accuracy loss. To address this problem, we explore the resilience of neural network models and relax the accelerator design constraints to enable aggressive design options. First, we propose to train the neural network models using the accelerators' forward computing results such that the models can learn both the data and the computing errors. In addition, we observe that some of the neural network layers are more sensitive to the computing errors than others. Based on this observation, we schedule the most sensitive layer to the attached GPP to reduce the negative influence of the computing errors. According to the experiments, the neural network models obtained from the proposed training significantly outperform the original models when the CNN accelerators are affected by computing errors.
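To illustrate the fault-aware training idea described in the abstract, below is a minimal PyTorch-style sketch that emulates accelerator computing errors in the forward pass during training. The random noise model, the error rate, and the choice to keep the final classifier exact (standing in for the sensitive layer scheduled to the GPP) are illustrative assumptions, not the paper's exact setup, which trains with the accelerator's own forward results.

```python
# Minimal sketch of error-injected forward training (assumption: a random perturbation
# model stands in for real accelerator computing errors).
import torch
import torch.nn as nn

class ErrorInjectedConv(nn.Module):
    """Wraps a conv layer and perturbs its forward output to emulate computing errors."""
    def __init__(self, conv, error_rate=1e-3, error_scale=0.5):
        super().__init__()
        self.conv = conv
        self.error_rate = error_rate      # fraction of output activations affected
        self.error_scale = error_scale    # magnitude of the injected perturbation

    def forward(self, x):
        y = self.conv(x)
        if self.training:
            mask = (torch.rand_like(y) < self.error_rate).float()
            noise = self.error_scale * torch.randn_like(y)
            y = y + mask * noise          # the model learns both the data and the errors
        return y

# Tiny example network: convolutions run "on the accelerator" (error-injected),
# while the final classifier stays exact, standing in for the layer kept on the GPP.
model = nn.Sequential(
    ErrorInjectedConv(nn.Conv2d(3, 16, 3, padding=1)),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                    # no error injection here
)
```

Training such a model with a standard optimizer and loss proceeds as usual; only the forward computation of the wrapped layers is perturbed.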
The paper entitled Load-balancing Distributed Outer Joins through Operator Decomposition was accepted by the Journal of Parallel and Distributed Computing (JPDC). The whole review process took about 14 months with three rounds of revisions. JPDC is a CORE-A* journal.
Abstract: High-performance data analytics largely relies on being able to efficiently execute various distributed data operators such as distributed joins. So far, a large number of join methods have been proposed and evaluated in parallel and distributed environments. However, most of them focus on inner joins, and there is little published work providing detailed implementations and analysis of outer joins. In this work, we present POPI (Partial Outer join & Partial Inner join), a novel method to load-balance large parallel outer joins by decomposing them into two operations: a large outer join over data that does not present significant skew in the input and an inner join over data presenting significant skew. We present the detailed implementation of our approach and show that POPI is implementable over a variety of architectures and underlying join implementations. Moreover, our experimental evaluation over a distributed memory platform also demonstrates that the proposed method is able to improve outer join performance under varying data skew and to present excellent load-balancing properties, compared to current approaches.
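As a rough illustration of the decomposition idea, the following toy, single-node Python sketch splits a left outer join by key frequency: when the "hot" keys are, by construction, keys that appear in the inner relation, the outer join over the skewed part degenerates to an inner join. The threshold and column names are hypothetical, and the paper's distributed redistribution strategy is not shown.

```python
# Toy sketch of decomposing a left outer join R LOJ S on "key" into an outer join over
# non-skewed data plus an inner join over skewed data (assumptions: skew is measured as
# key frequency in S, and HOT_THRESHOLD is an arbitrary illustrative cutoff).
from collections import Counter, defaultdict

def left_outer_join(R, S):
    idx = defaultdict(list)
    for s in S:
        idx[s["key"]].append(s)
    out = []
    for r in R:
        matches = idx.get(r["key"])
        if matches:
            out.extend({**r, **s} for s in matches)
        else:
            out.append({**r, "s_val": None})   # dangling R tuple padded with NULL
    return out

def inner_join(R, S):
    idx = defaultdict(list)
    for s in S:
        idx[s["key"]].append(s)
    return [{**r, **s} for r in R for s in idx.get(r["key"], [])]

def decomposed_outer_join(R, S, hot_threshold=100):
    # Keys heavily replicated in S are the source of skew.
    freq = Counter(s["key"] for s in S)
    hot = {k for k, c in freq.items() if c >= hot_threshold}

    R_cold = [r for r in R if r["key"] not in hot]
    R_hot  = [r for r in R if r["key"] in hot]
    S_cold = [s for s in S if s["key"] not in hot]
    S_hot  = [s for s in S if s["key"] in hot]

    # Every hot key appears in S by construction, so every R_hot row finds a match:
    # the skewed part needs only an inner join, which is easier to load-balance.
    return left_outer_join(R_cold, S_cold) + inner_join(R_hot, S_hot)
```

In a distributed setting, the non-skewed part can be hash-redistributed as usual, while the skewed inner-join part can be handled with duplication or broadcast to avoid overloading a single node.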
The paper entitled FlowCon: Elastic Flow Configuration for Containerized Deep Learning Applications was accepted by the 48th International Conference on Parallel Processing (ICPP'19). ICPP is a CORE-A conference, and the acceptance rate is 26% (106/405).
Abstract: An increasing number of companies are using data analytics to improve their products, services, and business processes. However, learning knowledge effectively from massive data sets always involves nontrivial computational resources. Most businesses thus choose to migrate their hardware needs to a remote cluster computing service (e.g., AWS) or to an in-house cluster facility which is often run at its resource capacity. In such scenarios, where jobs compete for available resources, utilizing resources effectively to achieve high-performance data analytics becomes desirable. Although cluster resource management is a fruitful research area having made many advances (e.g., YARN, Kubernetes), few projects have investigated how further optimizations can be made specifically for training multiple machine learning (ML) / deep learning (DL) models. In this work, we introduce FlowCon, a system which is able to monitor loss functions of ML/DL jobs at runtime, and thus to make elastic decisions on resource configuration. We present a detailed design and implementation of FlowCon, and conduct extensive experiments over various DL models. Our experimental results show that FlowCon can significantly improve DL job completion time and resource utilization efficiency, compared to existing approaches. Specifically, FlowCon can reduce the completion time by up to 42.06% for a specific job without sacrificing the overall makespan, in the presence of various DL job workloads.
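The sketch below illustrates the general flavor of loss-aware elastic resource configuration: jobs whose loss is still improving quickly receive a larger share of the resource budget. This is not FlowCon's actual policy; the job names, progress metric, and CPU budget are purely illustrative assumptions.

```python
# Toy loss-aware reallocation policy (assumption: this is a stand-in heuristic, not the
# FlowCon algorithm; total_cpu and the improvement metric are hypothetical).
def reallocate(jobs, total_cpu=32, floor=1):
    """jobs: {job_name: recent loss values, most recent last}. Returns CPU shares."""
    progress = {}
    for name, losses in jobs.items():
        if len(losses) < 2 or losses[-2] == 0:
            progress[name] = 1.0                     # no signal yet: neutral weight
        else:
            # Relative loss improvement over the last interval; clamp at a small floor
            # so stalled jobs keep a minimal share instead of starving.
            progress[name] = max((losses[-2] - losses[-1]) / abs(losses[-2]), 0.01)
    total = sum(progress.values())
    return {name: max(floor, round(total_cpu * p / total)) for name, p in progress.items()}

# Example: the fast-improving job is granted most of the CPU budget.
shares = reallocate({"resnet50": [2.1, 1.6, 1.2], "vgg16": [1.9, 1.85, 1.84]})
```

A real system would run such a policy periodically inside the cluster manager, translating the computed shares into container resource limits.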