The continued growth of Cloud computing is prompting individuals and organisations to rethink their IT strategies. In response to this growth and the rising demand for Cloud services, Cloud providers continuously update their infrastructure. Recently, accelerator units such as Graphics Processing Units (GPUs) have been introduced into Cloud computing, increasing the hardware heterogeneity of the Cloud infrastructure. This heterogeneity raises new issues: for instance, managing a heterogeneous Cloud infrastructure while maintaining Quality of Service (QoS) and minimising operational costs becomes a substantial challenge, so new management techniques are needed. In this paper, we propose a systematic architecture for managing heterogeneous GPUs in a Cloud environment, with performance and energy consumption as key factors. As a first step towards implementing the proposed architecture, we develop a Heterogeneous GPUs analyser, which quantitatively compares and analyses the behaviour of two different GPU architectures, NVIDIA Fermi and Kepler, in terms of performance, power, and energy consumption. The experimental results show that adequate allocation of the number of blocks and threads per block yields energy savings of 13.1% on the Fermi GPU and 11.2% on the Kepler GPU.
This article was published in Electronic Notes in Theoretical Computer Science.
Find the paper here: www.sciencedirect.com/science/article/pii/S1571066118300562