Scientific Publications

Performance and Energy-based Cost Prediction of Virtual Machines Auto-Scaling in Cloud

Virtual Machine (VM) auto-scaling is an important technique for provisioning additional resource capacity in a Cloud environment. It allows VMs to dynamically increase or decrease their resources as needed to meet Quality of Service (QoS) requirements. However, the auto-scaling mechanism can be slow to initiate (on the order of a minute), which is unacceptable for VMs that need to scale up/out during a computation, and it also incurs additional costs due to increased energy overhead.
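The kind of auto-scaling decision described above can be illustrated with a minimal threshold-based policy. This is a generic sketch, not the paper's prediction model; the function name and thresholds are assumptions for illustration only.

```python
# Minimal sketch of a threshold-based VM auto-scaling policy
# (illustrative names and thresholds; not the paper's cost model).

def scaling_decision(cpu_utilisation, scale_up_threshold=0.8,
                     scale_down_threshold=0.3):
    """Return +1 to add a VM, -1 to remove one, 0 to keep capacity."""
    if cpu_utilisation > scale_up_threshold:
        return +1   # scale out: provision an additional VM
    if cpu_utilisation < scale_down_threshold:
        return -1   # scale in: release a VM to save energy and cost
    return 0        # utilisation within bounds: no action
```

A real controller would also weigh the minute-scale provisioning delay and the energy overhead noted above before acting on the raw threshold signal.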

A Holistic Resource Management for Graphics Processing Units in Cloud Computing

The continued development of Cloud computing is encouraging individuals and organisations to rethink their IT strategies. In response to this development and the growing demand for Cloud computing, providers continuously update the Cloud infrastructure to meet that demand. Recently, accelerator units such as Graphics Processing Units (GPUs) have been introduced into Cloud computing, increasing the hardware heterogeneity of the Cloud infrastructure. With this increase in hardware heterogeneity, new issues arise.

Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs

Python is a popular programming language due to the simplicity of its syntax, while still achieving good performance despite being an interpreted language. Its adoption by multiple scientific communities has led to the emergence of a large number of libraries and modules, which has helped place Python at the top of the list of programming languages [1]. Task-based programming has been proposed in recent years as an alternative parallel programming model.
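The task-based model mentioned above can be sketched with Python's standard library rather than PyCOMPSs itself: each invocation of a "task" function is submitted asynchronously and its result collected later, which is the same pattern PyCOMPSs applies transparently to decorated functions.

```python
# Task-based parallelism illustrated with the standard library
# (concurrent.futures), NOT the PyCOMPSs API: tasks are submitted
# asynchronously and synchronised only when results are needed.
from concurrent.futures import ThreadPoolExecutor

def block_multiply(a, b):
    """Toy stand-in for a linear algebra kernel applied to one block."""
    return a * b

with ThreadPoolExecutor(max_workers=4) as pool:
    # Submit one task per block; calls return immediately with futures.
    futures = [pool.submit(block_multiply, i, i) for i in range(4)]
    # Synchronisation point: gather the results of all tasks.
    results = [f.result() for f in futures]
```

In PyCOMPSs the explicit executor disappears: functions are marked as tasks with a decorator and the runtime schedules them across the distributed infrastructure based on data dependencies.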

Enabling Python to execute efficiently in heterogeneous distributed infrastructures with PyCOMPSs

Python has been adopted as a programming language by a large number of scientific communities. In addition to its easy programming interface, the large number of libraries and modules contributed by its community has taken this language to the top of the list of the most popular programming languages in scientific applications. However, one major drawback of Python is its lack of support for concurrency and parallelism.

PaaS-IaaS Inter-Layer Adaptation in an Energy-Aware Cloud Environment

Cloud computing providers resort to a variety of techniques to reduce energy consumption at each level of the cloud computing stack. Most of these techniques focus on resource-level energy optimisation at the IaaS layer. This paper argues that further energy gains can be obtained through cooperation between the PaaS layer (in charge of hosting the application/service) and the IaaS layer (in charge of handling the computing resources).

Towards an Energy-aware Framework for application development and execution in Heterogeneous parallel architectures

In recent years, hardware in HPC environments has become ever more heterogeneous, both to improve computational performance and to manage power and energy constraints. This increase in heterogeneity requires middleware abstractions that hide the additional complexity it brings. In this paper we present a self-adaptation framework covering automated configuration, deployment, and redeployment of applications across different heterogeneous infrastructures.

Resource Boxing: Converting Realistic Cloud Task Utilization Patterns for Theoretical Scheduling

Scheduling is a core component of distributed systems, determining the optimal allocation of tasks to servers. This is challenging in modern Cloud computing systems, which comprise millions of tasks executing on thousands of heterogeneous servers. Theoretical scheduling can provide complete and sophisticated algorithms that optimise a single objective function. However, Cloud computing systems pursue multiple and often conflicting objectives in provisioning high levels of performance, availability, reliability, and energy efficiency.

A Unified Model for Holistic Power Usage in Cloud Datacenter Servers

Cloud datacenters are compute facilities formed by hundreds to thousands of heterogeneous servers that require significant power to operate effectively. Servers are composed of multiple interacting sub-systems, including applications, microelectronic processors, and cooling, each of which exposes its power profile via different parameters. What is presently unknown is how to accurately model the holistic power usage of the entire server when all these sub-systems are considered together.
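A holistic server power model of the kind motivated above can be sketched, in its simplest additive form, as an idle floor plus per-sub-system contributions scaled by utilisation. The sub-system names and coefficients below are illustrative assumptions, not the paper's actual model.

```python
# Minimal additive server power model sketch (assumed structure,
# not the unified model from the paper): total power is an idle
# floor plus each sub-system's peak power scaled by its utilisation.

def server_power(util, idle_power, subsystem_peaks):
    """Estimate power draw (watts) from per-sub-system utilisation in [0, 1]."""
    dynamic = sum(subsystem_peaks[name] * util[name]
                  for name in subsystem_peaks)
    return idle_power + dynamic

# Example: CPU at 50% of a 100 W peak and cooling at 20% of a 50 W
# peak, on top of a 70 W idle floor: 70 + 50 + 10 = 130 W.
p = server_power({"cpu": 0.5, "cooling": 0.2}, idle_power=70,
                 subsystem_peaks={"cpu": 100, "cooling": 50})
```

A purely additive model like this ignores interactions between sub-systems (e.g. cooling power rising with processor load), which is precisely the gap a holistic model aims to close.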