Designing and developing software that executes efficiently in a distributed environment of fairly standard, homogeneous processing nodes is already a difficult exercise. This complexity explodes when targeting a heterogeneous environment composed not only of distributed multi-core CPU nodes but also of accelerators such as many-core CPUs, GPUs and FPGAs.
One of the main objectives of the TANGO Project is to optimise the energy usage of applications in an HPC environment, including the prospect of handling heterogeneous devices such as CPUs, GPUs and FPGAs.
A key prerequisite for optimisation is the ability to monitor such infrastructures and to ensure the data obtained is accurate and consistent. The accuracy of this data is particularly important when constructing power models that can be used to estimate future power consumption based upon expected utilisation. In this blog we provide key recommendations and findings from performing calibration on our infrastructure.
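As an illustration of the idea, the simplest utilisation-based power model is a linear interpolation between a calibrated idle wattage and full-load wattage. This is only a sketch of the general technique, not TANGO's actual model, and the wattage figures below are hypothetical defaults rather than measurements from our testbed:

```python
def estimated_power_watts(utilisation, p_idle=70.0, p_max=220.0):
    """First-order linear power model: interpolate between measured idle
    power and full-load power as CPU utilisation goes from 0.0 to 1.0.

    The p_idle and p_max defaults are illustrative; in practice they
    come from calibrating the node at 0% and 100% load."""
    if not 0.0 <= utilisation <= 1.0:
        raise ValueError("utilisation must be between 0 and 1")
    return p_idle + (p_max - p_idle) * utilisation
```

Calibration then amounts to measuring actual power at several utilisation levels and checking how far the node deviates from this linear assumption.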
When designing the architecture of the TANGO project we saw the need for a central component that speeds up the process of building applications for different heterogeneous target architectures with different optimisations, and of deploying them on the different testbeds, especially in the HPC use case. This led to the creation of the Application Lifecycle Deployment Engine, or ALDE. Let's check step by step how it will be shaped and which functionality and steps it is going to perform.
We provide you with tools for developing and operating applications for heterogeneous hardware that are energy and performance aware.
Our team has been working hard over the last few months to release the first version of the Toolbox and to establish an initiative to strengthen the market for heterogeneous hardware and software. The most relevant developments during these months are:
One of the main objectives of the TANGO Project is to optimise the energy usage of applications in a heterogeneous environment, where by heterogeneous we mean a mixture of different processor devices such as CPUs, GPUs, FPGAs, DSPs and so on. One of the main challenges is to monitor the energy usage of those devices without the need for intrusive measurements, such as attaching physical probes.
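On Linux with Intel CPUs, one widely used non-intrusive option is the RAPL energy counters that the kernel exposes through the powercap sysfs interface. The following is a minimal sketch of that general approach, assuming the `/sys/class/powercap/intel-rapl:0` path is available on the node; it is not the TANGO monitoring code itself:

```python
import time

def average_power_watts(energy_start_uj, energy_end_uj, seconds, max_uj=2**32):
    """Average power (W) between two cumulative RAPL energy readings in
    microjoules, compensating for a single counter wrap-around."""
    delta_uj = energy_end_uj - energy_start_uj
    if delta_uj < 0:  # counter wrapped around its maximum
        delta_uj += max_uj
    return delta_uj / 1e6 / seconds

def read_package_energy_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the cumulative CPU package energy counter (Linux powercap/RAPL)."""
    with open(path) as f:
        return int(f.read())

if __name__ == "__main__":
    try:
        e0 = read_package_energy_uj()
        time.sleep(1.0)
        e1 = read_package_energy_uj()
        print(f"CPU package power: {average_power_watts(e0, e1, 1.0):.1f} W")
    except OSError:
        print("RAPL powercap interface not available on this system")
```

Because the counter is maintained by the hardware and read through software alone, no physical probe needs to be attached to the node.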
During the last decade the software and computing industry has undergone a revolutionary change that deeply impacts how developers deliver software. The mainstream computing landscape has shifted from single-processor machines to multi-core, multi-type machines. This “multi-everything” trend has changed all types of computers and devices (phones, tablets, IoT devices and sensors), with the added option of using remote cloud infrastructures, which introduce further heterogeneity.
The current computing ecosystem is becoming more and more heterogeneous. Trends in computer architecture focus on combining different computing devices (CPUs, GPUs and FPGAs) and memories in a single chip or computing node, with the aim of providing better-suited devices for the different types of algorithms and applications.
“In the twilight of Moore’s Law, the transitions to multicore processors, GPU computing, and HaaS cloud computing are not separate trends, but aspects of a single trend – mainstream computers from desktops to ‘smartphones’ are being permanently transformed into heterogeneous supercomputer clusters. Henceforth, a single compute-intensive application will need to harness different kinds of cores, in immense numbers, to get its job done.
The free lunch is over. Now welcome to the hardware jungle.” — Herb Sutter, Welcome to the Jungle