Parallel Programming and Optimization

Date: Mon, Apr 27 - Tue, Apr 28, 2015

Time: 09:30

Speakers: Pedro Valero, BCAM

For years, the increase in hardware resources was aimed mainly at sequential processors. This growth did not require retraining programmers or rewriting programs for emerging computer architectures. Unfortunately, these techniques have met their limits, and pushing them further no longer yields much better performance.

To maintain the same rate of growth in processor performance, most hardware companies are designing and developing new parallel architectures. The main difficulty with these processors lies in programming applications so that they run efficiently. This course aims to facilitate the understanding of new and current programming tools for exploiting such parallel processors.

RECOMMENDED PREREQUISITES
Most parallel programming tools are actually extensions of the most common and widespread programming languages: C, C++, Fortran, Python... Experience with C is recommended, as it will be our base programming language. No background in parallel computer architectures or parallel programming is necessary.

If possible, we encourage you to bring your own laptop so that you can access our server and carry out the lab sessions, in which you will practice on parallel computers using the aforementioned programming models.

PROGRAMME
After a brief introduction to parallel computer architectures (distributed memory, shared memory and hardware accelerators), the main and most widespread parallel programming paradigms are presented: MPI, OpenMP and CUDA.

We present how to compile, implement, launch and exploit code in most current parallel computing environments: shared memory, distributed memory and hardware accelerators.

To make the parallel programming tools easier to explain and understand, we focus on simple problems such as vector-vector, matrix-vector and matrix-matrix addition/multiplication.

Monday (27/04/2015)
9:30 - 11:30 Sequential Optimizations-Shared Memory Programming (OpenMP)
11:30 - 13:30 Lab

Tuesday (28/04/2015)
9:30 - 11:30 Distributed Memory Programming (MPI)-GPU Programming (CUDA)
11:30 - 13:30 Lab

*Registration is required: to register, send an e-mail to pvalero@bcamath.org

Organizers:

BCAM 

Confirmed speakers:

Pedro Valero, BCAM
