Programming of Parallel Computers, 10 hp, 2013
Aims of the course
Today, most computers are parallel computers. A common laptop often has a dual-core processor with a graphics processing unit (GPU), while a stationary PC can contain two quad-core processors and a very powerful GPU. Moreover, it is easy to connect several PCs into a cluster, forming a powerful parallel computer. At the same time, it has become more difficult for the programmer to exploit the full capacity of the computer. To use a multi-core computer to its full potential, programmers must explicitly parallelize their codes over the multiple cores and the GPU.
The aims of the course are to give basic knowledge of parallel computers, algorithms and programming; to give knowledge of fundamental numerical algorithms and software for different parallel computers; and to give skills in programming parallel computers, ranging from dual-core laptops to large clusters of PCs.
Classification of parallel computers. Different forms of memory organisation and program control. Different forms of parallelism. Programming models: programming in a local name space using MPI, programming in a global name space using OpenMP and Pthreads, and programming GPUs using CUDA. Data partitioning and load balancing algorithms. Measures of performance: speedup, efficiency, flops. Parallelization of fundamental algorithms, e.g., in linear algebra and sorting.
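The performance measures mentioned above have standard definitions: speedup S(p) = T(1)/T(p), the serial run time divided by the run time on p processors, and parallel efficiency E(p) = S(p)/p. A minimal sketch in Python (the timing values are hypothetical, chosen only to illustrate the arithmetic):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E(p) = S(p) / p."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings: a job taking 12.0 s serially runs in 2.0 s on 8 cores.
s = speedup(12.0, 2.0)        # speedup of 6.0
e = efficiency(12.0, 2.0, 8)  # efficiency of 0.75, i.e. 75%
```

Ideal (linear) speedup would give S(p) = p and E(p) = 1; in practice communication and load imbalance keep the efficiency below 1, as in this example.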