High performance computing (HPC) is the core technology supporting today's large-scale scientific simulations. It covers a wide range of hardware and software issues in high-end computing, such as high-speed computation, high-speed networking, large-scale memory and disk storage, fast numerical algorithms, programming schemes, and the system software that supports them. Current advanced supercomputers are large-scale parallel processing systems, and nowadays even application users need to understand these technologies to a certain degree in order to use such systems effectively. This class focuses on the basic technologies of high-end computing systems, programming, algorithms, and performance tuning for application users who aim to use these systems for practical simulation and computing.
| Lecture Day | February 3 (Mon) - 4 (Tue), 2020 |
|---|---|
| Location | Center for Computational Sciences (計算科学研究センター) (Access) |
| Room | Feb. 3 (Mon): Meeting Room A (会議室A) / Feb. 4 (Tue): International Workshop Room (国際ワークショップ室) |
TWINS registration is available from January 22 (Wed) through February 2 (Sun).
| Time | Feb. 3 (Mon) | Feb. 4 (Tue) |
|---|---|---|
| 09:00 - 10:30 | Fundamentals of HPC and Parallel Processing | Optimization 1: Computation Optimization |
| 10:45 - 12:15 | Parallel Processing Systems | Optimization 2: Communication Optimization |
| 13:30 - 15:00 | Parallel Programming 1: OpenMP | Parallel Numerical Algorithm 1 |
| 15:15 - 16:45 | Parallel Programming 2: MPI | Parallel Numerical Algorithm 2 |
| No. | Lecture name | Contents | Instructor |
|---|---|---|---|
| 1 | Fundamentals of HPC and Parallel Processing | Amdahl's law (sketch below), parallelization methods (EP, data parallelism, pipeline parallelism), communication, synchronization, parallelization efficiency, load balance. | Taisuke Boku |
| 2 | Parallel Processing Systems | Parallel processing system architectures (SMP, NUMA, cluster, grid, etc.), memory hierarchy, memory bandwidth, networks, communication bandwidth and latency. | Taisuke Boku |
| 3 | Parallel Programming 1: OpenMP | Parallel programming models and shared-memory programming with OpenMP (sketch below). | Jinpil Lee (RIKEN R-CCS) |
| 4 | Parallel Programming 2: MPI | Distributed-memory programming with the MPI (MPI-2) message passing interface (sketch below). | Claus Aranha |
| 5 | Optimization 1: Computation Optimization | Program optimization techniques (register blocking, cache blocking, memory allocation, etc.) and performance evaluation on a single compute node of a parallel processing system (sketch below). | Daisuke Takahashi |
| 6 | Optimization 2: Communication Optimization | Optimization techniques and performance evaluation of parallel programs on parallel processing systems. | Osamu Tatebe |
| 7 | Parallel Numerical Algorithm 1 | The fast Fourier transform (FFT) and its parallelization methods (sketch below). | Daisuke Takahashi |
| 8 | Parallel Numerical Algorithm 2 | Krylov subspace iterative methods and their parallelization (sketch below). | Hiroto Tadano |
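The sketches below are illustrative only, not material from the lectures; they indicate the flavor of the techniques named in the table. Lecture 1 is built around Amdahl's law, which bounds the speedup obtainable when only a fraction f of a program's execution time can be parallelized over p processors:

```latex
% Amdahl's law: speedup on p processors when a fraction f of the
% serial execution time is parallelizable and the rest stays serial.
S(p) = \frac{1}{(1 - f) + \dfrac{f}{p}},
\qquad
\lim_{p \to \infty} S(p) = \frac{1}{1 - f}
```

Even with f = 0.9 the speedup can never exceed 10, no matter how many processors are used, which is why the serial fraction and load balance dominate the discussion of parallelization efficiency.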
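Lecture 3 introduces OpenMP for shared-memory parallelism. A minimal sketch (the array size and variable names are assumptions chosen for illustration) parallelizes a reduction with a single directive:

```c
/* Minimal OpenMP sketch: parallel reduction over an array.
 * Compile with e.g.:  gcc -fopenmp sum.c -o sum                     */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++) a[i] = 1.0;

    /* Each thread sums a chunk of the iterations; the partial sums
     * are combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```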
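Lecture 4 introduces MPI, the message passing interface, for distributed-memory parallelism. A correspondingly minimal sketch (again only an assumed illustration) combines one value per process with the standard collective MPI_Reduce:

```c
/* Minimal MPI sketch: each process contributes its rank, and the sum
 * is collected on rank 0.
 * Build and run with e.g.:  mpicc sum.c -o sum && mpirun -np 4 ./sum */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Combine the per-process value "rank" into "total" on rank 0. */
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```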
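Lecture 5 names register and cache blocking among the single-node optimization techniques. The sketch below shows the basic idea of cache blocking for matrix multiplication: work on tiles small enough to stay resident in cache so that loaded data is reused. The matrix dimension and tile size are assumed values; in practice the tile size is tuned to the cache of the target node.

```c
/* Cache-blocked matrix multiplication C = A * B for square N x N
 * matrices stored row-major.  Assumes N is a multiple of BS. */
#define N  1024          /* matrix dimension (assumed value) */
#define BS 64            /* block/tile size  (assumed value) */

void matmul_blocked(const double *A, const double *B, double *C)
{
    for (int i = 0; i < N * N; i++) C[i] = 0.0;

    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                for (int i = ii; i < ii + BS; i++)
                    for (int k = kk; k < kk + BS; k++) {
                        double aik = A[i * N + k];  /* keep A(i,k) in a register */
                        for (int j = jj; j < jj + BS; j++)
                            C[i * N + j] += aik * B[k * N + j];
                    }
}
```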
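Lecture 7 treats the fast Fourier transform and its parallelization. A minimal serial radix-2 Cooley-Tukey sketch, not the lecture's implementation, shows the recursive even/odd decomposition on which parallel FFT formulations build:

```c
/* Recursive radix-2 Cooley-Tukey FFT (serial sketch).
 * The length n must be a power of two. */
#include <complex.h>
#include <math.h>
#include <stdlib.h>

void fft(double complex *x, size_t n)
{
    if (n < 2) return;

    /* split into even- and odd-indexed halves */
    double complex *even = malloc(n / 2 * sizeof *even);
    double complex *odd  = malloc(n / 2 * sizeof *odd);
    for (size_t i = 0; i < n / 2; i++) {
        even[i] = x[2 * i];
        odd[i]  = x[2 * i + 1];
    }

    fft(even, n / 2);
    fft(odd,  n / 2);

    /* butterfly: combine with twiddle factors exp(-2*pi*i*k/n) */
    const double pi = acos(-1.0);
    for (size_t k = 0; k < n / 2; k++) {
        double complex t = cexp(-2.0 * I * pi * (double)k / (double)n) * odd[k];
        x[k]         = even[k] + t;
        x[k + n / 2] = even[k] - t;
    }

    free(even);
    free(odd);
}
```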
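Lecture 8 treats Krylov subspace iterative methods. The conjugate gradient (CG) method is the standard example for symmetric positive definite systems; the dense matrix-vector product below is only to keep the sketch self-contained, whereas practical parallel codes distribute a sparse matrix and parallelize the matrix-vector products and dot products.

```c
/* Conjugate gradient (CG) sketch for a symmetric positive definite
 * system A x = b, with A stored as a dense row-major n x n matrix. */
#include <math.h>
#include <stdlib.h>

static double dot(const double *x, const double *y, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

/* y = A * x */
static void matvec(const double *A, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int j = 0; j < n; j++) s += A[i * n + j] * x[j];
        y[i] = s;
    }
}

void cg(const double *A, const double *b, double *x, int n,
        int max_iter, double tol)
{
    double *r  = malloc(n * sizeof *r);   /* residual r = b - A x */
    double *p  = malloc(n * sizeof *p);   /* search direction     */
    double *Ap = malloc(n * sizeof *Ap);

    matvec(A, x, Ap, n);
    for (int i = 0; i < n; i++) { r[i] = b[i] - Ap[i]; p[i] = r[i]; }
    double rr = dot(r, r, n);

    for (int it = 0; it < max_iter && sqrt(rr) > tol; it++) {
        matvec(A, p, Ap, n);
        double alpha = rr / dot(p, Ap, n);
        for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r, n);
        double beta = rr_new / rr;
        for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }

    free(r); free(p); free(Ap);
}
```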