Academic Year 2022 (Reiwa 4), University of Tsukuba, Graduate School of Science and Technology, Common Specialized Foundation Course
「High Performance Parallel Computing Technology for Computational Sciences」(0AH0209)
「Japan-Korea HPC Winter School 2022」

Course Overview

High performance computing is the fundamental technology that supports today's large-scale scientific simulations. It covers a wide variety of hardware and software issues in high-end computing, such as high-speed computation, high-speed networking, large-scale memory and disk storage, fast numerical algorithms, programming schemes, and the system software that supports them. Current advanced supercomputers are large-scale parallel processing systems, so even application users now need to understand these technologies to a certain level in order to use them effectively. In this class, we focus on the basic technology of high-end computing systems, programming, algorithms, and performance tuning for application users who aim to use these systems for practical simulation and computing.


Lecture Day and Location

Lecture Days: February 20 (Mon) and 21 (Tue), 2023
Location: Hybrid (onsite and online)
Onsite: International Workshop Room, Center for Computational Sciences (Access)
Online: Zoom (the Zoom link will be sent by email)
Notice: This intensive course is also held as the Japan-Korea HPC Winter School 2022.

Schedule (Click on the lecture name to view the lecture materials)

Time          | Feb. 20 (Mon)                                                      | Feb. 21 (Tue)
09:00 - 10:30 | Fundamentals of HPC and Parallel Processing                        | Parallel Programming 2: OpenMP
10:45 - 12:15 | Parallel Processing Systems                                        | Parallel Numerical Algorithm 1
13:30 - 15:00 | Parallel Programming 1: MPI                                        | Parallel Numerical Algorithm 2
15:15 - 16:45 | Optimization 1: Communication Optimization / Cygnus supercomputer | Optimization 2: Computation Optimization

Contents (Click on the lecture name to view the lecture materials)

No. | Lecture name | Contents | Instructor
1 | Fundamentals of HPC and Parallel Processing | Amdahl's law (see sketch below), Parallelization methods (EP, Data parallelism, Pipeline parallelism), Communication, Synchronization, Parallelization efficiency, Load balance | Taisuke Boku
2 | Parallel Processing Systems | Parallel processing systems (SMP, NUMA, Cluster, Grid, etc.), Memory hierarchy, Memory bandwidth, Network, Communication bandwidth, Latency | Ryohei Kobayashi
3 | Parallel Programming 1: MPI | Parallel programming with the MPI message-passing library (see sketch below) | Norihisa Fujita
4 | Optimization 1: Communication Optimization / Cygnus supercomputer | Optimization techniques and performance evaluation of parallel programming on parallel processing systems | Osamu Tatebe
5 | Parallel Programming 2: OpenMP | Parallel programming models and the OpenMP directive-based programming interface (see sketch below) | Akira Nukada
6 | Parallel Numerical Algorithm 1 | Krylov subspace iterative methods and their parallelization | Hiroto Tadano
7 | Parallel Numerical Algorithm 2 | The Fast Fourier Transform (FFT) and its parallelization | Daisuke Takahashi
8 | Optimization 2: Computation Optimization | Program optimization techniques (Register blocking, Cache blocking, Memory allocation, etc.) and performance evaluation on a single compute node of a parallel processing system (see sketch below) | Daisuke Takahashi
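For reference, the following is the standard textbook form of Amdahl's law covered in Lecture 1; it is not taken from the lecture materials. Here \alpha denotes the fraction of the execution time that can be parallelized and p the number of processors:

    \[
      S(p) = \frac{1}{(1 - \alpha) + \alpha / p}
    \]

For example, with \alpha = 0.9 and p = 16 the speedup is 1 / (0.1 + 0.9/16) = 6.4, and even with unlimited processors it cannot exceed 1 / (1 - \alpha) = 10, which is why reducing the serial fraction is the first target of parallelization.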
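As a minimal, self-contained sketch of the style of MPI programming treated in Lecture 3 (an illustration only, not the lecture material), the program below sums the ranks of all processes with a reduction; only standard MPI calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Reduce, MPI_Finalize) are used:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime         */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my process ID (0 .. size-1)   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes     */

        /* Each process contributes its own rank; rank 0 receives the global sum. */
        int local = rank, sum = 0;
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, sum);

        MPI_Finalize();                          /* shut down the MPI runtime     */
        return 0;
    }

A typical build-and-run sequence would be "mpicc sum_ranks.c -o sum_ranks" followed by "mpirun -np 4 ./sum_ranks" (the file name is arbitrary).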
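Similarly, a minimal sketch of shared-memory parallelization with OpenMP (Lecture 5): a parallel for loop with a reduction clause. This is a generic illustration, not the instructor's material.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* Iterations are divided among the threads; the reduction clause
           merges the per-thread partial sums into a single result. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("max threads = %d, harmonic sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }

Compile with an OpenMP-capable compiler, e.g. "gcc -fopenmp", and control the thread count with the OMP_NUM_THREADS environment variable.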
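Finally, a generic illustration of the cache blocking technique listed under Lecture 8, shown for matrix multiplication (C = A * B, row-major, with C assumed to be zero-initialized by the caller). The matrix size N and block size B are example values, not taken from the lecture.

    #define N 1024   /* matrix dimension (example value)               */
    #define B 64     /* block size; chosen so that three B x B tiles   */
                     /* fit in the cache level being targeted          */

    void matmul_blocked(const double *a, const double *b, double *c)
    {
        /* Loop over B x B tiles so that the tiles of a, b, and c stay
           resident in cache while they are reused, instead of streaming
           whole rows and columns through memory on every iteration. */
        for (int ii = 0; ii < N; ii += B)
            for (int kk = 0; kk < N; kk += B)
                for (int jj = 0; jj < N; jj += B)
                    for (int i = ii; i < ii + B; i++)
                        for (int k = kk; k < kk + B; k++)
                            for (int j = jj; j < jj + B; j++)
                                c[i * N + j] += a[i * N + k] * b[k * N + j];
    }

Register blocking applies the same idea one level down: small fixed-size sub-tiles are kept in registers to reduce loads and stores in the innermost loop.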