This lesson is still being designed and assembled (Pre-Alpha version)

Introduction to High Performance Computing for Life Scientists

Key Points

Welcome
  • We should all understand and follow the ARCHER2 Code of Conduct to ensure this course is conducted in the best teaching environment.

  • The course will be flexible to best meet the learning needs of the attendees.

  • Feedback is an essential part of our training to allow us to continue to improve and make sure the course is as useful as possible to attendees.

Who we are: Introducing BioExcel, PRACE, and EPCC training
  • To be determined

LECTURE: High-Performance Computing (HPC)
  • Computer clusters enable users to run larger, more complex jobs in a reasonable time.

  • HPC systems are state-of-the-art clusters composed of a large number of powerful computers linked by very fast interconnects.

PRACTICAL: Connecting to ARCHER2 and transferring data
  • ARCHER2’s login address is login.archer2.ac.uk.

  • The password policy for ARCHER2 is well documented.

  • There are a number of ways to transfer data to/from ARCHER2.
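As a sketch of these key points, logging in and transferring data typically look like the following (the username, project code, and file names are placeholders, not real accounts or paths):

```shell
# Log in to ARCHER2 (replace 'username' with your ARCHER2 account name)
ssh username@login.archer2.ac.uk

# Copy a file from your local machine to ARCHER2 with scp
# (the destination path is a placeholder for your project work directory)
scp results.tar.gz username@login.archer2.ac.uk:/work/project/project/username/

# rsync only copies files that have changed, which helps with repeated transfers
rsync -av data/ username@login.archer2.ac.uk:/work/project/project/username/data/
```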

LECTURE: HPC Architectures [pre-recorded]
  • To be determined

PRACTICAL: Overview of the ARCHER2 system and modules
  • ARCHER2 consists of high performance login nodes, compute nodes, storage systems and interconnect.

  • There is a wide range of software available on ARCHER2.

  • The system is based on standard Linux with command line access.

  • Software is available through modules.
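A minimal sketch of working with modules on the command line (the module name `gromacs` here is illustrative; the exact names and versions available on ARCHER2 may differ):

```shell
module avail          # list the software modules available on the system
module load gromacs   # load a module to make its software available (name assumed)
module list           # show which modules are currently loaded
module unload gromacs # remove a module from your environment
```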

LECTURE: Batch systems and parallel application launchers [pre-recorded]
  • To be determined

PRACTICAL: Batch Systems and ARCHER2 Slurm Scheduler
  • ARCHER2 uses the Slurm scheduler.

  • srun is used to launch parallel executables in batch job submission scripts.

  • There are a number of different partitions (queues) available.
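These points can be illustrated with a minimal Slurm batch script; the account code `t01` and the executable name `./my_program` are placeholders you would replace with your own budget code and program:

```shell
#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:10:00
#SBATCH --partition=standard   # one of the available partitions (queues)
#SBATCH --qos=standard
#SBATCH --account=t01          # replace with your own budget code

# srun launches the parallel executable across the allocated cores
srun --hint=nomultithread ./my_program
```

You would submit this script with `sbatch myscript.slurm` and monitor it with `squeue -u $USER`.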

LECTURE: Parallel Computing
  • To be determined

PRACTICAL: HMMER (1 of 2)
LECTURE: Review of Day 1
  • To be determined

Welcome
  • We should all understand and follow the ARCHER2 Code of Conduct to ensure this course is conducted in the best teaching environment.

  • The course will be flexible to best meet the learning needs of the attendees.

  • Feedback is an essential part of our training to allow us to continue to improve and make sure the course is as useful as possible to attendees.

LECTURE: Measuring Parallel Performance
  • To be determined

PRACTICAL: HMMER (2 of 2)
LECTURE: Computational Building Blocks: Software
  • To be determined

LECTURE: Computational Building Blocks: Hardware
  • To be determined

PRACTICAL: Benchmarking Molecular Dynamics Performance Using GROMACS 1
  • Larger simulation systems scale better to high core and node counts than smaller systems.
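One way to explore this scaling behaviour is to submit the same benchmark job at several node counts; this is only a sketch, and the job script name `gromacs_bench.slurm` is an assumed placeholder:

```shell
# Submit the same GROMACS benchmark on 1, 2, 4 and 8 nodes
# and compare the reported performance (e.g. ns/day) across runs
for nodes in 1 2 4 8; do
    sbatch --nodes=${nodes} --job-name=gmx_bench_${nodes} gromacs_bench.slurm
done
```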

LECTURE: Parallel Models to Exploit Parallelism
  • To be determined

PRACTICAL: Benchmarking Molecular Dynamics Using GROMACS 2
  • Combining MPI with OpenMP (hybrid parallelism) can have a significant effect on performance.

  • When running hybrid jobs, placement across NUMA regions is important.
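A hybrid MPI+OpenMP job can be sketched as the fragment below; the rank/thread split and the input file `bench.tpr` are assumptions for illustration (`gmx_mpi mdrun` is the MPI build of GROMACS):

```shell
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8     # 16 MPI ranks x 8 OpenMP threads = 128 cores

# Each rank spawns this many OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores

# block:block placement keeps each rank's threads together,
# which helps confine them to a single NUMA region
srun --distribution=block:block --hint=nomultithread gmx_mpi mdrun -s bench.tpr
```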

LECTURE: Building and compiling software: from algorithm to executable
  • To be determined

LECTURE: Review of Day 2
  • To be determined

Welcome
  • We should all understand and follow the ARCHER2 Code of Conduct to ensure this course is conducted in the best teaching environment.

  • The course will be flexible to best meet the learning needs of the attendees.

  • Feedback is an essential part of our training to allow us to continue to improve and make sure the course is as useful as possible to attendees.

PRACTICAL: Benchmarking Molecular Dynamics Using GROMACS 3
LECTURE: Pipelines and Workflows
  • To be determined

PRACTICAL: QM/MM Simulations Using CP2K
  • To be determined

LECTURE: The future of HPC
  • To be determined

PRACTICAL: QM/MM Simulations Using CP2K
  • To be determined

LECTURE: The HPC landscape in the EU and UK
  • To be determined

LECTURE: Course review and where next?
  • To be determined