Available from Summer 2023: State-of-the-Art HPC Cluster "Marvin"

In October 2022, the University of Bonn purchased a large tier-3 HPC cluster. We are happy to announce that Megware will deliver a true leading-edge high-performance system in early Q2 2023; check out the technical details below. We plan to open Marvin to users at the University of Bonn (and cooperation partners) starting in summer 2023. Users will be asked to go through a low-threshold application procedure, comparable to the one we already use for bonna.

The University of Bonn plans to document the work leading up to the installation and opening of Marvin; please check back here later for links to photos.


Technical details:

MPP partition:

  • 192 MPP nodes
  • # sockets / node = 2
  • # cores / socket = 48
  • Number of cores in total: 18,432
  • Processor: Intel Xeon "Sapphire Rapids", 2.1 GHz
  • RAM / node = 1024 GB DDR5 4800 MHz
  • RAM / core = 10.67 GB (derivation sketched below)
  • Local SSD: 1 × 1.92 TB U.3 NVMe
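
For reference, the per-core and total figures above follow directly from the per-node numbers. A minimal arithmetic sketch in Python, with all inputs copied from the list above:

```python
# Reproduce the derived figures quoted for the MPP partition.
nodes = 192
sockets_per_node = 2
cores_per_socket = 48
ram_per_node_gb = 1024

cores_per_node = sockets_per_node * cores_per_socket  # 96 cores per node
total_cores = nodes * cores_per_node                  # 18,432 cores in total
ram_per_core_gb = ram_per_node_gb / cores_per_node    # ~10.67 GB per core

print(f"total cores:  {total_cores}")
print(f"RAM per core: {ram_per_core_gb:.2f} GB")
```

The same formula gives the per-core figures of the memory partitions below: 2048 GB / 96 ≈ 21.33 GB and 4096 GB / 96 ≈ 42.67 GB.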

Large Memory Nodes:

  • 24 large-memory nodes
  • # sockets / node = 2
  • # cores / socket = 48
  • Processor: Intel Xeon "Sapphire Rapids", 2.1 GHz
  • RAM / node = 2048 GB DDR5 4800 MHz
  • RAM / core = 21.33 GB
  • Local SSD: 1 × 1.92 TB U.3 NVMe

Very Large Memory Nodes:

  • 5 very-large-memory nodes
  • # sockets / node = 2
  • # cores / socket = 48
  • Processor: Intel Xeon "Sapphire Rapids", 2.1 GHz
  • RAM / node = 4096 GB DDR5 4800 MHz
  • RAM / core = 42.67 GB
  • Local SSD: 1 × 3.84 TB U.3 NVMe

GPU partition for highly scalable GPU applications:

  • 32 nodes
  • # sockets / node = 2
  • # cores / socket = 64
  • Processor: AMD EPYC "Milan", 2.0 GHz
  • RAM / node: 1024 GB DDR4 3200 MHz
  • # GPUs / node = 4
  • GPUs: NVIDIA A100 80 GB (connected via NVLink within the node; a quick peer-access check is sketched after this list)
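
Since the A100s are NVLink-connected within a node, direct GPU-to-GPU (peer) transfers should be available. A minimal sketch of how a user might verify this from a job, assuming a Python environment with PyTorch (an assumption; the software environment on Marvin has not been announced yet):

```python
# Minimal sketch: check GPU count and pairwise peer access on a GPU node.
# Assumes PyTorch is available in the job environment (an assumption).
import torch

n = torch.cuda.device_count()  # expected to report 4 in this partition
print(f"visible GPUs: {n}")
for i in range(n):
    for j in range(n):
        if i != j and torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} <-> GPU {j}: peer access available")
```

On these nodes, such peer transfers would go over the intra-node NVLink fabric rather than PCIe.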

GPU partition for Machine Learning:

  • 24 nodes
  • # sockets / node = 2
  • # cores / socket = 64
  • Processor: AMD EPYC "Milan", 2.0 GHz
  • RAM / node: 512 GB DDR4 3200 MHz
  • # GPUs / node = 8
  • GPUs: NVIDIA A40 48 GB (a device-enumeration sketch follows this list)
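
A minimal sketch for checking what a job on this partition actually sees, again assuming PyTorch is available (purely illustrative):

```python
# Minimal sketch: list the GPUs visible to a job, with name and memory size.
import torch

for i in range(torch.cuda.device_count()):  # expected: 8 per node here
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
```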

High Performance File System:

  • FS type: Lustre
  • 5.6 PB for user data (see the sketch after this list)
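
Once the system is open, users can check the available scratch space themselves. A minimal sketch using only the Python standard library; the mount point "/lustre" is a placeholder, as the actual path on Marvin has not been announced:

```python
# Minimal sketch: report capacity and free space of the scratch file system.
import os

st = os.statvfs("/lustre")  # hypothetical mount point, adjust as needed
to_pb = lambda blocks: blocks * st.f_frsize / 1024**5  # blocks -> pebibytes
print(f"capacity: {to_pb(st.f_blocks):.2f} PB, free: {to_pb(st.f_bavail):.2f} PB")
```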

HPC Network Technology:

  • Mellanox InfiniBand NDR, 200 Gb/s

(... and of course HOME (with backup), login servers, service nodes, ...)

(Cool, eh?)


Contact

For all technical questions / support requests, please contact:

support@hpc.uni-bonn.de

Personal contacts

Dirk Barbi

+49 228 73-66136

Michael Kuckertz

+49 228 73-66125

Jan Steiner

+49 228 367649-83

