High-Performance Computing Cluster dirac

Purpose

The HPC cluster dirac (named after Paul Dirac) is dedicated to simulation and model calculations.

Configuration

The dirac cluster consists of the access and control computer dirac-meister, the fileserver dirac-mester, and 32 active computing nodes, providing a total of 1832 CPU cores and 4.5 TB of main memory.
For mass storage, the access and control computer contains four internal disks (12 TB) and uses an external Fibre Channel RAID array of 12 disks (48 TB).
The computing units are interconnected with Gigabit Ethernet and 10 Gbit/s InfiniBand via active switches.

Four computing nodes, named dinux7, dinux6, dinux5 and dinux4, are operated as universal Linux servers for interactive jobs only. The remaining computing nodes are managed by the batch system Open Grid Scheduler/Grid Engine; a sketch of a job script follows below.
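To give a first impression of the batch operating mode, a minimal Grid Engine job script might look as follows. The job name, the parallel environment name "mpi", the slot count and the executable are hypothetical placeholders; the environments actually configured on dirac are not listed here.

    #!/bin/bash
    #$ -N example_job       # job name (placeholder)
    #$ -cwd                 # run the job in the current working directory
    #$ -pe mpi 16           # parallel environment and slot count (assumed names/values)
    #$ -l h_rt=01:00:00     # requested wall-clock time of one hour

    ./my_simulation         # hypothetical executable

Such a script would be submitted with qsub and monitored with qstat, both standard Grid Engine commands.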

The hardware and user environment are optimized for parallel calculations.

Status

The cluster has been upgraded twelve times since 2007.

Software

The installed software is based on the 64-bit operating system and software distribution openSUSE Linux 13.1 and additionally provides optimizing compilers, mathematical libraries, parallelization environments, and common scientific applications; a small compile-and-run example follows below.
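To illustrate the use of the parallelization environments, a small MPI program might be compiled and briefly test-run on one of the interactive nodes as sketched here. The compiler wrapper mpicc and the launcher mpirun belong to common MPI distributions; which MPI stack is installed on dirac is not stated above, and hello_mpi.c is a hypothetical source file.

    # compile with optimization via the MPI compiler wrapper
    mpicc -O2 -o hello_mpi hello_mpi.c

    # short interactive test with 4 processes; production runs belong in the batch system
    mpirun -np 4 ./hello_mpi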

Availability

Everyone with a valid HZB ID may use the cluster without any further restrictions.

A terminal connection via Secure Shell (ssh on Linux/Unix/Apple OS X, PuTTY on Windows) to dirac-meister is all you need; an example is shown below. Because of the batch operating mode, the computing nodes cannot be reached directly via Secure Shell.
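A typical login might look like this; the fully qualified hostname is an assumption, since only the machine name dirac-meister is given above, and username stands for your HZB ID.

    # hostname domain assumed; replace username with your HZB ID
    ssh username@dirac-meister.helmholtz-berlin.de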

Assistance for usage

We are happy to answer your questions. Please read the usage hints and rules first. You may also write an e-mail to our central help desk at help@helmholtz-berlin.de.