The HPC cluster dirac (named after Paul Dirac) is dedicated to simulation and model calculations.
The dirac cluster consists of the access and control computer dirac-meister, the fileserver dirac-mester, and 32 active computing nodes, providing a total of 1832 CPU cores and 5 TB of main memory.
For mass storage, the access and control computer contains four internal disks (12 TB) and uses an external Fibre Channel RAID array of 12 disks (144 TB).
The computing nodes are interconnected via active switches with Gigabit Ethernet and 10 Gbit/s InfiniBand.
Five computing nodes, named dinux10, dinux9, dinux8, dinux7 and dinux6, are operated as general-purpose Linux servers for interactive jobs only. The remaining computing nodes are managed by the batch system Open Grid Scheduler/Grid Engine.
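Jobs for the batch nodes are described in a job script and handed to Grid Engine for scheduling. The following is a minimal sketch; the job name, parallel environment name, slot count, run-time limit and simulation command are illustrative assumptions, not site defaults.

```shell
#!/bin/bash
# Minimal Grid Engine job script (a sketch; resource names are assumptions).
#$ -N example_job        # job name shown in the queue
#$ -cwd                  # run the job in the current working directory
#$ -pe smp 8             # hypothetical parallel environment with 8 slots
#$ -l h_rt=01:00:00      # wall-clock time limit of one hour

# Placeholder for the actual simulation command.
./my_simulation --input data.in
```

Such a script would be submitted with `qsub job.sh`; `qstat` lists the state of queued and running jobs, and `qdel` removes a job.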
Both the hardware and the user environment are optimized for parallel calculations.
The cluster has been upgraded fourteen times since 2007.
The installed software runs on the 64-bit operating system and software distribution openSUSE Linux 13.1 and additionally provides optimizing compilers, mathematical libraries, parallelization environments and common scientific applications.
Everyone with a valid HZB ID may use the cluster without any further restrictions.
A terminal connection via Secure Shell (ssh, included with Linux, Apple macOS and Windows 10; PuTTY is additionally available for Windows) to dirac-meister is all you need. Because of the batch operating mode, the computing nodes are not directly reachable via Secure Shell.
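A login session could look like the following sketch; the account name is a placeholder for your HZB ID, and the short hostname dirac-meister is used as given in the text (the fully qualified domain name may differ).

```shell
# Log in to the access and control computer (replace "username" with your HZB ID).
ssh username@dirac-meister

# Copy input data to, and results from, the cluster with scp.
scp data.in username@dirac-meister:~/
scp username@dirac-meister:~/results.dat .
```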
Assistance with usage