The Computer Service
The group runs the institute's central mail, print, software, backup, and web servers, as well as file servers for the various departments and Max Planck Research Groups; most of these run the Linux operating system. Backup remains based on IBM Spectrum Protect (formerly Tivoli Storage Manager); the total backup data volume currently approaches 870 TB, 240 TB of which is archived data, and the central Storage Area Network has a capacity of 800 TB. The estimated total number of desktop and data-acquisition PCs remains around 750. Of these, about 75% run Windows and 20% run Linux; the number of Macs is rising slowly.
The number of High Performance Computing (HPC) nodes rose to 551, with 26,612 computing cores and 478 TB of accumulated memory. The electricity consumption associated with HPC rose to 170 kW. In response, the server rooms 6B13 (infrastructure) and 2E2 (High Performance Computing and networking) were optimised for energy and cooling efficiency using water-cooled racks and rear-door heat exchangers. These installations use the 1.3 MW in-house process cooling water plant. The achieved temperature spread of 20/25°C permits free cooling throughout eight months of the year. During the current energy crisis the IT group was asked to reduce energy consumption by 25%. This goal can be reached by shutting down older hardware, by energy optimisations on servers and HPC nodes, and by reorganisations such as opening the general-purpose Gauss cluster to the whole institute, thus reducing the number of group-internal computational resources.
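The size of the requested saving follows directly from the figures above. A back-of-the-envelope check (the numbers come from the text; how the saving splits across the individual measures is not specified):

```python
# Back-of-the-envelope check of the 25% savings target.
# 170 kW and 25% are from the report; the rest is simple arithmetic.
hpc_load_kw = 170.0          # current HPC electricity consumption
target_reduction = 0.25      # requested saving

target_load_kw = hpc_load_kw * (1 - target_reduction)
savings_kw = hpc_load_kw - target_load_kw

print(f"target load: {target_load_kw:.1f} kW, required saving: {savings_kw:.1f} kW")
# prints: target load: 127.5 kW, required saving: 42.5 kW
```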
For the Alavi group, a distributed 1.75 PB filesystem, accessible both from the Stuttgart campus and from the MP/CDF computing and data center in Garching, is operational over a dedicated 10 GbE fiber connection. The 100 computing nodes of the Alavi department hosted at the MP/CDF were shut down to save energy, as were half of the compute nodes installed in 2014 when the Alavi group was founded.
The Xen virtualisation platform for central services was updated and now relies on a Ceph-based distributed storage backend. Services can move freely between two locations, the main building and the new High Precision Lab, to ensure high availability of the services; the high-availability storage itself is distributed over three locations.
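Such a three-site layout is typically expressed as a Ceph CRUSH placement rule that forces each replica onto a different site, so the pool survives the loss of an entire location. A minimal sketch (the rule name, bucket types, and cluster layout are illustrative; the report does not describe the actual CRUSH map):

```
# Hypothetical CRUSH rule: place each replica in a distinct site-level
# bucket so no two copies share a location.
rule replicated_three_sites {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

With a replication factor of three and three `datacenter` buckets in the hierarchy, each location then holds exactly one copy of every object.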
A new firewall and VPN remote-access strategy was implemented, together with significant changes to the institute's Identity Management (IdM). The group's focus here was to rely on open industry standards wherever possible in order to avoid vendor lock-in. Where needed, proprietary systems such as Microsoft Active Directory were provisioned, with external help, from open-source systems such as OpenLDAP in order to permit software and patch roll-out in Windows environments. These measures focus on protecting the institute's data and infrastructure while enabling scientists to access them from abroad. International collaboration and the inherent non-locality of science make these tasks highly complex.
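Provisioning Active Directory from an authoritative OpenLDAP tree essentially means mapping POSIX account attributes onto their AD counterparts. A minimal, self-contained sketch of such a mapping step (the attribute names follow the standard schemas, but the function, the surrounding pipeline, and the placeholder domain are hypothetical):

```python
# Illustrative mapping from an OpenLDAP posixAccount entry to the Active
# Directory attributes needed for Windows log-on and software roll-out.
# "example.org" is a placeholder, not the institute's real domain.
def ldap_to_ad(entry: dict, domain: str = "example.org") -> dict:
    uid = entry["uid"]
    return {
        "sAMAccountName": uid,                   # pre-Windows-2000 log-on name
        "userPrincipalName": f"{uid}@{domain}",  # modern log-on name
        "displayName": entry.get("cn", uid),
        "mail": entry.get("mail", f"{uid}@{domain}"),
    }

print(ldap_to_ad({"uid": "jdoe", "cn": "Jane Doe", "mail": "jdoe@example.org"}))
```

In a real deployment, a synchronisation job would read such entries from OpenLDAP and write the mapped attributes into Active Directory; the sketch only shows the attribute translation itself.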
User training to detect and resist cyber attacks has been intensified. The rules for remote access have been tightened, and two-factor authentication is being implemented.
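To illustrate the mechanism behind such a second factor: time-based one-time passwords (TOTP, RFC 6238, built on HOTP from RFC 4226) can be computed with only the Python standard library. This is a sketch of the standard algorithm, not the institute's actual implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble selects the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from the current time."""
    return hotp(secret, int(unix_time // step), digits)

# RFC test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))               # prints: 755224
print(totp(b"12345678901234567890", 59, digits=8))    # prints: 94287082
```

The server and the user's token compute the same code from a shared secret and the current time window, so a stolen password alone no longer grants access.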