In the last few months, ten new computing modules have been delivered to ALICE and LHCb as part of their data-centre upgrades for Runs 3 and 4. These modular data centres are provided by the Belgian company Automation. They will equip CERN with two modern data centres that use indirect free-air cooling and are designed to be highly energy efficient, with an overhead energy consumption of less than 10%.
At ALICE, two modules have been delivered, installed and connected; two more will arrive in July 2019. Each module includes 18 racks, representing a total power of 2.1 MW. These will constitute ALICE’s new processing farm for Runs 3 and 4, and will host up to 750 servers equipped with graphics processing units (GPUs).
At LHCb, four modules have already been delivered. Six modules will ultimately be installed, together hosting 132 racks for a total power of more than 2 MW. The two central modules will be home to the readout system for Run 3, comprising about 500 servers fitted with special readout cards developed by LHCb and also used by ALICE. Over 14 000 optical fibres enter these two modules from the detector, carrying about 40 terabits per second of raw data, which are distributed among the readout servers (each module can host more than 1000 servers). The remaining four modules will host the servers of the high-level trigger farm. LHCb will deploy at least 2000 servers and at least 20 PB of storage at the start of Run 3.
The flexible and cost-efficient implementation of the data-centre modules made it possible to include headroom in rack space and cooling capacity for future expansions of LHCb’s computing infrastructure. During LS2 and Run 3, the modules will be shared with CERN’s IT department to make efficient use of the facility. CERN IT has already installed and put into operation 780 servers relocated from the Wigner data centre.