In November 2018 we announced the imminent arrival at the University of the largest IBM POWER9 AI cluster in the UK. If you are a researcher at the University interested in using this powerful AI resource then please contact us. We are particularly keen to hear from people who use TensorFlow, GROMACS, LAMMPS, or other GPU-accelerated software.
This service consists of two parts:
- Three POWER9 HPC nodes in BlueBEAR that any researcher in the University may apply to access to run GPU-accelerated software. These nodes are in the bbpowergpu QOS.
- Seven POWER9 HPC nodes in the CaStLeS part of BlueBEAR. Access to these nodes is available to Life Sciences researchers. For further information, including how to apply for access, see the CaStLeS overview. These nodes are in the castlespowergpu QOS.
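Once access has been granted, a job can target these nodes by specifying the relevant QOS in its Slurm submission script. The following is a minimal sketch, assuming Slurm; the account name, time limit, and GPU count are placeholders to adapt to your own project:

```shell
#!/bin/bash
#SBATCH --qos=bbpowergpu         # or castlespowergpu for CaStLeS users
#SBATCH --account=_projectname_  # placeholder: your BlueBEAR project account
#SBATCH --gres=gpu:1             # request one of the node's four V100 GPUs
#SBATCH --time=1:0:0

# Print GPU details to confirm the allocation before running real work
nvidia-smi
```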
Each of our BEAR AI systems has:
- Dual IBM POWER9 CPUs with 18 cores each (36 physical cores in total), which currently present as 144 logical cores using four-way simultaneous multithreading (SMT4)
- Four NVIDIA Tesla V100, 16GB Tensor Core GPUs
- 1 TB system memory
- High speed NVIDIA NVLink interconnect fully meshed between the GPUs and also into the system memory
- 100Gbps EDR InfiniBand interconnect to other nodes and storage systems
AI Disk Space
If a job uses significant I/O (Input/Output) then files should be created using the AI disk space, which is available on the POWER9 nodes at /scratch. Information on using this disk space is detailed on the job submission page.
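As an illustration, an I/O-heavy job might stage its working files under /scratch and copy results back to permanent storage afterwards. The per-job directory layout and the destination path below are assumptions for the sketch, not a documented site convention:

```shell
# Create a per-job working directory on the AI disk space
# (the ${USER}/${SLURM_JOB_ID} layout is an assumption, not a site rule)
WORKDIR="/scratch/${USER}/${SLURM_JOB_ID}"
mkdir -p "${WORKDIR}"
cd "${WORKDIR}"

# ... run the I/O-intensive workload here ...

# Copy results back to permanent project storage (placeholder path)
cp -r results /path/to/project/storage/

# Tidy up the scratch area when the job is done
cd && rm -rf "${WORKDIR}"
```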
We have installed a range of applications on the BEAR AI nodes, but the software available is usually constrained to applications that can make good use of the available GPUs. Applications that are available will list EL8-power9 as one of the supported architectures for a specific version on the BEAR Applications site. If the software you are looking to use is not available on the BEAR AI nodes then please open a Request New BEAR Software request to discuss this, or check whether the software is available on the BlueBEAR GPU nodes.
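In practice, applications built for these nodes are loaded with the usual environment module commands. The version string below is illustrative only; check the BEAR Applications site for the versions actually built for the EL8-power9 architecture:

```shell
# List the installed versions of an application on the current node
module avail TensorFlow

# Load one (illustrative version string, not a guaranteed install)
module load TensorFlow/2.3.1
```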