The centrally-funded BlueBEAR2 cluster, which is available free of charge to all research groups in the University, is based on IBM's iDataPlex servers and consists of:
- 2 dual-processor 8-core (16 cores/node) 64-bit 2.2 GHz Intel Sandy Bridge E5-2660 login nodes with 64 GB of memory, accessed in a round-robin configuration for resilience. Each login node also has an NVIDIA GT218 [NVS 300] GPU for developing GPU programs to be run on the GPGPU service.
- 1 dual-processor 8-core (16 cores/node) 64-bit 2.2 GHz Intel Sandy Bridge E5-2660 login node with 64 GB of memory for applications that make use of a graphical user interface (GUI). Like the other login nodes, this also has an NVIDIA NVS 300 GPU.
- 72 dual-processor 8-core (16 cores/node) 64-bit 2.2 GHz Intel Sandy Bridge E5-2660 worker nodes with 32 GB of memory, giving a total of 1152 cores.
- 2 dual-processor 8-core (16 cores/node) 64-bit 2.2 GHz Intel Sandy Bridge E5-2660 worker nodes with 256 GB of memory, forming a large-memory (SMP) service.
- 2 GPU-assisted compute nodes, each with two 8-core 64-bit 2.2 GHz Intel Sandy Bridge E5-2660 processors (16 cores/node), 32 GB of memory and an NVIDIA Kepler-based Tesla K20 GPU, forming a GPGPU service.
- over 150 TB (raw) of disk space, primarily allocated to BlueBEAR users, managed using IBM's GPFS cluster file system.
The theoretical peak performance of the centrally-funded compute nodes (72 standard + 2 large-memory + 2 GPGPU nodes = 76 nodes, i.e. 1216 cores) is 1216 (cores) * 2.2 (GHz) * 8 (floating-point operations/cycle) = 21.4 TFlop/s.
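The figure above can be reproduced directly from the node counts in the hardware list. The short sketch below is only a sanity check of the arithmetic; the 8 floating-point operations per cycle is the double-precision AVX figure for the Sandy Bridge microarchitecture:

```python
# Sanity check of the quoted theoretical peak performance.
# Node counts and clock speed are taken from the hardware list above;
# 8 double-precision flops/cycle is the AVX figure for Sandy Bridge.
cores_per_node = 16
compute_nodes = 72 + 2 + 2                 # standard + large-memory + GPGPU nodes
cores = compute_nodes * cores_per_node     # 1216 cores
clock_ghz = 2.2
flops_per_cycle = 8
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(cores, round(peak_tflops, 1))        # 1216 21.4
```

Note that this is a theoretical upper bound; sustained application performance will be lower.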
In addition to the centrally-funded cluster there are 28 nodes (432 cores, with a theoretical peak performance of 7.9 TFlop/s) that have been funded by research groups and are hosted in the BEAR cluster. This is an additional resource that can be made available to other BlueBEAR users by arrangement, provided that their work is appropriate for the constraints involved in running on these researcher-owned nodes. Please open a call with the IT Service Desk to find out more about submitting jobs to these nodes.
The interconnect is FDR-10 InfiniBand, which carries both GPFS and MPI traffic.
The operating system is Scientific Linux 6.6.
If you are a member of staff or a research postgraduate, please review the left-hand navigation panel for more information about the applications we provide and how to submit jobs, and see the BlueBEAR Registration information for help on gaining access to the cluster. The User Guidelines describe the initial resources allocated to new users and how to apply for additional resources, such as the ability to run jobs in excess of the default walltime limit, or additional disk space.
Any problems or questions should be logged through the IT Service Desk Portal at http://www.itservicedesk.bham.ac.uk using the "Make a Request" option and selecting "other BEAR Request" from the Research Computing Service choices. This ensures that requests reach the most appropriate person and are tracked. It also allows common areas of concern to be identified and addressed, in addition to resolving the individual support requests.
For more general information or comments, please contact us at firstname.lastname@example.org
If you have used BlueBEAR to help with your research and the work has been published, please fill in this form, which will help us to publicise your work and greatly assist us in making the case to the University for ongoing HPC resources. We also appreciate acknowledgement of this service in any publications. Appropriate wording would be:
The computations described in this paper were performed using the University of Birmingham's BlueBEAR HPC service, which provides a High Performance Computing service to the University's research community. See http://www.birmingham.ac.uk/bear for more details.
Further documentation on how to use and administer BlueBEAR can be found in these How To pages.