High Performance Computing

What is HPC and why should I use it?

Many researchers use computers, but desktop machines only go so far. If your overnight compute jobs run into the next day, if your research waits for a weekend to run, or if your computer is limiting the progress of your research, then high performance computing (HPC) is the solution.

High performance computing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines including physics, biology, chemistry, geosciences, climate sciences, engineering and many others.

Intersect encourages researchers who are interested in using HPC to contact us for advice and support at hpc_support@intersect.org.au.

Intersect’s HPC facilities

Intersect has a partner share in the peak facilities at the National Computational Infrastructure (NCI), based at the Australian National University. In addition, Intersect manages a state facility, Orange, hosted at IC2 in Sydney.

Orange

Intersect's HPC facility 'Orange' was commissioned in March 2013. The SGI 30+ TFlop distributed-memory cluster provides a greater than 25-fold increase in compute power and a fivefold increase in disk capacity over the previous system. It features 100 cluster nodes with 1,600 cores powered by the Intel® Xeon® E5-2600 processor series, along with 200 TB of local scratch disk space and 101 TB of usable shared storage delivering 25 TFlops. More details are available on the Intersect website.

At NCI

NCI's new peak system, 'Raijin', is a Fujitsu Primergy high-performance, distributed-memory cluster based on Intel Sandy Bridge 8-core processors (2.6 GHz), comprising:

  • 57,472 cores in the compute nodes;
  • approximately 160 TBytes of main memory;
  • InfiniBand FDR interconnect; and
  • approximately 10 PBytes of usable fast file system (for short-term scratch space).

This provides a:

  • peak performance of approximately 1.2 PFlops, 8.6 times that of the previous peak system (see the sketch below);
  • sustained performance of approximately 6.7 times that of the previous peak system (i.e., an aggregate SPECfp_rate2006 of 1.6M); and
  • file system performance 6.0 times that of the previous system, consistent with NCI's requirement that the system be well balanced.
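
The quoted peak figure can be checked directly against the core count and clock speed listed above. Below is a minimal Python sketch; the figure of 8 double-precision floating-point operations per core per cycle (via AVX on Sandy Bridge) is our assumption, not stated above:

    # Sanity check: Raijin's theoretical peak from the figures quoted above.
    # Assumption (not from the source): each Sandy Bridge core performs
    # 8 double-precision floating-point operations per cycle using AVX.
    CORES = 57_472           # cores in the compute nodes
    CLOCK_HZ = 2.6e9         # 2.6 GHz
    FLOPS_PER_CYCLE = 8      # assumed DP flops per core per cycle

    peak_pflops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE / 1e15
    print(f"Theoretical peak: {peak_pflops:.2f} PFlops")  # ~1.20 PFlops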

The unit of shared-memory parallelism is the node, which comprises dual 8-core processors, i.e., 16 cores. The memory specification across the nodes is heterogeneous in order to provide a configuration capable of accommodating the requirements of most applications, while also providing for large-memory jobs; a rough tally of the aggregate memory follows the list below. Accordingly:

  • two-thirds of the nodes have 32 GBytes, i.e., 2 GBytes/core;
  • almost one-third of the nodes have 64 GBytes, i.e., 4 GBytes/core; while
  • two per cent of the nodes have 128 GBytes, i.e., 8 GBytes/core.
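
These fractions, together with the node count implied by the core total, reproduce the aggregate memory figure given earlier. Here is a minimal Python sketch, assuming the three fractions sum to one, with "almost one-third" read as the remainder (our reading, not stated above):

    # Rough tally of aggregate memory from the node mix quoted above.
    # Assumption (ours): the three fractions sum to one, with
    # "almost one-third" read as the remainder after 2/3 and 2%.
    CORES = 57_472
    CORES_PER_NODE = 16                    # dual 8-core processors per node
    nodes = CORES // CORES_PER_NODE        # 3,592 nodes

    mix = {                                # GBytes per node -> fraction of nodes
        32: 2 / 3,                         # "two-thirds of the nodes"
        64: 1 - 2 / 3 - 0.02,              # "almost one-third"
        128: 0.02,                         # "two per cent"
    }

    total_tb = sum(gb * frac for gb, frac in mix.items()) * nodes / 1000
    print(f"{nodes} nodes, ~{total_tb:.0f} TBytes of main memory")  # ~158 TBytes

This lands at roughly 158 TBytes, consistent with the "approximately 160 TBytes of main memory" quoted in the list of Raijin's components.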

Attribution Policy

If you use resources on Orange or Raijin via the Intersect partner share, we ask that you acknowledge us. The proposed text is:

Computational (and/or storage) resources used in this work were provided by Intersect Australia Ltd. 

The full policy can be found here: http://www.intersect.org.au/attribution-policy