Many researchers use computers, but desktop machines only go so far. If your overnight compute jobs run into the next day, if your research waits for a weekend to run, or if your computer is limiting the progress of your research, then high performance computing (HPC) is the solution.
High performance computing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines including physics, biology, chemistry, geosciences, climate sciences, engineering and many others.
Please email firstname.lastname@example.org for questions, advice and support.
Intersect’s HPC facilities
Intersect manages the NSW state facility, Orange, hosted at IC2 in Sydney. In addition to this, Intersect has a partner share in the peak facilities at the National Computational Infrastructure, based at the Australian National University.
Intersect's 'Orange' was commissioned in March 2013. The system features 103 cluster nodes with 1,660 cores, powered by the Intel® Xeon® E5-2600 processor series. It includes 200TB of local scratch disk space and 101TB of usable shared storage (56TB in a Panasas® PAS-12 global parallel file system and 45TB in an SGI NAS storage server), and delivers peak performance of more than 30 TFlops. Nodes are connected by QDR InfiniBand interconnects.
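Clusters of this kind are typically accessed through a batch scheduler rather than interactively. As an illustrative sketch only (it assumes a PBS-style scheduler; the queue names, resource syntax and the executable `my_program` are hypothetical, and the actual settings on Orange may differ), a job submission script might look like:

```shell
#!/bin/bash
# Hypothetical PBS-style job script -- resource names are illustrative only.
#PBS -N example_job           # job name shown in the queue
#PBS -l nodes=1:ppn=16        # request one node with 16 cores
#PBS -l walltime=02:00:00     # two-hour wall-clock limit
#PBS -l mem=32gb              # memory request for the job

cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
./my_program input.dat        # hypothetical executable and input file
```

A script like this would typically be submitted with `qsub` and monitored with `qstat`.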
Raijin at NCI
As a partner in the National Computational Infrastructure, based at the Australian National University, Intersect shares access to NCI's peak system 'Raijin', a Fujitsu PRIMERGY cluster based on 2.6 GHz Intel Sandy Bridge 8-core processors. It comprises 57,472 cores across its compute nodes, approximately 160 TB of main memory, and approximately 10 PB of usable fast file system storage.
NeCTAR Research Cloud
Over 4,500 local and 32,000 distributed computing cores running x86 OpenStack hypervisors tuned to the needs of research.
Researchers can create multiple virtual machines with up to 16 virtual CPUs each, choosing from Linux operating system flavours including CentOS, Ubuntu, Fedora and Scientific Linux, and can directly access eight national network nodes for additional scale or data proximity.
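As a sketch of how such a virtual machine is launched (the image, flavour and key names below are hypothetical placeholders, not actual NeCTAR identifiers), the standard OpenStack command-line client is used like this:

```shell
# Launch a VM -- image, flavour and key names are illustrative only;
# the actual names available on the NeCTAR Research Cloud will differ.
openstack server create \
    --image "ubuntu-20.04" \
    --flavor "m1.xlarge" \
    --key-name mykey \
    my-research-vm

# List running instances to confirm the new VM has booted
openstack server list
```

The flavour chosen determines the number of virtual CPUs and amount of memory allocated to the instance.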
There’s no such thing as an ‘average’ researcher when it comes to intensity, appetite, flavour and volume of big computing, so no one Time zone fits all. A physicist may need a large cluster of independent nodes with high I/O, a computational linguist may need a large shared memory space, and an astronomer may need a massively parallel compute array. Collaboration tools may be the mainstream driver for a social scientist, while an archaeologist needs geocoding. Intersect people are flexible and ready to help solve individual, team, and organisational compute challenges.
In most Time zones, demand exceeds supply because allocations are subsidised and awarded on merit. Larger proposals for significant quantities of Time are requested through an annual merit-based formal process. However, new Time travellers are actively sought, especially researchers from smaller institutions, non-traditional HPC disciplines, and research students. Intersect routinely accepts small-scale "experimental" proposals at any time.
Intersect runs a merit-based Resource Allocation Round every calendar year, in which researchers from member institutions apply for large allocations on both Orange and Raijin. These applications are reviewed for comparative research merit by the independent Resource Allocation Committee as well as Intersect HPC experts. However, you can apply for small amounts of compute at any time. Learn how to book your Time at intersect.org.au/time/merit
If you use merit-allocated resources on Orange or Raijin via the Intersect partner share, we request that you acknowledge us. The proposed text is:
Computational (and/or storage) resources used in this work were provided by Intersect Australia Ltd.
The full attribution policy can be found here.