FlashLite FAQs
What is FlashLite?
What is FlashLite not?
When will FlashLite be available?
How do I apply for, and obtain, access to FlashLite?
How do I get training in how to use FlashLite?
Where will FlashLite be housed?
What is FlashLite for?
Who paid for FlashLite?
Who can use FlashLite?
What are the usage policies?
What’s been bought?
What is a "supernode"?
FlashLite in the media
What is FlashLite?
FlashLite is a high performance computer (HPC) designed explicitly to support data intensive science and innovation by Australian researchers.
FlashLite is optimised for data intensive computation and has:
- 1632 cores
- 34.8 TB of RAM
- 326.4 TB of SSD storage
- 65.28 Tflop/s (Rpeak).
More detail about the internal make-up of FlashLite appears below.
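As a rough sanity check, these headline figures follow directly from the per-node specifications listed under "What's been bought?". The short Python sketch below reproduces them; note that the figure of 16 double-precision FLOPs per core per cycle (AVX2 with FMA on Haswell) is our assumption, not something stated in the FlashLite proposal.

    # Sanity check of FlashLite's headline figures, derived from the
    # per-node specifications given under "What's been bought?".
    nodes = 68                   # compute nodes
    cores_per_node = 2 * 12      # 2 x 12-core Xeon E5-2680v3 per node
    ram_per_node_gb = 512        # 16 x 32GB DDR4 LRDIMMs per node
    ssd_per_node_tb = 3 * 1.6    # 3 x 1.6TB Intel P3600 NVMe drives per node
    ghz = 2.5                    # base clock speed
    flops_per_cycle = 16         # assumed: AVX2 + FMA, double precision, Haswell

    print(nodes * cores_per_node)            # 1632 cores
    print(nodes * ram_per_node_gb / 1000)    # ~34.8 TB of RAM
    print(nodes * ssd_per_node_tb)           # ~326.4 TB of SSD
    print(nodes * cores_per_node * ghz * flops_per_cycle / 1000)  # 65.28 Tflop/s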
What is FlashLite not?
FlashLite is not a general purpose HPC system, nor a replacement for the Barrine HPC.
When will FlashLite be available?
As at 18 March 2015, the contract for its purchase has been finalised and the purchase order is being raised.
We anticipate a multi-month delivery period due to the specialised nature of some of the components.
A further period of acceptance testing and configuration will also be necessary.
Notwithstanding any major delays, we expect the system to be in place and operating by mid-2015, with users able to log in from August/September 2015.
How do I apply for, and obtain, access to FlashLite?
See the FlashLite webpage section 'Getting a FlashLite account'.
How do I get training in how to use FlashLite?
Regular new-user training sessions will be conducted once FlashLite is operational (from August/September 2015). These will be promoted on the RCC website.
Where will FlashLite be housed?
FlashLite will be physically located in the Dell Data Centre at Springfield, Brisbane. This places it close to major data collections held in RDSI-funded storage, with networks and data mover tools to move large datasets into, and out of, the data centre.
What is FlashLite for?
FlashLite will support applications that need very high performance secondary memory as well as large amounts of primary (main) memory, and will optimise data movement within the machine.
Data intensive applications are well served by neither traditional supercomputers nor modern cloud-based data centres. Conventional supercomputers maximise Floating Point Operations per Second (FLOPS) and inter-processor communication rates through high bandwidth, low latency networks.
Conversely, modern cloud systems minimise the cost of ownership through reliance on virtual machines and shared storage, and thus utilise relatively slow processors and networks and do not support large-scale parallel processing.
The proposal for FlashLite identified a number of computational themes:
- Theme 1: Applications that Directly Manipulate Large Amounts of Data
  - Large Memory Database Systems
  - Machine Learning and Classification
- Theme 2: Applications that Integrate Observation Data and Computation
  - Astrophysics
  - Cardiac Research
  - Coastal Management
  - Advanced Materials
  - Climate Change
  - LIDAR processing
- Theme 3: Applications that Require Large Main Memories
  - Genomics
- Theme 4: Applications with Significant Temporary Storage Requirements
  - Computational Chemistry.
Who paid for FlashLite?
FlashLite has been funded by the Australian Research Council (LIEF Project #LE140100061) in conjunction with the following stakeholders:
- CSIRO
- Griffith University
- Monash University
- Queensland Cyber Infrastructure Foundation
- Queensland University of Technology
- The University of Queensland
- The University of Technology, Sydney.
Who can use FlashLite?
Researchers from the stakeholder organisations will need to make a case that their work requires a machine with FlashLite's capabilities.
A portion of FlashLite's capacity will be available to researchers from outside of the stakeholder institutions via the National Computational Infrastructure's (NCI) National Computational Merit Allocation Scheme.
What are the usage policies?
The usage policies for FlashLite are still being developed. Watch this space.
What’s been bought?
FlashLite is a multi-node cluster, purchased from XENON Systems, comprising the following sub-systems:
- 68 x compute nodes
- 2 x login nodes and 2 x administration nodes
- dual-rail 56Gb/s Mellanox InfiniBand fabric
  - non-blocking within groups of 24 nodes
  - 2:1 blocking factor between groups of 24 nodes
- ScaleMP vSMP software that aggregates multiple nodes into "supernodes" with larger memory/CPU/disk/IO than the individual nodes
- ROCKS cluster management software
- Torque + Maui batch system (see the example job script after this list)
- 150+ TB of high performance storage connected via NFS over the InfiniBand fabric.
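Once FlashLite is operational, work will be submitted through the Torque + Maui batch system listed above. As a minimal sketch of what a Torque job script might look like, the example below uses only standard #PBS directives; the queue name, the resource requests and the program being run are illustrative assumptions on our part, since FlashLite's actual queue structure and usage policies are still being developed.

    #!/bin/bash
    # Minimal Torque job script sketch. The queue name "flashlite",
    # the resource requests and the program are assumptions only.
    #PBS -N example_job
    #PBS -q flashlite            # hypothetical queue name
    #PBS -l nodes=1:ppn=24       # one whole compute node (2 x 12 cores)
    #PBS -l mem=500gb            # most of a node's 512GB of RAM
    #PBS -l walltime=04:00:00

    cd $PBS_O_WORKDIR            # Torque starts jobs in the home directory
    ./my_program                 # placeholder for the actual application

Such a script would be submitted with qsub and its status queried with qstat.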
Each compute node has the following attributes:
- 2 x Xeon E5-2680v3 2.5GHz 12-core Haswell processors with 30MB Smart Cache
- 16 x 32GB DDR4-2133 ECC LRDIMM modules – total 512GB (256GB per socket)
- 2 x 500GB 2.5" 7.2K RPM HDDs as RAID 1 system disk
- 3 x 1.6TB Intel P3600 2.5" NVMe (SSD) drives for local data storage
- 2 x Mellanox Connect-IB 56Gb/s FDR single-port InfiniBand PCIe3 x8 adapters.
The login nodes are identical to the compute nodes except that they have:
- 0 x 1.6TB Intel P3600 2.5" NVMe (SSD) drives
- 4 x 480GB Intel S3500 SSD drives for local data storage.
What is a "supernode"?
Compute nodes in FlashLite can be flexibly aggregated together into larger "supernodes" using ScaleMP’s vSMP software.
This software aggregates multiple physically separate servers into a single virtual high-end system.
Such a “vSMP supernode” aggregates the CPUs, memory, and I/O capabilities of multiple physical hosts into one virtual machine (VM).
The upper limits to the configurations of our supernodes are:
- maximum of 4 supernodes
- maximum of 16TB aggregate RAM
- maximum of 1056 aggregate cores (88 processors).
For example, one possible configuration of FlashLite might be:
- 1 x 8TB RAM supernode with 384 cores and 76.8TB of SSDs (16 physical nodes)
- 1 x 4TB RAM supernode with 192 cores and 38.4TB of SSDs (8 physical nodes)
- 2 x 2TB RAM supernodes with 96 cores and 19.2TB of SSDs (2x4 physical nodes).
The remaining 36 physical nodes would run outside of vSMP, each with 0.5TB RAM, 24 cores, and 4.8TB of SSDs.
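The arithmetic behind such a configuration is straightforward: each physical node contributes 24 cores, 0.5TB of RAM, and 4.8TB of SSD to whichever supernode it joins. A small Python sketch of that aggregation, using the node counts from the example above:

    # Aggregate resources of a vSMP supernode built from n physical nodes.
    # Per-node figures are the compute node specifications from this FAQ.
    def supernode(n_nodes):
        return {
            "cores": n_nodes * 24,    # 2 x 12-core CPUs per node
            "ram_tb": n_nodes * 0.5,  # 512GB of RAM per node
            "ssd_tb": n_nodes * 4.8,  # 3 x 1.6TB NVMe drives per node
        }

    # The example configuration above: one 16-node, one 8-node and two
    # 4-node supernodes, leaving 68 - 32 = 36 standalone physical nodes.
    for n in (16, 8, 4, 4):
        print(n, supernode(n))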
Given the network topology, a supernode of up to 24 physical compute nodes within a single non-blocking group (576 cores and 12TB of RAM) is optimal. Some applications may, however, be able to make effective use of a larger supernode that spans two non-blocking network groups (subject to the limits outlined above).
FlashLite in the media
Media coverage of the purchase of FlashLite has appeared in The Age and other Fairfax syndicated news sites, as well as in ZDNet, ITWire and the Rust Report.