New high memory servers for Galaxy Australia


Researchers’ capacity to analyse their life science data was boosted this week when high memory compute servers for Galaxy Australia came online at the University of Melbourne.

Almost 500 researchers from the University of Melbourne and the surrounding precinct already use the Galaxy Australia platform for data integration and analysis. Some of their work regularly challenges the platform by using tools with particularly high memory demands, such as Mothur, Trinity, Canu and BLAST, whose largest jobs Galaxy Australia has until now been unable to support.

The new high memory virtual machines will let researchers push new limits and open up access to powerful tools, including those for machine learning, cheminformatic analysis and long-read sequencing. The large-capacity, high-performance local storage the new servers provide is itself a new capability that will help accelerate specific types of workloads. Researchers who are currently running particularly high memory tools come from a wide variety of institutions, including the Florey Institute of Neuroscience and Mental Health, the University of Tasmania and the Royal Botanic Gardens, Victoria. We look forward to sharing examples of specific research projects benefiting from the servers once they are in use.

Galaxy Australia consists of a single head node site and multiple satellite ‘Pulsar nodes’. The head node contains all the central infrastructure required to run the Galaxy platform, while the Pulsar nodes provide distributed compute to parallelise job requests. User jobs are executed at the head node or a Pulsar node depending on the combination of dataset size and tool.
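To illustrate how this kind of routing can be expressed, the sketch below shows a Galaxy dynamic job destination rule: a Python function that inspects the tool and the job’s input datasets and returns a destination ID. It is a hypothetical example only; the function name, the destination IDs (‘pulsar_highmem’ and ‘default’), the tool list and the size threshold are assumptions rather than Galaxy Australia’s actual configuration, and the exact model attributes available to a rule vary by Galaxy release.

    # Hypothetical dynamic job destination rule for Galaxy (illustrative only).
    HIGH_MEM_TOOLS = {"mothur", "trinity", "canu", "blast"}  # assumed short tool IDs
    HIGH_MEM_THRESHOLD_BYTES = 1024 ** 4                     # assumed 1 TB input cut-off

    def route_high_memory(job, tool):
        """Return the destination ID for a job, sending memory-hungry work to Pulsar."""
        # Reduce a full ToolShed ID (.../owner/repo/tool/version) to its short tool name.
        short_id = tool.id.split("/")[-2] if "/" in tool.id else tool.id

        # Total size of the job's input datasets, in bytes.
        input_bytes = sum(
            da.dataset.get_size() for da in job.input_datasets if da.dataset
        )

        if short_id.lower() in HIGH_MEM_TOOLS or input_bytes > HIGH_MEM_THRESHOLD_BYTES:
            return "pulsar_highmem"  # assumed ID of a high memory Pulsar destination
        return "default"             # assumed ID of the head node destination

In a production deployment, a rule like this would be referenced from Galaxy’s job configuration, which maps the returned destination IDs to the actual runners and compute resources.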

As the host of the Australian BioCommons hub and a key partner in Galaxy Australia, Melbourne Bioinformatics was perfectly placed to coordinate this significant capital investment from the Australian Research Data Commons (ARDC) into the Melbourne node of the Nectar Research Cloud.

The Pulsar nodes operated by the University of Melbourne and QCIF have both received support and investment from the ARDC and Australian BioCommons to procure new servers with 2 or 4 terabytes of memory. Two 128-core servers, one with 2 TB and one with 4 TB of RAM, have been installed on the University of Melbourne node of the ARDC Nectar Research Cloud with the assistance of the University’s Research Computing Services team. The high memory servers for the QCIF Galaxy Pulsar node will be commissioned in the coming months.