Blog from May, 2012

A new login server has been set up at login.uoa.nesi.org.nz for use as a submission and build node for the NeSI Pan cluster.

As of Monday 28 May, login1.uoa.nesi.org.nz will no longer be available as a submission and build node.

Gricli will also be made available on Monday 28 May on the new login server.
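
Once the new server is live, you can connect with SSH as usual (yourusername is a placeholder; substitute your own account name):

ssh yourusername@login.uoa.nesi.org.nz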

The most recent version of the Amber molecular dynamics suite, Amber 12, is now available on Pan.

Two versions are available. The conventional (CPU) version has been built with the PGI compilers; the GPU version, built with the Intel compilers, uses cutting-edge CUDA technology for very fast molecular dynamics.

For a usage guide, see Amber 12 usage.
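
Exact module and input file names will depend on the installation, but a typical Amber 12 run might look something like the following sketch (amber/12 is a hypothetical module name; check "module avail" on Pan, and see the usage guide above for the real details):

module load amber/12                                  # hypothetical module name
pmemd -O -i md.in -p prmtop -c inpcrd -o md.out       # CPU version
pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out  # GPU (CUDA) version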

The latest versions of the PGI C, C++ and Fortran compilers are now available on the NeSI cluster.
The compilers are installed in the /share/apps/pgi directory and include both 32- and 64-bit Linux versions.

To prepare the environment for the 64-bit Linux version:

module load pgi/12.4/64bit

Similarly, for the 32-bit Linux version:

module load pgi/12.4/32bit
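
Once a module is loaded, the usual PGI compiler drivers are on your PATH. For example, to build a simple C or Fortran program (the source file names are placeholders):

pgcc -O2 -o hello hello.c
pgfortran -O2 -o hello hello.f90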

The Intel Composer XE 2011 compilers are also now available on the NeSI cluster.
The compiler pack includes the Intel C, C++ and Fortran compilers and is installed in the directory /share/apps/intel/2011.
Before running the Intel compilers, we strongly recommend configuring your shell environment by loading one of the following modules:
For the 64-bit compiler:

module load intel/2011-64bit

For the 32-bit compiler:

module load intel/2011-32bit
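
With a module loaded, the Intel compiler drivers become available. For example (source file names are placeholders):

icc -O2 -o hello hello.c       # C
icpc -O2 -o hello hello.cpp    # C++
ifort -O2 -o hello hello.f90   # Fortran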

We're pleased to announce that we've installed a new version of the OpenMPI parallel computing framework. This new version (1.5.5) will be available alongside existing versions.

This new version includes explicit support for LoadLeveler (the scheduler used on Pan), as well as MPI 2 with spawning functionality.

At present, we have not made 1.5.5 the system default; we've left the default at 1.4.3 for the sake of compatibility with previously built applications.

To switch to OpenMPI 1.5.5, complete the following steps:

  1. Run the program "mpi-selector-menu" at your command prompt and, when prompted, select the openmpi_gcc-1.5.5 option at the "user" level (as opposed to "system").
  2. Log out and log back in.

Please note that applications built against other versions of MPI will need to be recompiled against OpenMPI 1.5.5; otherwise they will not run correctly under it.
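
For example, once openmpi_gcc-1.5.5 is selected, rebuilding an application and confirming the active MPI version might look like this (the source file name is a placeholder):

mpicc -O2 -o my_mpi_app my_mpi_app.c
mpirun --version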

Also, note the following changes in LoadLeveler job files, but only if you are using OpenMPI 1.5.5:

  • The job_type directive should be set to "MPICH" instead of "parallel"
  • Your executable must still be run through mpirun, but mpirun will no longer require the "-hostfile ${LOADL_HOSTFILE}" command-line option (see the sketch below)
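
A minimal LoadLeveler job file for OpenMPI 1.5.5 might then look like the following sketch (the resource settings and file names are illustrative, not a site-tested template):

#!/bin/bash
# @ job_name = my_mpi_job
# @ job_type = MPICH
# @ node = 2
# @ tasks_per_node = 8
# @ output = $(job_name).$(jobid).out
# @ error = $(job_name).$(jobid).err
# @ queue

mpirun ./my_mpi_app
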
VASP installed

The computational chemistry software VASP is now installed and available for licensed users.

To access VASP, load the module "VASP/4.6".
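
That is, in your shell or job script:

module load VASP/4.6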

This page provides some brief instructions on how to use VASP.

At 10:00 a.m. on Monday 28 May, we plan on shutting down the NeSI cluster (Pan) for scheduled maintenance. The shutdown period is expected to last for up to 24 hours. Normal operations should resume by 10:00 a.m. on Tuesday 29 May.

During this time, no cluster services will be available, including login and access to saved data. Any jobs that are still running at the start of the maintenance period will be terminated.

System upgrade

We will be taking the opportunity to upgrade the operating system on Pan, from Red Hat Enterprise Linux (RHEL) 5 to RHEL 6. This upgrade will provide a more stable and secure computing environment, with better support for new research applications. It may, however, affect some programs that were built under RHEL 5. If your program starts behaving strangely, please contact us for assistance.

Data storage

At the same time, we will introduce data storage quotas. Most users will have an initial disk quota of 30 GB. If we have already agreed to provide you, or your group or institution, with additional storage, you will receive a higher quota at the outset. Extra disk space is usually available on request.
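
The exact quota-reporting command will depend on how the new quotas are implemented, but you can always check how much space your files currently occupy with a standard tool such as:

du -sh $HOME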

CeR compute update
Centre for eResearch compute for 2012:
  • NeSI Pan cluster (since 24 January): 400 000+ core hours
  • Auckland BeSTGRID cluster: 600 000+ core hours
  • 'Cadaver': 100 000+ core hours (soon to be shut down)

We're pleased to announce that NeSI is now open for merit-funded and proposal development applications.

The Centre for eResearch is one of the three principal supercomputing sites for NeSI merit projects; the others are NIWA's High Performance Computing facility and BlueFern. Nearly half of our total computing capacity is preferentially available to merit-funded researchers. We also offer short-term allocations to researchers who wish to test or develop high-performance computing applications in preparation for full-scale research projects.

If you're a researcher who has received New Zealand government funding for a peer-reviewed scientific research proposal, and high-performance computing is part of the funded research, NeSI would like to hear from you! Even if not, you may be eligible for access under a different allocation class. Please see the NeSI research guidelines to find out more.

More information about NeSI, including the facilities and how to apply for supercomputing time, is online at http://www.nesi.org.nz.