Revision as of 11:04, 18 August 2016

HPC1 (also known as Rhasatsha)

rhasatsha: (the "rha" is pronounced like the g in the Afrikaans word gaan) a clever person or object; highly intelligent; something that acts promptly; a wide-awake person or object who/that is always on the spot; a versatile person or object that can tackle anything successfully.


General information

Feel free to contact Charl Möller (x9490) with any queries regarding the cluster.

Monitoring tools

Specifications

The HPC currently has the following compute specifications:

  • 19x 8-core Intel Xeon E5440 @ 2.83GHz with 16GB RAM
  • 1x 24-core Intel Xeon X5650 @ 2.67GHz with 24GB RAM
  • 1x 16-core Intel Xeon X5550 @ 2.67GHz with 48GB RAM, dual NVIDIA GT200GL
  • 17x 48-core AMD Opteron 6172 @ 2.10GHz with 96GB RAM, Infiniband interconnect
  • 2x 64-core AMD Opteron 6274 @ 2.20GHz with 128GB RAM, Infiniband interconnect
  • 8x 8-core Intel Xeon X5450 @ 3.00GHz with 32GB RAM
  • 2x 8-core Intel Xeon X5450 @ 3.00GHz with 24GB RAM
  • 1x 64-core AMD Opteron 6366 HE @ 1.8GHz with 128GB RAM, Infiniband interconnect
  • 2x 48-core Intel Xeon E5-2670 v3 @ 2.30GHz with 512GB RAM, Infiniband interconnect

The total is 1328 available cores.

Job priorities

The HPC currently has five queues into which jobs are automatically sorted based on the walltime requested.

  • short - queue for jobs running up to 2 hours (#PBS -l walltime=2:00:00)
  • day - queue for jobs running up to 24 hours (#PBS -l walltime=24:00:00)
  • week - queue for jobs running up to 7 days (#PBS -l walltime=168:00:00)
  • month - queue for jobs running up to 31 days (#PBS -l walltime=744:00:00)
  • long - queue for jobs running longer than 31 days

At any given time, each queue is limited to a maximum number of cores to ensure that quick jobs aren't unnecessarily blocked by long-running jobs.

  • short - unlimited, highest priority
  • day - unlimited
  • week - 800 cores, burstable to 1000 if cluster is idle
  • month - 500 cores (burstable to 600), maximum of 10 jobs per user (burstable to 20), maximum of 200 cores per user (burstable to 300)
  • long - 400 cores (burstable to 600), maximum of 10 jobs per user (burstable to 20), maximum of 200 cores per user (burstable to 300)

It is imperative that you estimate your job's running time accurately. Estimate too high, and you may find yourself in an unfavourable queue; estimate too low, and the job will be killed by the system when the walltime is reached. Once a job is running, only the administrator can increase its walltime.

Any job that does not specify a walltime will be assigned a default of 5 minutes.
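Since an unspecified walltime defaults to only 5 minutes, every job script should request one explicitly. As a minimal illustrative sketch (the job name, node/core counts, and program name below are placeholder examples, not site defaults), a PBS submission script targeting the short queue might look like:

```bash
#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -l walltime=2:00:00     # request 2 hours: lands in the "short" queue
#PBS -l nodes=1:ppn=8        # 1 node, 8 cores (illustrative values)

cd "$PBS_O_WORKDIR"          # run from the directory qsub was invoked in
./my_program                 # replace with your actual executable
```

Submitted with qsub, the scheduler reads the #PBS comment lines, routes the job into a queue according to the requested walltime, and kills it if it runs past that limit.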

Acceptable usage


This system may only be used for bona fide academic work. Any effort to use it for consultancy work or any other commercial purpose may lead to the permanent banning of the user from the system.

Citations

We require an acknowledgement in any thesis, paper, publication or presentation that references results computed on this system. In addition, we would like to be able to reference these published works.

Suggested form of acknowledgement:

Computations were performed using the University of Stellenbosch's Rhasatsha HPC: http://www.sun.ac.za/hpc

CAF HPC1

General information

Feel free to contact Charl Möller (x9490) with any queries regarding the cluster.

Monitoring tools

Specifications

The HPC currently has the following compute specifications:

  • 1x 80-core Intel Xeon E7-4850 @ 2.00GHz with 1024GB RAM, Infiniband interconnect
  • 3x 48-core Intel Xeon E5-2650 v4 @ 2.20GHz with 512GB RAM, Infiniband interconnect
  • 2x 64-core AMD Opteron 6274 @ 2.20GHz with 128GB RAM, Infiniband interconnect
  • 2x 24-core Intel Xeon X5650 @ 2.67GHz with 48GB RAM, Infiniband interconnect
  • 3x 16-core Intel Xeon E5530 @ 2.40GHz with 24GB RAM

The total is 448 available cores.

Job priorities

The HPC currently has five queues into which jobs are automatically sorted based on the walltime requested.

  • short - queue for jobs running up to 2 hours (#PBS -l walltime=2:00:00)
  • day - queue for jobs running up to 24 hours (#PBS -l walltime=24:00:00)
  • week - queue for jobs running up to 7 days (#PBS -l walltime=168:00:00)
  • month - queue for jobs running up to 31 days (#PBS -l walltime=744:00:00)
  • long - queue for jobs running longer than 31 days

At any given time, each queue is limited to a maximum number of cores to ensure that quick jobs aren't unnecessarily blocked by long-running jobs.

  • short - unlimited, highest priority
  • day - unlimited
  • week - 200 cores, burstable to 300 if cluster is idle
  • month - 100 cores (burstable to 200), maximum of 3 jobs per user (burstable to 5), maximum of 50 cores per user (burstable to 100)
  • long - 100 cores (burstable to 200), maximum of 3 jobs per user (burstable to 5), maximum of 50 cores per user (burstable to 100)

It is imperative that you estimate your job's running time accurately. Estimate too high, and you may find yourself in an unfavourable queue; estimate too low, and the job will be killed by the system when the walltime is reached. Once a job is running, only the administrator can increase its walltime.

Any job that does not specify a walltime will be assigned a default of 5 minutes.