The University of Stellenbosch hosts multiple HPCs (High Performance Computing clusters). This wiki provides information on the two largest systems, HPC1 and HPC2.
  
= HPC1 (also known as Rhasatsha) =
  
HPC1 is available to all users registered on campus. In essence, if you have a network login, you can use this HPC.
  
All users are granted 1000 CPU hours to test the system and determine its usefulness. Once the 1000 hour quota is depleted, users are required to pay a registration fee to gain unlimited access.
  
Free users are granted 1000 CPU hours and a 10GB disk quota. Paid users are granted unlimited CPU and a 1TB disk quota.
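
Note that the quota is consumed in CPU hours, i.e. cores reserved multiplied by wall-clock hours used, so parallel jobs draw it down quickly. A quick illustrative calculation (the job sizes are hypothetical):

```shell
# CPU hours = cores reserved x wall-clock hours used.
# A 1-core job running for a full week:
echo $(( 1 * 168 ))     # 168 CPU hours - well within the free 1000
# A 48-core job running for a single day:
echo $(( 48 * 24 ))     # 1152 CPU hours - already past the free quota
```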
  
See [[HOWTO register]] for details on how to register.
  
== General information ==

Please direct enquiries to help@sun.ac.za.

* [[HOWTO register]]
* [[HOWTO login]]
* [[HOWTO submit jobs]]
* [[HOWTO check up on jobs]]
* [[Useful commands]]

* [[Common errors]]

'''rhasatsha''': (the rha is pronounced as the g in the Afrikaans gaan) a clever person/object; highly intelligent; something that acts promptly; a wide-awake person/object who/that is always on the spot; a versatile person/object that can tackle anything successfully.

=== Monitoring tools ===

* [https://hpc1-manager.sun.ac.za/ganglia/ Ganglia cluster monitor] (only available on campus)
* [https://hpc1-manager.sun.ac.za/munin/ Munin health monitor] (only available on campus)
* [https://hpc1-manager.sun.ac.za:8444/ XDMoD usage explorer] (only available on campus)

=== Specifications ===

The HPC currently has the following compute specifications:
* 11x 8-core Intel Xeon E5440 @ 2.83GHz with 16GB RAM
* 17x 48-core AMD Opteron 6172 @ 2.10GHz with 96GB RAM, Infiniband interconnect
* 2x 64-core AMD Opteron 6274 @ 2.20GHz with 128GB RAM, Infiniband interconnect
* 8x 8-core Intel Xeon X5450 @ 3.00GHz with 32GB RAM
* 2x 8-core Intel Xeon X5450 @ 3.00GHz with 24GB RAM
* 1x 64-core AMD Opteron 6366 HE @ 1.8GHz with 128GB RAM, Infiniband interconnect
* 2x 48-core Intel Xeon E5-2670 v3 @ 2.30GHz with 512GB RAM, Infiniband interconnect

The nodes listed above provide 1272 cores; since this list was published, the total core count has grown to 2344.

=== Job priorities ===

The HPC currently has 5 queues into which jobs are automatically divided based on the walltime requested.
* '''short''' - queue for jobs running up to 2 hours (#PBS -l walltime=2:00:00)
* '''day''' - queue for jobs running up to 24 hours (#PBS -l walltime=24:00:00)
* '''week''' - queue for jobs running up to 7 days (#PBS -l walltime=168:00:00)
* '''month''' - queue for jobs running up to 31 days (#PBS -l walltime=744:00:00)
* '''long''' - queue for jobs running longer than 31 days

At any given time each queue is allowed a maximum number of cores, to ensure that quick jobs aren't unnecessarily blocked by long-running jobs.
* '''short''' - unlimited cores, highest priority
* '''day''' - unlimited cores
* '''week''' - maximum of 1000 cores
* '''month''' - maximum of 600 cores, maximum of 20 jobs per user, maximum of 400 cores per user
* '''long''' - maximum of 500 cores, maximum of 20 jobs per user, maximum of 300 cores per user

It is imperative that you accurately estimate your job's running time. Estimate too high and you may land in an unfavourable queue; estimate too low and the job will be killed when its walltime is reached. Once a job is running, only the administrator can increase its walltime. All jobs are required to specify a walltime.

Furthermore, all interactive jobs are served by the '''test''' queue and are limited to a maximum of 8 cores and a walltime of 24 hours.

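For concreteness, a minimal batch script that requests a 24-hour walltime (and is therefore routed to the '''day''' queue) could look like the sketch below; the job name, core count and program are placeholders, not site requirements:

```shell
#!/bin/bash
#PBS -N example_job           # job name (placeholder)
#PBS -l walltime=24:00:00     # 24 hours -> automatically routed to the day queue
#PBS -l nodes=1:ppn=8         # 1 node, 8 cores (adjust to your job)

# The commands below run on the compute node once the scheduler starts the job.
cd "${PBS_O_WORKDIR:-.}"      # PBS sets this to the directory you submitted from
echo "Job started"
# ./my_program                # replace with your actual workload
```

The script would then be submitted with <code>qsub job.sh</code>; see [[HOWTO submit jobs]] for site-specific details.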
== Acceptable usage ==

This system may only be used for '''bona fide academic work'''. Any attempt to use it for consultancy work or any other commercial purpose may lead to the user being permanently banned from the system.

== Citations ==

We require an acknowledgement in any thesis, paper, publication or presentation that references results computed on this system. In addition, we would like to be able to reference these published works.

Suggested form of acknowledgement:
<pre>
Computations were performed using the University of Stellenbosch's HPC1 (Rhasatsha): http://www.sun.ac.za/hpc
</pre>

= HPC2 =

HPC2 is only available to registered users.

See [[HOWTO register]] for details on how to register.

== General information ==

Feel free to contact [mailto:gerhardv@sun.ac.za Gerhard Van Wageningen] (x4554) with any queries regarding the cluster.

* [[HOWTO register]]
* [[HOWTO login]]
* [[HOWTO submit jobs]]
* [[HOWTO check up on jobs]]
* [[Useful commands]]

* [[Common errors]]

=== Monitoring tools ===

Not currently available.

=== Specifications ===

The HPC currently has the following compute specifications:
* 1x 80-core Intel Xeon E7-4850 @ 2.00GHz with 1024GB RAM, Infiniband interconnect
* 3x 48-core Intel Xeon E5-2650 v4 @ 2.20GHz with 512GB RAM, Infiniband interconnect
* 2x 64-core AMD Opteron 6274 @ 2.20GHz with 128GB RAM, Infiniband interconnect
* 2x 24-core Intel Xeon X5650 @ 2.67GHz with 48GB RAM, Infiniband interconnect
* 3x 16-core Intel Xeon E5530 @ 2.40GHz with 24GB RAM
* 1x Dell R910 80-core Intel Xeon E7-4850 @ 2.0GHz with 1024GB RAM, Infiniband interconnect
* 4x Dell R730 48-core Intel Xeon E5-2650 @ 2.2GHz with 256GB, 504GB, 504GB and 756GB RAM, Infiniband interconnect
* 1x Dell R740 72-core Intel Xeon 6254 @ 3.1GHz with 1.5TB RAM, Infiniband interconnect
* 1x Dell R640 72-core Intel Xeon 6254 @ 3.1GHz with 1.5TB RAM, Infiniband interconnect

The total is 672 available cores.

=== Job priorities ===

The HPC currently has 5 general CPU queues into which jobs are automatically divided based on the walltime requested.
* '''short''' - queue for jobs running up to 2 hours (#PBS -l walltime=2:00:00)
* '''day''' - queue for jobs running up to 24 hours (#PBS -l walltime=24:00:00)
* '''week''' - queue for jobs running up to 7 days (#PBS -l walltime=168:00:00)
* '''month''' - queue for jobs running up to 31 days (#PBS -l walltime=744:00:00)
* '''long''' - queue for jobs running longer than 31 days

At any given time each queue is allowed a maximum number of cores, to ensure that quick jobs aren't unnecessarily blocked by long-running jobs.
* '''short''' - unlimited, highest priority
* '''day''' - unlimited
* '''week''' - 450 cores, burstable to 500 if the cluster is idle
* '''month''' - 200 cores (burstable to 300), maximum of 3 jobs per user (burstable to 5), maximum of 100 cores per user (burstable to 200)
* '''long''' - 100 cores (burstable to 200), maximum of 3 jobs per user (burstable to 5), maximum of 50 cores per user (burstable to 100)

It is imperative that you accurately estimate your job's running time. Estimate too high and you may land in an unfavourable queue; estimate too low and the job will be killed when its walltime is reached. Once a job is running, only the administrator can increase its walltime.

Any job that does not specify a walltime will be assigned a default of '''5 minutes'''.

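Since walltime is written as HH:MM:SS, the longer limits are easiest to derive from a day count. A hypothetical helper function (for illustration only; not a utility provided on the cluster) makes the conversion explicit:

```shell
# Convert whole days into the HH:MM:SS walltime string that PBS expects.
# Hypothetical helper for illustration; not provided on the cluster.
days_to_walltime() {
  printf '%d:00:00\n' $(( $1 * 24 ))
}

days_to_walltime 7    # 168:00:00 -> the week queue limit
days_to_walltime 31   # 744:00:00 -> the month queue limit
# Example submission (commented out): qsub -l walltime=$(days_to_walltime 7) job.sh
```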
== Citations ==

We require an acknowledgement in any thesis, paper, publication or presentation that references results computed on this system. In addition, we would like to be able to reference these published works.

Suggested form of acknowledgement:
<pre>
Computations were performed using the University of Stellenbosch's HPC2: http://www.sun.ac.za/hpc
</pre>

Latest revision as of 15:49, 10 May 2023
