HOWTO check up on jobs

Examining the queue

You can look at the queue with the qstat command, which lists jobs ordered by JobID.

[username@launch ~]$ qstat
Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
32.pbsserver      JobName          username          351:04:3 R long
33.pbsserver      JobName          username          351:06:1 R day
34.pbsserver      JobName          username          390:30:2 R week
40.pbsserver      JobName          username          496:38:2 R month
46.pbsserver      JobName          username          506:13:5 R long
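
If you only want to see your own jobs, qstat also accepts a -u flag (this assumes the Torque/PBS-style qstat installed on this cluster; its per-user listing uses a slightly different column layout than the default):

[username@launch ~]$ qstat -u username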

Checking a specific job

If you want to see the full details of a specific job, use qstat -f <JobID>:

[username@launch ~]$ qstat -f 40
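
The full listing from qstat -f runs to dozens of lines; since it is plain text, you can filter it with standard tools. For example, to pull out just the state and resource-usage attributes (these attribute names follow common Torque/PBS output and may differ on your installation):

[username@launch ~]$ qstat -f 40 | grep -E 'job_state|resources_used'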

If you want to look at the output of your job while it's still running, use the qpeek command.

[username@launch ~]$ qpeek 40
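
qpeek prints the job's captured output to your terminal, so it combines with ordinary shell tools; for instance, to see only the most recent lines (a sketch, assuming your site's qpeek writes to stdout as usual):

[username@launch ~]$ qpeek 40 | tail -n 20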

Deleting a job you no longer want

If you want to delete a job (whether it's already running or not), use the qdel command:

[username@launch ~]$ qdel 41

There's no output on a successful job deletion. Keep in mind that when running jobs are killed, files in scratch space will not sync back to your home directory. Orphaned scratch space will be moved to /scratch2.
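
If you need to remove several jobs at once and your scheduler provides the qselect command (standard in Torque/PBS installations), you can feed its output to qdel. It's worth listing the selection first to confirm it matches what you expect:

[username@launch ~]$ qselect -u username
[username@launch ~]$ qdel $(qselect -u username)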

Overview of cluster usage

pestat gives a nice overview of which nodes are busy with which jobs for which users.

[username@launch ~]$ pestat
Queues:  short day week month long
Node            state    cpu        memory   jobids/users
----                   tot used    tot used
comp001.hpc     busy     8    8    15G  51%  34
comp002.hpc     free    64   60   126G  12%  35 36 37 38
[username@launch ~]$ pestat -u username
Queues:  short day week month long
Node            state    cpu        memory   jobids/users
----                   tot used    tot used
comp001.hpc     busy     8    8    15G  51%  34
comp002.hpc     free    64   60   126G  12%  38
[username@launch ~]$ pestat -a -u username
Queues:  short day week month long
Node            state    cpu        memory   jobids/users
----                   tot used    tot used
comp001.hpc     busy     8    8    15G  51%  34 username
comp002.hpc     free    64   60   126G  12%  38 username
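
pestat only shows a snapshot. To keep an eye on the cluster while your jobs run, you can refresh it periodically with the standard watch utility (assuming watch is available on the launch node):

[username@launch ~]$ watch -n 60 pestat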