[gridengine users] complex resource with pe issue

Reuti reuti at staff.uni-marburg.de
Tue Jul 23 17:38:04 UTC 2013


Am 23.07.2013 um 15:16 schrieb Arnau Bria:

> Hi again,
> 
> I've been investigating a little more after reading the doc again:
> http://docs.oracle.com/cd/E19080-01/n1.grid.eng6/817-5677/6ml49n2bf/
> 
> if I use h_fsize instead of my new "disk_space" complex, the
> behavior is different. Does that make any sense? Could the fact that
> h_fsize is a default host resource attribute be the reason for this
> change?
> 
> # qconf -se node-hp0512|grep complex
> complex_values        h_vmem=192G,virtual_free=192G,h_fsize=300G
> 
> The first job:
> $ echo sleep 600 | qsub -q short@node-hp0512 -l h_fsize=200G
> 
> runs perfectly.
> 
> the second one, with the same qsub command, is queued because:
> 
> cannot run in PE "smp" because it only offers 0 slots
> 
> Which is not the expected error:
> 
> it offers only hc:h_fsize=XXXX

But was it a parallel job that you submitted?

The output for a parallel job is (unfortunately) often misleading: the zero slot count is just a consequence of the h_fsize request that cannot be satisfied, not of a real shortage of slots in the PE.
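
If you want to see the unfolded reason, a rough sketch of how I would check it (node name taken from your mail, <jobid> is a placeholder, and it assumes schedd_job_info isn't already switched on in your scheduler configuration):

    # enable detailed per-job scheduling messages
    $ qconf -msconf                    # set: schedd_job_info true

    # the waiting job then shows the per-host reasons in its status output
    $ qstat -j <jobid>                 # look at the "scheduling info:" section

    # remaining value of the per-job consumable on the node from your example
    $ qhost -F h_fsize -h node-hp0512

    # With h_fsize=300G in complex_values and a 200G job already running,
    # only 100G is left on the host. A second 200G request can't be
    # granted there, and for a parallel job the scheduler folds this into
    # "0 slots" in the PE instead of naming h_fsize.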

-- Reuti


> This is really strange: I have "virtual_free" defined and it works as
> expected, giving the correct error when no more memory is available:
> 
> because it offers only hc:virtual_free=45097156608.000000
> 
> even if I submit parallel jobs. But when I try the same with this new
> resource, SGE is not behaving in the same way...
> 
> What am I missing here?
> 
> #name                 shortcut   type        relop requestable consumable default  urgency
> disk_space            disk       MEMORY      <=    YES         JOB        0        0
> h_fsize               h_fsize    MEMORY      <=    YES         JOB        0        0
> virtual_free          vf         MEMORY      <=    YES         JOB        0        0
> 
> 
> 
> *I've replaced disk_space with h_fsize in this test, but the nodes did
> have disk_space defined as a complex.
> 
> Arnau
> 



