[gridengine users] MPI jobs spanning several nodes and h_vmem limits

Dave Love d.love at liverpool.ac.uk
Wed Mar 6 18:33:45 UTC 2013

Reuti <reuti at staff.uni-marburg.de> writes:

>> I can't reproduce that (with openmpi tight integration).  Doing this
>> (which gets three four-core nodes):
>>  qsub -pe openmpi 12 -l h_vmem=256M
>>  echo "Script $(hostname): $TMPDIR $NSLOTS"
>>  ulimit -v
>>  for HOST in $(tail -n +2 $PE_HOSTFILE|cut -f1 -d' '); do
>>      qrsh -inherit $HOST 'echo "Call $(hostname): $TMPDIR $NSLOTS"; ulimit -v;
>>      sleep 60' &
>>  done
>>  wait
> Great, then you fixed it already in the current version.

I'm puzzled, because I don't recall a change in that area and I'd have
expected to have noticed it with 6.2u5 in the past, but I'm happy.
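For what it's worth, the `tail -n +2 | cut` idiom in the quoted script is
just extracting the slave hostnames from `$PE_HOSTFILE`, which lists one
"host slots queue processors" line per node with the master host first.
A standalone sketch of that step, using a fabricated hostfile (the
hostnames and queue names below are illustrative, not from a real
cluster):

```shell
# Build a PE_HOSTFILE-style file: master node first, then the slaves.
# Contents are made up for illustration.
cat > hostfile <<'EOF'
node01 4 all.q@node01 <NULL>
node02 4 all.q@node02 <NULL>
node03 4 all.q@node03 <NULL>
EOF

# Skip the first line (the master, where the job script itself runs)
# and keep only the first whitespace-delimited field: the hostname.
for HOST in $(tail -n +2 hostfile | cut -f1 -d' '); do
    echo "slave: $HOST"
done
```

In the real job those hostnames feed `qrsh -inherit`, so each slave's
shell is started under the same resource limits, which is why checking
`ulimit -v` on every host shows whether h_vmem was propagated.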

Community Grid Engine:  http://arc.liv.ac.uk/SGE/
