[gridengine users] Requesting h_vmem or memfree
chekh at stanford.edu
Fri May 25 20:17:14 UTC 2012
On 05/25/2012 11:34 AM, Prentice Bisbal wrote:
> Okay, this is going to be a stupid question coming from someone who's been
> on this list for years, but here goes...
> I've just upgraded the RAM on a few cluster nodes to 32 GB (instead of
> 16 GB). A few users could benefit from this, so I'd like to be able to
> specify h_vmem or mem_free or s_vmem for jobs so that jobs requiring RAM
>> 16 GB go to this node. Digging through the Google results, it looks
> like I need to do the following:
> 1. Make whatever complex I choose to use requestable
> 2. Set a default value.
> 3. Profit.
Correct: make mem_free requestable and consumable, probably per job
rather than per slot, and set a default request value (qconf -mc).
Then add mem_free=XXG to the complex_values of each exec host
(qconf -me hostname), or e.g.
qconf -aattr exechost complex_values mem_free=32G node15
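As a sketch, the mem_free row in the complex list opened by `qconf -mc` might be edited along these lines (the JOB consumable type and default of 0 are suggestions matching the advice above, not settings confirmed in this thread):

```shell
# Columns in the qconf -mc table:
#   name      shortcut  type    relop  requestable  consumable  default  urgency
#
# A per-job consumable mem_free with a zero default might look like:
#
#   mem_free  mf        MEMORY  <=     YES          JOB         0        0
#
# JOB counts the request once per job rather than once per slot;
# default 0 means jobs that never request mem_free consume nothing.
```

After that, the `qconf -aattr` command above publishes 32G of the resource on the upgraded node.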
> It's step 2 that worries me. Must I set a default? My assumption is that
> if I do, that default will be used for ALL jobs. Is that correct? I'm
> worried because something tells me that no matter what default I use,
> it's going to break something, somewhere for someone. Is it possible to
> set default = NONE?
If you set it to 0, it's the same as not having it enabled at all, no?
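With a default of 0, only jobs that explicitly ask for memory are affected; a user who wants the 32 GB node would submit something like this (the script name and 24G figure are just examples):

```shell
# Request 24 GB of mem_free; only hosts whose remaining consumable
# (hc:mem_free) is at least 24G -- i.e. the upgraded 32 GB nodes --
# are candidates at scheduling time.
qsub -l mem_free=24G bigmem_job.sh

# Jobs submitted without -l mem_free fall back to the default of 0,
# so they schedule exactly as before and consume none of the resource.
```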
> Second question, is which resource is the best to use for this? h_vmem,
> s_vmem, or mem_free. I know mem_free is more of a "hint", and doesn't
> guarantee anything, and h_vmem will kill a job if the submitter guesses wrong.
> I've never seen anyone even mention s_vmem for this. Is there a reason
> for that?
I'm not familiar with s_vmem. h_vmem maps directly to "ulimit -v" under
Linux. mem_free is used only at scheduling time; the scheduler will use
the lower of the consumable complex mem_free or the actual mem_free of
the host. The former is the hc:mem_free in the output of 'qstat -F', the
latter is mem_free in 'qconf -se'.
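To watch the two values being compared, one can inspect the hosts directly (the host name is an example):

```shell
# Consumable bookkeeping value (shown as hc:mem_free) per queue instance:
qstat -F mem_free

# Load value reported by the execd, filtered to one host and resource:
qhost -h node15 -F mem_free

# Exec host configuration, including complex_values (mem_free=32G):
qconf -se node15
```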
Alex Chekholko chekh at stanford.edu