[gridengine users] define h_vmem via RQS

mahbube rustaee rustaee at gmail.com
Sun Nov 20 09:45:27 UTC 2011


On Sat, Nov 19, 2011 at 4:04 PM, Reuti <reuti at staff.uni-marburg.de> wrote:

> Hi,
>
> Am 19.11.2011 um 10:43 schrieb mahbube rustaee:
>
> > h_vmem is defined as a consumable resource like this:
> > h_vmem              h_vmem     MEMORY      <=    YES         YES         10G      0
> > and I'd like to define a limit value for it on every host.
> > How can I set a value for the h_vmem parameter on every host without
> > configuring each one individually?
>
> Per job:
>
> The limits can be set at the queue level (for all hosts inside via the
> default value, per hostgroup, or per machine) and will be enforced.
>
> The RQS will only check against what the user requested at job submission.
> If they don't request h_vmem, no limit will be in effect.
>
> You could define the complex as FORCED to force the user to specify it at
> submission time. The request will be checked against the RQS and applied
> (as long as it's lower than the one in the queue configuration). So, if
> it's FORCED, it's not necessary to specify it at a queue level, as the user
> request will override it there anyway.
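
If I understand correctly, the FORCED setup would look roughly like this
(illustrative values, edited via `qconf -mc`):

#name    shortcut  type    relop  requestable  consumable  default  urgency
h_vmem   h_vmem    MEMORY  <=     FORCED       YES         10G      0

and every submission then has to carry an explicit request, e.g.
`qsub -l h_vmem=4G job.sh`.
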
>
>
> Per exechost:
>
> You need to give SGE something from which the h_vmem can be subtracted.
> I.e. it needs to be defined per exechost:
>
> $ qconf -se node01
> ...
> complex_values h_vmem=999G
>
> You can set an arbitrary value here, as long as the RQS reflects the real
> one. The other option would be to specify slots and h_vmem directly
> according to the built-in cores and memory. But it's usually a one-time
> limit across each machine:
>
1) With the first option, does the following definition limit h_vmem for the hosts?
 limit        hosts {@amd} to h_vmem=100G

or does
limit users {*} hosts {@amd} to h_vmem=100G
limit h_vmem for the hosts?

2) Are the two definitions above different when it comes to limiting h_vmem
per host in the @amd hostgroup?
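
For my own clarity, here is how I read the two variants side by side (based on
my understanding of sge_resource_quota(5); please correct me if the brace
semantics are wrong):

{
   name         vmem-per-amd-host
   description  100G h_vmem in total on each host of @amd
   enabled      TRUE
   limit        hosts {@amd} to h_vmem=100G
}

{
   name         vmem-per-user-per-amd-host
   description  100G h_vmem per user on each host of @amd
   enabled      TRUE
   limit        users {*} hosts {@amd} to h_vmem=100G
}

i.e. the first would cap the sum of all requests on one host at 100G, while the
second would cap each user separately at 100G on each host.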

>
> $ for NODE in $(seq -w 1 20); do qconf -mattr exechost complex_values
> h_vmem=50G node$NODE; done
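
As a variant of that loop (for the second option above, i.e. matching the
built-in cores and memory; node name and sizes assumed here), slots and h_vmem
could also be set together per host:

$ qconf -mattr exechost complex_values slots=24,h_vmem=64G node01

repeated per machine, or wrapped in the same kind of loop.
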
>
>
> It's a matter of whether you'd like the output to show up in `qquota`, or
> would like to have these limits applied and use the `qquota` output only
> for user/group/usage limits.
>
> -- Reuti
>
>
> > Thx
> >
> > On Sat, Nov 19, 2011 at 12:17 PM, William Hay <w.hay at ucl.ac.uk> wrote:
> > On 19 November 2011 05:03, mahbube rustaee <rustaee at gmail.com> wrote:
> > > Hi,
> > >
> > > I define slots of all hosts with:
> > > {
> > >    name         limit-slots-of-hosts
> > >    description  limits slots of the cluster's hosts
> > >    enabled      TRUE
> > >    limit        hosts {@gpu} to slots=48
> > >    limit        hosts {@xeon} to slots=24
> > >    limit        hosts {@amd} to slots=48
> > > }
> > > And it works correctly.
> > >
> > > I defined a RQS to set h_vmem to all hosts :
> > > {
> > >    name         limit-vmem-of-all-hosts
> > >    description  limits vmem of all hosts
> > >    enabled      TRUE
> > >    limit        hosts {@gpu} to h_vmem=50G
> > >    limit        hosts {@xeon} to h_vmem=20G
> > >    limit        hosts {@amd} to h_vmem=100G
> > > }
> > >
> > > but it doesn't work! Why?
> > I'm not sure exactly what "doesn't work" means, and resource quotas aren't
> > my strong point (for my usage, complex_values works better), but this is a
> > limit on a requested resource, not on actual memory usage. If the user
> > didn't request h_vmem, then the quota wouldn't apply and no limit would
> > be enforced.
> > It might help to make h_vmem consumable as well.
> >
> > William
> >
> >
> > >
> > > Thx
> > >
> > >
> >
>
>