[gridengine users] Doubts regarding the h_vmem allocation

sudha.penmetsa at wipro.com sudha.penmetsa at wipro.com
Thu Apr 23 15:04:22 UTC 2015


Hi,

We can see the total h_vmem for a grid execution host using

qconf -se nodeA | grep h_vmem

Could you please help me find the available h_vmem on a grid execution host?
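For reference, I have been trying something like the commands below, assuming h_vmem is set up as a consumable complex on the host (nodeA is just the example host from above), but I am not sure whether they report the remaining amount correctly:

# per-host view of the resource
qhost -F h_vmem -h nodeA

# per-queue-instance view; consumable values show up as "hc:" lines
qstat -F h_vmem -q '*@nodeA'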

Regards,
Sudha

-----Original Message-----
From: users-bounces at gridengine.org [mailto:users-bounces at gridengine.org] On Behalf Of users-request at gridengine.org
Sent: Wednesday, April 22, 2015 5:30 PM
To: users at gridengine.org
Subject: users Digest, Vol 52, Issue 22

Send users mailing list submissions to
        users at gridengine.org

To subscribe or unsubscribe via the World Wide Web, visit
        https://gridengine.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
        users-request at gridengine.org

You can reach the person managing the list at
        users-owner at gridengine.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of users digest..."


Today's Topics:

   1. Doubts regarding the h_vmem allocation while submitting the
      job in parallel environment (sudha.penmetsa at wipro.com)
   2. Re: Doubts regarding the h_vmem allocation while submitting
      the job in parallel environment (Skylar Thompson)


----------------------------------------------------------------------

Message: 1
Date: Tue, 21 Apr 2015 12:01:04 +0000
From: <sudha.penmetsa at wipro.com>
To: <users at gridengine.org>
Subject: [gridengine users] Doubts regarding the h_vmem allocation
        while submitting the job in parallel environment
Message-ID:
        <SIXPR03MB060709F0EE4C15D5645670D896EF0 at SIXPR03MB0607.apcprd03.prod.outlook.com>

Content-Type: text/plain; charset="us-ascii"

Hi,

Could you please explain how we should allocate values for mem_free and h_vmem when submitting a job in a parallel environment?

It would also be helpful if you could explain how to calculate the value to allocate to h_vmem according to the number of slots requested with -pe parallelenvironmentname slots.

Regards,
Sudha

------------------------------

Message: 2
Date: Tue, 21 Apr 2015 06:53:19 -0700
From: Skylar Thompson <skylar2 at u.washington.edu>
To: users at gridengine.org
Subject: Re: [gridengine users] Doubts regarding the h_vmem allocation
        while submitting the job in parallel environment
Message-ID: <20150421135319.GA4909 at utumno.gs.washington.edu>
Content-Type: text/plain; charset=us-ascii

Hi Sudha,

mem_free and h_vmem are standard consumable resources[1], so they scale by slot. If you have a five-slot job that requests 1GB of address space (-l h_vmem=1G), then each slot will get 1GB of allowed address space, for 5GB total.
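As a concrete sketch (the PE name "mpi" and the script "myjob.sh" are just placeholders for whatever you actually use):

# 5 slots x 1GB per slot = 5GB total address space for the job
qsub -pe mpi 5 -l h_vmem=1G,mem_free=1G myjob.sh

In general, to give an N-slot job a total budget of T, request h_vmem=T/N per slot. For example, 20GB total across 8 slots:

qsub -pe mpi 8 -l h_vmem=2.5G myjob.sh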

[1] You can check the consumable type with "qconf -sc" by looking at the consumable column: YES means the request is multiplied by the slot count, JOB means it is charged once to the entire job, and HOST means it is charged once per node the job runs on.
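For example, to check just these two complexes (exact column layout can vary between Grid Engine versions):

qconf -sc | grep -E '^#|h_vmem|mem_free'

The '^#' pattern keeps the header row so the consumable column is easy to pick out.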

On Tue, Apr 21, 2015 at 12:01:04PM +0000, sudha.penmetsa at wipro.com wrote:
> Hi,
>
> Could you please explain how we should allocate values for mem_free and h_vmem when submitting a job in a parallel environment?
>
> It would also be helpful if you could explain how to calculate the value to allocate to h_vmem according to the number of slots requested with -pe parallelenvironmentname slots.
>
> Regards,
> Sudha

> _______________________________________________
> users mailing list
> users at gridengine.org
> https://gridengine.org/mailman/listinfo/users


--
-- Skylar Thompson (skylar2 at u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


------------------------------

_______________________________________________
users mailing list
users at gridengine.org
https://gridengine.org/mailman/listinfo/users


End of users Digest, Vol 52, Issue 22
*************************************



