[gridengine users] Doubts regarding the h_vmem allocation while submitting the job in parallel environment

Skylar Thompson skylar2 at u.washington.edu
Tue Apr 21 13:53:19 UTC 2015


Hi Sudha,

mem_free and h_vmem are standard consumable resources[1], so they scale by
slot. If you submit a five-slot job that requests 1GB of address space
(-l h_vmem=1G), each slot is granted 1GB of allowed address space, for 5GB
in total.
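
For example, a submission along these lines (the parallel environment name
"mpi" and the script "job.sh" are hypothetical placeholders):

    qsub -pe mpi 5 -l h_vmem=1G job.sh

allows the job 5 x 1GB = 5GB of address space overall; on each host the
limit is 1GB times the number of slots allocated there. Conversely, if you
know the whole job needs roughly 20GB, divide by the slot count and request
-l h_vmem=4G for a five-slot job.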

[1] You can check the consumable type with "qconf -sc" and look at the
consumable column. YES means the request is multiplied by the number of
slots granted, JOB means it is charged once for the entire job, and HOST
means once per node the job runs on.
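
For instance, on a cluster where h_vmem has been configured as a per-slot
consumable, the relevant line of "qconf -sc" output might look like this
(columns trimmed for readability; exact values depend on your site's
configuration):

    #name    shortcut  type    relop  requestable  consumable  default
    h_vmem   h_vmem    MEMORY  <=     YES          YES         0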

On Tue, Apr 21, 2015 at 12:01:04PM +0000, sudha.penmetsa at wipro.com wrote:
> Hi,
> 
> Could you please explain how we should allocate values for mem_free and h_vmem when submitting a job in a parallel environment.
> 
> It would also be helpful if you could explain how to calculate the value to allocate to h_vmem according to the slot count requested with -pe parallelenvironmentname slots.
> 
> Regards,
> Sudha



-- 
-- Skylar Thompson (skylar2 at u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


