[gridengine users] h_vmem consumable and mmap() files
stuartb at 4gh.net
Fri Mar 9 14:50:55 UTC 2012
We are running into an awkward problem with one of our users' jobs.
We have h_vmem set as a consumable resource and have it set to the
physical memory (minus a small amount) on the systems. These are
diskless systems and have no swap defined. This works well for most
of our user jobs in preventing over allocation of physical memory.
Our user job is using mmap(2) to map a large file into memory (it is
actually using the R bigmemory library). The entire file size gets
counted as virtual memory use even though only a small portion of the
file is actually accessed. This seems to be correct behavior, since
the mapped file is part of the process's virtual address space.
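To make the accounting issue concrete, here is a small sketch (in Python rather than R/bigmemory, and using a hypothetical 64 MiB sparse temp file in place of the user's data file): the moment the file is mapped, the entire file size is in the process's virtual address space, even if only one page is ever touched.

```python
import mmap
import os
import tempfile

# Hypothetical stand-in for the user's large data file: 64 MiB, sparse.
SIZE = 64 * 1024 * 1024

fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, SIZE)  # allocate the logical size without writing blocks
    with open(path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), 0)  # map the whole file
        # The full file size now counts toward the process's virtual
        # memory, which is what an h_vmem limit measures...
        mapped = len(mm)
        # ...even though we only ever touch a single 4 KiB page,
        # so resident (physical) memory stays tiny.
        page = mm[0:4096]
        mm.close()
finally:
    os.close(fd)
    os.remove(path)

print(mapped)
```

A job doing this must request h_vmem >= the file size to avoid being killed by the limit, regardless of how little physical memory it actually uses.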
The problem is that in reality the job only reads selected portions
of the file and uses just a small amount of physical memory.
However, the jobs must request a large h_vmem in order to run. This
prevents other jobs from using the available memory on the node.
A further complication is that this user is expecting this file to
eventually grow larger than physical memory and then the node h_vmem
will not be large enough for the job to even start.
Has anyone had to deal with this issue before?
Is there a better consumable to use instead of h_vmem? h_vmem seems
to be the only resource which actually applies an enforced memory
limit to jobs.
I have suggested that our user revisit the use of bigmemory for this
specific case and instead just directly read the necessary information
from the file. This should address both the immediate problem and the
problem as the file size grows.
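The direct-read alternative might look something like this (again a Python sketch with a hypothetical file and record offset, not the user's actual code): seek/read only the byte ranges the job needs, so virtual memory use tracks the buffers actually read rather than the file size.

```python
import os
import tempfile

# Hypothetical stand-in for the user's large data file: 64 MiB, sparse.
SIZE = 64 * 1024 * 1024

fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, SIZE)
    # Place a recognizable record in the middle of the file.
    os.pwrite(fd, b"record-42", 32 * 1024 * 1024)
    # Read back just that slice with pread(2); only these few bytes
    # are buffered in the process, so a small h_vmem request suffices
    # no matter how large the file grows.
    chunk = os.pread(fd, 9, 32 * 1024 * 1024)
finally:
    os.close(fd)
    os.remove(path)

print(chunk)
```

Unlike the mmap approach, this keeps working even once the file outgrows the physical memory of any single node.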
We do have other users who are also using mmap() for more practical
reasons so I expect that this problem may occur again for other users.
Thanks for any further suggestions,
I've never been lost; I was once bewildered for three days, but never lost!
-- Daniel Boone