[gridengine users] Hardware thoughts?

Tina Friedrich Tina.Friedrich at diamond.ac.uk
Wed Jul 20 15:39:25 UTC 2016


Hi Biggles,

My main cluster has about 200 nodes. I currently use a machine with 12 
cores and 12G of RAM as its qmaster, and that machine is frankly very 
bored. My old qmaster (for the same cluster) was, I believe, something 
with 4 cores and roughly 4G of RAM (might have been 8G), and that coped 
with no problems.
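
If you want to sanity-check that on your own qmaster, a quick look at 
the daemon in /proc tells you most of the story. A rough sketch (Python, 
Linux only, and assuming the process is called sge_qmaster - adjust if 
yours differs) might look something like this:

    import subprocess

    # Rough check of the qmaster's actual footprint (Linux only).
    # Assumes the daemon is called 'sge_qmaster'.
    pid = subprocess.check_output(["pidof", "sge_qmaster"]).split()[0].decode()

    # Print resident memory and thread count from the process status file.
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith(("VmRSS", "Threads")):
                print(line.strip())

Anything beyond that showing up as "used" memory is usually just the 
page cache doing its job.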

I have a second cluster with about 70 nodes, where I run the qmaster on 
a virtual machine with 8 cores and 8G of RAM; that one is likewise bored.

These two qmasters are shadow hosts for each other; I've run both on the 
virtual machine for some time and that didn't seem to tax it much either.
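
In case the shadow bit is useful context: it just means each qmaster 
host is also listed in the other cluster's 
$SGE_ROOT/$SGE_CELL/common/shadow_masters file and runs sge_shadowd, 
which keeps an eye on the qmaster heartbeat file and takes over if it 
goes stale. Purely to illustrate that heartbeat check, here is a small 
Python sketch - the /opt/sge fallback path, the mtime-based check and 
the 600 second threshold are my own simplifications, not shadowd's 
actual behaviour:

    import os, time

    # Illustrative only: roughly the condition sge_shadowd looks for - a
    # qmaster heartbeat file that has stopped being updated.
    # The SGE_ROOT/SGE_CELL fallbacks and 600 s threshold are assumptions.
    sge_root = os.environ.get("SGE_ROOT", "/opt/sge")
    sge_cell = os.environ.get("SGE_CELL", "default")
    heartbeat = os.path.join(sge_root, sge_cell, "common", "heartbeat")

    # How long since the qmaster last touched its heartbeat file?
    age = time.time() - os.path.getmtime(heartbeat)
    if age > 600:
        print("heartbeat %.0fs stale - a shadow would consider taking over" % age)
    else:
        print("heartbeat %.0fs old - qmaster looks alive" % age)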

So I'd be tempted to agree that the qmaster doesn't need to be a very 
high spec machine :)

I don't have a login node dedicated to the cluster, so I can't comment 
on that, nor do I have a storage server specific to the cluster. We do 
have a couple of general-purpose login nodes, which are quite high-spec 
machines, but they are for people to run desktop sessions and data 
processing on, so...

Tina

On 20/07/16 15:56, Notorious Biggles wrote:
> Hi all,
>
> I have some money available to replace the infrastructure nodes of one
> of my company's grid engine clusters and I wanted a sanity check before
> I order anything new.
>
> Initially we contacted the company we originally bought the cluster from
> and they quoted us for a combined login/storage/master node with loads
> of everything and a hefty price tag. I have an aversion to combining
> login nodes with storage and master nodes - we already have that on one
> of the clusters, and a user being able to crash the entire cluster seems
> like a bad thing to me; it happened often enough.
>
> I read Rayson's blog post about scaling grid engine to 10k nodes at
> http://blogs.scalablelogic.com/2012/11/running-10000-node-grid-engine-cluster.html
> and it seems that 4 cores and 1 GB of memory is more than enough to run
> a grid engine master. Given that I'd be lucky to have 100 nodes per
> master, can anybody see a reason to spec a high-powered master node? I
> look at my existing master nodes with 8+ cores and 24+ GB of memory and
> in Ganglia all I see is acres of green from memory being used as cache
> and buffers. It seems rather a waste.
>
> The other thing I was curious about is what kind of spec seems
> reasonable to you for a login node. My one cluster with separate login
> nodes has similar specs to the master nodes - 8 cores, 24 GB of memory -
> and it seems wasted. I can see an argument for these nodes to be more
> than just low-end boxes, especially if anybody is trying to do some kind
> of visualization on them, but I've had no complaints yet about them
> being under-powered.
>
> Any thoughts you might have are appreciated.
>
> Thanks
> Biggles
>
>


-- 
Tina Friedrich, Computer Systems Administrator, Diamond Light Source Ltd
Diamond House, Harwell Science and Innovation Campus - 01235 77 8442



