[gridengine users] Split process between multiple nodes.

Guillermo Marco Puche guillermo.marco at sistemasgenomicos.com
Thu Oct 25 17:36:25 UTC 2012


Hello,

I've no idea who compiled the application. I just found on the seqanswers
forum that pBWA is a nice speed-up over the original BWA, since it
natively supports Open MPI.

As you suggested, I'll look further into how to compile Open MPI with SGE
support. If anyone knows a good introduction/tutorial on this, it would be
appreciated.
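
From what I've read so far, the build seems to be roughly this (only a
sketch on my side; the install prefix is just a placeholder):

  ./configure --prefix=/opt/openmpi --with-sge
  make
  make install

Afterwards, "ompi_info | grep gridengine" should list the gridengine
component.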

Then I'll try to run it with my current version of Open MPI and update
if needed.

Thanks.

Best regards,
Guillermo.

On 25/10/2012 18:53, Reuti wrote:
> Please keep the list posted, so that others can participate in the discussion. I'm not aware of this application, but maybe someone else on the list could be of broader help.
>
> Again: who compiled the application? I can only see the source at the site you posted.
>
> -- Reuti
>
>
> On 25.10.2012, at 13:23, Guillermo Marco Puche wrote:
>
>> $ ompi_info | grep grid
>>
>> Returns nothing. Like I said, I'm a newbie to MPI.
>> I didn't know that I had to compile anything. I have a Rocks installation out of the box.
>> So MPI is installed, but nothing more, I guess.
>>
>> I've found an old thread in Rocks discuss list:
>> https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2012-April/057303.html
>>
>>
>> The user who asked is using this script:
>>
>>   #$ -S /bin/bash
>>   #
>>   #
>>   # Export all environment variables
>>   #$ -V
>>   # Specify the PE and core count
>>   #$ -pe mpi 128
>>   # Customize the job name
>>   #$ -N job_hpl_2.0
>>   # Use current working directory
>>   #$ -cwd
>>   # Join stdout and stderr into one file
>>   #$ -j y
>>   # The mpirun command; note the lack of host names, as SGE will provide them on the fly.
>>   mpirun -np $NSLOTS ./xhpl >> xhpl.out
>>
>>
>>
>> But then I read this:
>>
>>
>> In Rocks' SGE PEs:
>> mpi is loosely integrated;
>> mpich and orte are tightly integrated.
>> The qsub arguments required differ between mpi/mpich and orte.
>>
>> mpi and mpich need a machinefile.
>>
>> By default,
>> mpi and mpich are for MPICH2;
>> orte is for Open MPI.
>> regards
>> -LT
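>>
>> If I understand that correctly, the practical difference inside a jobscript is roughly this (only a sketch on my side; it assumes the stock Rocks PE names and that the mpi PE's start script writes $TMPDIR/machines):
>>
>>   # Tight integration (orte PE): an SGE-aware Open MPI gets the host list from SGE itself.
>>   #$ -pe orte 16
>>   mpirun -np $NSLOTS ./my_mpi_app
>>
>>   # Loose integration (mpi PE): the machinefile prepared by the PE must be passed explicitly.
>>   #$ -pe mpi 16
>>   mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./my_mpi_app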
>>
>>
>> The program I need to run is pBWA:
>>   http://pbwa.sourceforge.net/
>>
>> It uses MPI.
>>
>> At this moment I'm somewhat confused about what the next step is.
>>
>> I thought I could just run pBWA with multiple processes via MPI and a simple SGE job.
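>>
>> Something like this is what I had in mind for pBWA (a minimal sketch; the PE name, slot count and pBWA arguments are only placeholders on my side):
>>
>>   #$ -S /bin/bash
>>   #$ -V
>>   #$ -cwd
>>   #$ -j y
>>   #$ -N pbwa_aln
>>   #$ -pe orte 16
>>   mpirun -np $NSLOTS pBWA aln database.fasta reads.fastq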
>>
>> Regards,
>> Guillermo.
>>
>>
>> On 25/10/2012 13:17, Reuti wrote:
>>> On 25.10.2012, at 13:11, Guillermo Marco Puche wrote:
>>>
>>>
>>>> Hello Reuti,
>>>>
>>>> I'm stuck here. I've no idea which MPI library I've got. I'm using Rocks Cluster 5.4.3 (Viper), which ships with CentOS 5.6, SGE, SPM, Open MPI and MPI.
>>>>
>>>> How can I check which library I have installed?
>>>>
>>>> I found this:
>>>>
>>>> $ mpirun -V
>>>> mpirun (Open MPI) 1.4.3
>>>>
>>>> Report bugs to
>>>>
>>>> http://www.open-mpi.org/community/help/
>>> Good, and is this also the one you used to compile the application?
>>>
>>> To check whether Open MPI was built with SGE support:
>>>
>>> $ ompi_info | grep grid
>>>                   MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.2)
>>>
>>> -- Reuti
>>>
>>>
>>>
>>>> Thanks,
>>>>
>>>> Best regards,
>>>> Guillermo.
>>>>
>>>> On 25/10/2012 13:05, Reuti wrote:
>>>>
>>>>> On 25.10.2012, at 10:37, Guillermo Marco Puche wrote:
>>>>>
>>>>>
>>>>>
>>>>>> Hello !
>>>>>>
>>>>>> I found a new version of my tool which supports multi-threading, but also MPI or Open MPI for additional processes.
>>>>>>
>>>>>> I'm fairly new to MPI with SGE. What would be the right qsub command, or configuration inside a job file, to ask SGE to run 2 MPI processes?
>>>>>>
>>>>>> Will the following code work in an SGE job file?
>>>>>>
>>>>>> #$ -pe mpi 2
>>>>>>
>>>>>> That's supposed to make the job run with 2 processes instead of 1.
>>>>>>
>>>>>>
>>>>> Not out of the box: it will grant 2 slots for the job according to the allocation rule of the PE. But how you start your application inside the granted allocation in the jobscript is up to you. Fortunately, MPI libraries nowadays offer an (almost) automatic integration into queuing systems without further user intervention.
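>>>>>
>>>>> For illustration, a PE set up for a tight Open MPI integration usually looks something like this (only a sketch; the PE name and slot count are examples):
>>>>>
>>>>>   $ qconf -sp orte
>>>>>   pe_name            orte
>>>>>   slots              9999
>>>>>   user_lists         NONE
>>>>>   xuser_lists        NONE
>>>>>   start_proc_args    /bin/true
>>>>>   stop_proc_args     /bin/true
>>>>>   allocation_rule    $fill_up
>>>>>   control_slaves     TRUE
>>>>>   job_is_first_task  FALSE
>>>>>   urgency_slots      min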
>>>>>
>>>>> Which of the MPI libraries mentioned above do you use to compile your application?
>>>>>
>>>>> -- Reuti
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Regards,
>>>>>> Guillermo.
>>>>>>
>>>>>> On 22/10/2012 17:19, Reuti wrote:
>>>>>>
>>>>>>
>>>>>>> On 22.10.2012, at 16:31, Guillermo Marco Puche wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> I'm using a program where I can specify the number of threads I want to use.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>> Only threads and not additional processes? Then you are limited to one node, unless you add something like http://www.kerrighed.org/wiki/index.php/Main_Page or http://www.scalemp.com to get a cluster-wide unique process and memory space.
>>>>>>>
>>>>>>> -- Reuti
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> I'm able to launch multiple instances of that tool on separate nodes.
>>>>>>>> For example: job_process_00 on compute-0-0, job_process_01 on compute-1, etc. Each job calls that program, which splits into 8 threads (each of my nodes has 8 CPUs).
>>>>>>>>
>>>>>>>> When I set it up with 16 threads, I can't split them into 8 threads per node. So I would like to split them between 2 compute nodes.
>>>>>>>>
>>>>>>>> Currently I have 4 compute nodes, and I would like to speed up the process by running my program with 16 threads split between more than one compute node. At this moment I'm stuck using only 1 compute node per process, with 8 threads.
>>>>>>>>
>>>>>>>> Thank you !
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Guillermo.



