[gridengine users] User job fails silently

Derrick Lin klin938 at gmail.com
Wed Aug 8 23:26:34 UTC 2018


>  What state of the job do you see in this line? Is it just hanging there
doing nothing? Do the jobs not appear in `top`? And do they never vanish
automatically, so you have to kill them by hand?

Sorry for the confusion. The job state is "r" according to SGE but, as you
mentioned, the qstat output is not tied to any actual process.

The line I copied is what is shown in top/htop. So basically, all of his
jobs became:

`- -bash /opt/gridengine/default/spool/omega-6-20/job_scripts/1187671
`- -bash /opt/gridengine/default/spool/omega-6-20/job_scripts/1187677
`- -bash /opt/gridengine/default/spool/omega-6-20/job_scripts/1187690

Each of these scripts copies and untars a file to the local XFS file system,
then calls a python script to operate on the untarred files.

The job log shows that the untarring is done, but the python script never
started and the job process is stuck as shown above.
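From that description, the job script presumably has roughly this shape (a minimal runnable sketch; the tarball name, scratch location, and python step are assumptions, since the real script lives at the spool path above):

```shell
#!/bin/bash
# Sketch of a job script of the shape described above. All file names are
# made up for illustration; the input tarball is fabricated locally so the
# sketch runs end to end on its own.
set -euo pipefail

SCRATCH="$(mktemp -d)"                      # stand-in for the local XFS scratch

# Stand-in for "copy a file from shared storage": create a small tarball.
echo "payload" > "$SCRATCH/input.txt"
tar -czf "$SCRATCH/data.tar.gz" -C "$SCRATCH" input.txt
rm "$SCRATCH/input.txt"

# Untar onto local storage -- the job log shows this step completing.
tar -xzf "$SCRATCH/data.tar.gz" -C "$SCRATCH"
echo "untar done"

# The python step that, in the failing jobs, never starts. A real script
# would call something like: python process.py "$SCRATCH"
python3 -c "import sys; print('processing', sys.argv[1])" "$SCRATCH"
```

If the log's last line is the "untar done" equivalent, the hang sits between the `tar` exit and the `python` exec, which narrows down where to look.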

We don't see any storage related contention.

I am more interested in knowing where this process
bash /opt/gridengine/default/spool/omega-6-20/job_scripts/1187671 comes from?
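As a side note, next time one of these bash processes hangs, /proc can show what it is blocked on without killing it (the PID below is a placeholder; this sketch inspects its own shell so it runs as-is):

```shell
#!/bin/bash
# Inspect a (possibly stuck) process via /proc. Replace PID with the PID of
# the stuck job script; here we use our own shell so the sketch is runnable.
PID=$$

# Field 3 of /proc/PID/stat is the process state: R (running), S (sleeping),
# D (uninterruptible disk sleep -- typical for storage hangs), Z (zombie).
STATE=$(awk '{print $3}' "/proc/$PID/stat")
echo "state: $STATE"

# The kernel function the process is sleeping in, if any ("0" when runnable).
WCHAN=$(cat "/proc/$PID/wchan" 2>/dev/null || true)
echo "wchan: $WCHAN"

# To watch the blocking syscall live, one would attach with:
#   strace -p "$PID" -f
```

A state of "D" would point at the storage layer despite no visible contention; "S" with an empty strace suggests the shell is waiting on something else, such as a child or a read.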

Cheers,


On Wed, Aug 8, 2018 at 6:53 PM, Reuti <reuti at staff.uni-marburg.de> wrote:

>
> > Am 08.08.2018 um 08:15 schrieb Derrick Lin <klin938 at gmail.com>:
> >
> > Hi guys,
> >
> > I have a user who reported that his jobs were stuck, running for much
> longer than usual.
> >
> > So I went to the exec host to check the processes, and all processes
> owned by that user look like:
> >
> > `- -bash /opt/gridengine/default/spool/omega-6-20/job_scripts/1187671
>
> What state of the job do you see in this line? Is it just hanging there
> doing nothing? Do the jobs not appear in `top`? And do they never vanish
> automatically, so you have to kill them by hand?
>
>
> > In qstat, it still shows job is in running state.
>
> The `qstat` output is not really related to any running process. It's just
> what SGE granted and thinks is running, or has granted to run. Especially
> with parallel jobs across nodes, there might or might not be any process on
> one of the granted slave nodes.
>
>
> > The user resubmitted the jobs and they ran and completed without any
> problem.
>
> Could it be a race condition with the shared file system?
>
> -- Reuti
>
>
> > I am wondering what may have caused this situation in general?
> >
> > Cheers,
> > Derrick
> > _______________________________________________
> > users mailing list
> > users at gridengine.org
> > https://gridengine.org/mailman/listinfo/users
>
>