Well, how to proceed depends on knowing:
1. How the jobs are submitted.
2. What the jobs are doing.
3. What application design/programming resources are available.
We use prestart jobs to handle incoming requests from Windows servers;
they are much faster than a batch job that processes only one
transaction. (Job create/start/cleanup can take far more CPU than
processing a single transaction.) We also have data queues that other
Windows servers submit to, with multiple server jobs pulling
transactions off and processing them. We compare each entry's time on
queue to the current time; if an entry has been on the queue for more
than x amount of time, we start another server processor. (Queue
transactions do not necessarily need a response, while the prestart
transactions need a real-time response.)
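The scale-up rule described above (start another server job when the
oldest entry has been on the queue too long) can be sketched in plain
Python. This is only an illustration of the pattern, not IBM i data
queue code; the names MAX_WAIT, MAX_WORKERS, and handle are
hypothetical, and peeking at queue.Queue internals stands in for
reading a data queue entry's timestamp.

```python
# Hypothetical sketch: producers put (timestamp, payload) entries on a
# queue; a monitor starts another worker whenever the head entry has
# waited longer than MAX_WAIT seconds.
import queue
import threading
import time

MAX_WAIT = 2.0          # start another server job past this wait (seconds)
MAX_WORKERS = 8         # cap on concurrent server jobs
work = queue.Queue()
workers = []

def handle(payload):
    time.sleep(0.05)    # stand-in for real transaction processing

def server_job():
    # One "server processor": pull entries and process them forever.
    while True:
        enqueued_at, payload = work.get()
        handle(payload)
        work.task_done()

def start_worker():
    t = threading.Thread(target=server_job, daemon=True)
    t.start()
    workers.append(t)

def monitor_once():
    # Peek at the oldest entry; if it has waited longer than MAX_WAIT
    # and we are under the cap, bring up one more server job.
    with work.mutex:
        oldest = work.queue[0][0] if work.queue else None
    if (oldest is not None and time.time() - oldest > MAX_WAIT
            and len(workers) < MAX_WORKERS):
        start_worker()

start_worker()                           # one server job to begin with
for i in range(20):
    work.put((time.time() - 10, i))      # backdated: simulate a backlog
monitor_once()                           # backlog too old -> add a worker
work.join()
print("server jobs running:", len(workers))
```

In a real deployment the monitor would run on its own timer rather
than once, and could also retire workers when the queue stays empty.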
Chris Bipes
Director of Information Services
CrossCheck, Inc.
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Wilt, Charles
Sent: Thursday, November 01, 2007 2:10 PM
To: Midrange Systems Technical Discussion
Subject: RE: Quantify processing wasted by lots of little jobs
<snip>
Probably the question you want to ask is how long are users
waiting and are other things not getting done?
Well, all the jobs are submitted to a single-threaded queue... (single
threaded due to a design limitation).
Right now, there are 1800+ jobs waiting in the queue to run. CPU usage
is at 100%.
Seems like the CPU time spent creating the separate jobs would be
better spent running the processing that's actually needed.
Ideally, I'd like to have the process use data queues and multiple
NEPs to handle the processing. But it would be a bigger job to allow
multiple processes to run at the same time, if that's even possible
(the limitation is not an iSeries application design issue; an AIX box
is involved).
I seem to remember an article with an example program that used data
queues and automatically increased or decreased the number of NEP jobs
running. Anybody got a link to that?