So, did you first get the total number of records and then split the RRNs
10 ways? Or did you just allot a certain number of records per instance,
and when you had covered the last RRN you were done?
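The first approach, getting the total record count up front and dividing the RRN range N ways, can be sketched as below. This is a hypothetical illustration in Python, not code from the thread; the function name and the 10-million-record figure (borrowed from Peter's note) are just for the example.

```python
# Hypothetical sketch: split a file's RRN (relative record number) range
# into N contiguous chunks, one per parallel job instance.

def rrn_ranges(total_records: int, jobs: int):
    """Return 1-based inclusive (start_rrn, end_rrn) pairs, one per job."""
    base, extra = divmod(total_records, jobs)
    ranges = []
    start = 1
    for i in range(jobs):
        # Spread any remainder across the first `extra` jobs so the
        # chunk sizes differ by at most one record.
        size = base + (1 if i < extra else 0)
        if size == 0:
            continue  # more jobs than records; skip empty chunks
        end = start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Example: 10 million records split across 10 jobs
for start_rrn, end_rrn in rrn_ranges(10_000_000, 10):
    print(start_rrn, end_rrn)
```

Each pair would then be passed to one submitted job as its start/end parameters, so every instance positions to its own starting RRN and stops after its last one.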
Also, why not drive the CPU hard? If there is any part of the system
that can handle a consistently high load, it should be the CPU. Rev that
baby up. Idle cycles are lost cycles... you can't ever get them back, so
you might as well use them. It would be interesting to see how a nightly
batched MRP run would work if this approach were taken. I wonder if we
could take it from 2 hours down to 20 minutes....
Thanks
Bryce Martin
Programmer/Analyst I
570-546-4777
"Lennon_s_j@xxxxxxxxxxx" <lennon_s_j@xxxxxxxxxxx>
Sent by: rpg400-l-bounces@xxxxxxxxxxxx
06/04/2010 06:29 PM
Please respond to
RPG programming on the IBM i / System i <rpg400-l@xxxxxxxxxxxx>
To
rpg400-l@xxxxxxxxxxxx
cc
Subject
Re: Speed in Reading
Yes, I've done that too with good results, but in my case I was reading
a significantly large transaction file and updating many summary files.
Splitting up the transaction file by RRN and processing between 7 and 10
streams in parallel significantly reduced the elapsed time, but it did
drive the CPU hard.
Sam
On 6/4/2010 5:54 PM, Peter Connell wrote:
Kurt,
While there are undoubtedly horses for courses, I have been tasked with
data mining jobs over the last year where 10 million or more records
drive the process, each of which may itself generate scores of other
reads. I've found that submitting up to 10 simultaneous jobs, each of
which accepts parameters specifying which portion of the input file
drives it, has yielded exceptional performance. This does drive the CPU
right up, but it permits huge volumes to be processed overnight.
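The driver side of this pattern, one parent that carves the input into portions and launches one worker per portion with its range as parameters, might look like the sketch below. This is a hedged Python illustration: on IBM i the workers would be separately submitted jobs (e.g. via SBMJOB), and the worker body here is a stand-in, not real record processing.

```python
# Hypothetical sketch of the driver pattern: divide the input into N
# portions and run one worker per portion in parallel, passing each
# worker its start/end record numbers. Threads stand in for submitted
# batch jobs; the worker just counts its records.
from concurrent.futures import ThreadPoolExecutor

def worker(start: int, end: int) -> int:
    # Stand-in for a job that reads records start..end by RRN.
    return end - start + 1  # pretend we processed this many records

def run_parallel(total_records: int, jobs: int = 10) -> int:
    chunk = -(-total_records // jobs)  # ceiling division
    args = [(i * chunk + 1, min((i + 1) * chunk, total_records))
            for i in range(jobs) if i * chunk < total_records]
    starts, ends = zip(*args)
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return sum(pool.map(worker, starts, ends))
```

The return value lets the driver verify that the portions together covered every record exactly once before trusting the overnight totals.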
Peter