If you can, remove all Logical File Members and then process the files.
Are you processing the Physical File in arrival (FIFO) sequence? Are you
updating key fields on some files but not others? Is journaling running
on some and not others? Are there any triggers, constraints, or other
DB-level controls?
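
For example, checks along these lines from a command line (the library,
file, and member names below are placeholders; adjust for your
environment):

  /* Which logical files are built over the PF? */
  DSPDBR FILE(MYLIB/BIGPF)

  /* Any triggers or constraints on the PF? */
  DSPFD FILE(MYLIB/BIGPF) TYPE(*TRG)
  DSPFD FILE(MYLIB/BIGPF) TYPE(*CST)

  /* Is the file journaled? (See the journaling section of *ATR.) */
  DSPFD FILE(MYLIB/BIGPF) TYPE(*ATR)

  /* Defer access-path maintenance on a dependent LF during the run; */
  /* the access path is rebuilt when the LF is next opened.          */
  CHGLF FILE(MYLIB/BIGLF01) MAINT(*REBLD)

  /* Or remove the LF member for the duration and add it back: */
  RMVM FILE(MYLIB/BIGLF01) MBR(BIGLF01)
  ADDLFM FILE(MYLIB/BIGLF01) MBR(BIGLF01)

Note that only access paths whose key or select/omit fields include the
column being changed have to be maintained on each update, so the mix of
dependent LFs can matter far more than their raw count.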

Chris Bipes
Director of Information Services
CrossCheck, Inc.

-----Original Message-----

We are updating a field (company code) in some very large physical
files. These PFs were created with DDS and are not journaled. We use
the same simple RPG program (just read and update) on all of them, and
the system environment is identical. There is one batch job for each
PF. Some jobs can update more than 5 million records per hour, some can
do 1 million per hour, while some can only do 100 thousand per hour. At
first I thought the reason might be the number of dependent logical
files, but then I found that a PF with 36 dependent LFs is 10 times
faster than a PF with 30 dependent LFs. Any clues for me?
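
For reference, a minimal sketch of the kind of read-and-update program
described above, in free-form RPG (the file, record format, and field
names are hypothetical):

**free
ctl-opt option(*srcstmt);

// BIGPF: hypothetical externally described PF with record format
// BIGPFR containing the character field COCODE (the company code).
dcl-f BIGPF usage(*update);

read BIGPFR;              // read first record in arrival sequence
dow not %eof(BIGPF);
  COCODE = 'NEWCO';       // assign the new company code
  update BIGPFR;          // rewrite the record just read
  read BIGPFR;            // read the next record
enddo;

*inlr = *on;
return;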

