Let's get back to the actual problem at hand (aside from general sluggishness):

Would anybody care to hazard a guess about how much time the following SHOULD take:

On a healthy 550, with 4516G in 53 drives, and a healthy amount of main storage (WRKHDWRSC shows four 4G main storage cards, but the box has 3 LPARs), a non-Cycle RPG program runs through a file of 132,369 records.

For each of these records, if the record is of type "customer" and a corresponding record can be found in a 226,368-record customer master file, the program SETLLs to that customer's first record in a 2.2M-record "A/R journal by customer number" logical file, then runs through all of that customer's records in that file (only the customer's records; remember, the customer number is the leading key). For each of those records, the program adds several numbers into digest fields in our controlling file, then runs through the current invoice's detail records in a 14.8M-record "billing detail" file to see whether anything should be backed out of the digest fields.
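In case it helps to picture the access pattern, that first loop is just the classic SETLL/READE nesting. Here's a free-form sketch with made-up names (ARJRNL for the A/R journal logical, BILDTL for billing detail, custNo/invNo for the keys, jrAmt/dtAmt for the amounts); declarations are omitted and this is NOT our actual code:

    // Position to this customer's first A/R journal record,
    // then read only records whose leading key matches custNo.
    setll custNo arjrnl;
    reade custNo arjrnl;
    dow not %eof(arjrnl);

       // Accumulate several amounts into the controlling
       // file's digest fields.
       totChg += jrAmt;

       // Walk the current invoice's billing detail records
       // to see whether anything should be backed out.
       setll invNo bildtl;
       reade invNo bildtl;
       dow not %eof(bildtl);
          if dtBackout = 'Y';      // hypothetical back-out flag
             totChg -= dtAmt;
          endif;
          reade invNo bildtl;
       enddo;

       reade custNo arjrnl;
    enddo;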

After this, and before going on to the next customer, the program goes through the customer's records in two tracking files: one of 2.2M records, 314 fields wide; the other of 714k records, about 400 fields wide. Both are logicals in which the customer number is the leading key, with the 3 fields on which we're digesting the data as the next 3 keys. Using the customer's records in these two files, the program builds up records for that customer in a digest file, checking them against a logical over the same 14.8M-record "billing detail" file to determine whether to increment a counter.
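Sketched the same way (again, all names hypothetical: TRACK1 for one of the tracking logicals, DIGEST for the digest file with record format digRec, BILDTL2 for the billing detail logical, key1-key3 for the 3 digesting fields, and an existence check standing in for whatever the real counter test is), the digesting pass over one tracking file looks roughly like:

    // Read only this customer's records from the tracking
    // logical (keys: custNo, then key1/key2/key3).
    setll custNo track1;
    reade custNo track1;
    dow not %eof(track1);

       // Fetch the digest record for this key combination,
       // or start a fresh one if it doesn't exist yet.
       chain (custNo: key1: key2: key3) digest;
       if not %found(digest);
          clear digRec;
          dgCust = custNo;
          dgKey1 = key1;
          dgKey2 = key2;
          dgKey3 = key3;
       endif;

       // Check the billing detail logical to decide whether
       // to increment the counter.
       setll (custNo: key1: key2: key3) bildtl2;
       if %equal(bildtl2);
          dgCount += 1;
       endif;

       if %found(digest);
          update digRec;
       else;
          write digRec;
       endif;

       reade custNo track1;
    enddo;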

While we are currently rebuilding that digest file from scratch every time, there is no evidence that we ever visit any of these 19M records more than once (or twice, in the case of any "billing detail" records that are checked in both the first major step and one of the two "digesting" steps). If I were to change the algorithm to avoid the scratch rebuilds, I might reduce the number of records visited by 75%.
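(Quick arithmetic on that: a 75% reduction would take the roughly 19M record visits per run down to about 19M x 0.25 ≈ 4.75M.)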

At any rate, given the hardware specified, how long SHOULD a non-Cycle RPG program take to visit some 19M records across 7 files (counting both the billing detail PF and the LF we're using), updating most of the records in the controlling file and writing digest records to an 8th file?
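To frame what an answer might look like: elapsed time ought to be on the order of (records visited) / (keyed reads per second the box sustains), plus the update/write overhead. The throughputs here are pure illustration, not measurements: at 10,000 keyed reads per second, 19M visits would be roughly 1,900 seconds (a bit over half an hour); at 50,000 per second, about 380 seconds (a little over 6 minutes).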

--
JHHL
