We have a job that writes about 1.5B records to a PF. When the job starts,
it does a CLRPFM and then starts dumping records into the PF.
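
For reference, the load step is essentially the following (library, file,
and program names here are made up for illustration):

    CLRPFM     FILE(MYLIB/BIGPF)
    CALL       PGM(MYLIB/LOADPGM)  /* writes ~1.5B records sequentially */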



We recently added an LF over the PF, with no changes to the original program
(the LF will be used later in the stream by something else), and the job went
from approx. 2 hrs to an estimated 11 hrs. We killed it after the normal
2-hour window with less than 20% of the records written.
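
The LF itself is nothing fancy; it was created along these lines (names are
placeholders, and MAINT was left at its default of *IMMED):

    CRTLF      FILE(MYLIB/BIGLF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(BIGLF) +
                 MAINT(*IMMED)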



While trying to determine why, one thing we noticed is that the job's I/O
count was different with and without the LF. Without the LF, the output
blocking factor is a pretty decent number of records (the I/O count is WAY
less than the RRN). However, with the LF in place, the I/O count and the RRN
are almost identical, as if blocking is being ignored. The LF is not uniquely
keyed, just a normal keyed logical.
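
For comparison, the blocking we are used to seeing is the normal sequential
output blocking, the sort of thing you could also force explicitly with an
override like this (the record count is only an example):

    OVRDBF     FILE(BIGPF) SEQONLY(*YES 1000)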



I've never noticed this before, but then I've never really paid attention to
the I/O count on output files. Is it normal for an LF to eliminate the
blocking factor on output and make it 1 for 1, or is there something we
muffed up in the creation of the LF?



I know we can change the LF to MAINT(*DLY) and things like that, but we just
want to understand what happened and why so we know for future reference.
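
(The *DLY we are referring to is the access path maintenance attribute on
the logical, i.e. something like the following, using the same placeholder
names as above:

    CHGLF      FILE(MYLIB/BIGLF) MAINT(*DLY)

but again, we are more interested in the why than in a workaround.)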



Thanks.

