I am in complete agreement with Vern here. I would dearly love to see documentation of how using DDL would improve the situation at all, because if it did (we can hope!) it would provide yet another, and important, reason to dump DDS for DDL!

- DrF

On 4/23/2022 8:41 AM, Vern Hamberg via MIDRANGE-L wrote:
Hi Vance

Now _that_ is a statement that begs for more information! Can you give more detail on the benefit of going with DDL? If you are thinking of what happens when updating a PF that has LFs over it, I believe that SQL indexes over the table would have the same issue of being maintained when changing so many records.

Cheers
Vern

On 4/23/2022 7:35 AM, Vance Stanley via MIDRANGE-L wrote:
Maybe time to switch to DDL on that table.
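
(By which I mean something along these lines, just to show the shape of it; the names and columns below are made up. The PF becomes an SQL table and the keyed LF becomes an SQL index, e.g. from CL via RUNSQL with system naming:)

    RUNSQL     SQL('CREATE TABLE MYLIB/BIGTABLE (ORDNO CHAR(10) NOT NULL, QTY INT)') COMMIT(*NONE) NAMING(*SYS)
    RUNSQL     SQL('CREATE INDEX MYLIB/BIGTABLE_IX ON MYLIB/BIGTABLE (ORDNO)') COMMIT(*NONE) NAMING(*SYS)
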
On Friday, April 22, 2022, 02:23:00 PM CDT, x y <xy6581@xxxxxxxxx> wrote:
  IBM"s advice in the past (IIRC) was to remove the LF members if you're
changing more than ~15% of the records.  You can automate this with a
quick-and-dirty CL program: DSPDBR to an outfile, read the file, remove the
members, maintain the PF, read the DSPDBR OUTFILE, add the members back.
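
Untested sketch of that program, from memory: the outfile field names
(&WHRELI, &WHREFI) are what I recall from the QSYS/QADSPDBR model file, and
it assumes each LF is single-member with the member named after the file,
so verify both on your system before trusting it:

    PGM        PARM(&LIB &PF &ACTION)  /* &ACTION = 'RMV' or 'ADD' */
    DCL        VAR(&LIB)    TYPE(*CHAR) LEN(10)
    DCL        VAR(&PF)     TYPE(*CHAR) LEN(10)
    DCL        VAR(&ACTION) TYPE(*CHAR) LEN(3)
    /* Field layout comes from the DSPDBR model outfile */
    DCLF       FILE(QSYS/QADSPDBR)

    /* Build the outfile on the 'RMV' pass only; the 'ADD' pass  */
    /* re-reads the same QTEMP file, so run both in the same job */
    IF         COND(&ACTION *EQ 'RMV') THEN( +
                 DSPDBR FILE(&LIB/&PF) OUTPUT(*OUTFILE) +
                   OUTFILE(QTEMP/DBRLIST))
    OVRDBF     FILE(QADSPDBR) TOFILE(QTEMP/DBRLIST)

READ: RCVF     /* one record per dependent file */
    MONMSG     MSGID(CPF0864) EXEC(GOTO CMDLBL(END))  /* end of file */
    /* WHREFI is blank when the PF has no dependents at all */
    IF         COND(&WHREFI *EQ ' ') THEN(GOTO CMDLBL(READ))
    IF         COND(&ACTION *EQ 'RMV') THEN( +
                 RMVM FILE(&WHRELI/&WHREFI) MBR(&WHREFI))
    IF         COND(&ACTION *EQ 'ADD') THEN( +
                 ADDLFM FILE(&WHRELI/&WHREFI) MBR(&WHREFI))
    GOTO       CMDLBL(READ)

END: DLTOVR    FILE(QADSPDBR)
    ENDPGM

Call it with 'RMV' before the mass update and 'ADD' afterward; each ADDLFM
then rebuilds its access path once, instead of the system maintaining it on
every write.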

On Fri, Apr 22, 2022 at 11:18 AM <smith5646midrange@xxxxxxxxx> wrote:

We have a job that writes about 1.5B records to a PF.  When the job starts,
it does a CLRPFM and then starts dumping records into the PF.

We recently added an LF over the PF with no mods to the original program
(the LF will be used later in the job stream by something else), and the
job went from approx. 2 hours to an estimated 11 hours. We killed it after
the normal 2-hour window with less than 20% of the records written.

While trying to determine why, one thing we noticed is that the job's I/O
count was different with and without the LF. We found that without the LF,
the output blocking factor is a pretty decent number of records (the I/O
count is WAY less than the RRN). However, once we created the LF, the I/O
count and the RRN were almost identical, as if blocking were being ignored.
The LF is not uniquely keyed, just a normal keyed logical.
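
(For anyone who wants to poke at the same thing: the blocking we expected
on the output side is what you would force with an override along these
lines; BIGPF and LOADPGM are made-up stand-ins for our file and program:)

    OVRDBF     FILE(BIGPF) SEQONLY(*YES 32767)  /* force blocked output */
    CALL       PGM(LOADPGM)        /* the program that writes the records */
    DLTOVR     FILE(BIGPF)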

I've never noticed this before, but then I've never really paid attention
to the I/O count on output files. Is it normal for an LF to eliminate the
blocking factor on output and make it 1-for-1, or is there something we
muffed up in the creation of the LF?

I know we can change the LF to MAINT(*DLY) and things like that, but we
just want to understand what happened and why, so we know for future
reference.
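
(For the archive, the MAINT change we mean is a one-liner on either side of
the load; MYLIB/MYLF is a made-up name:)

    /* Stop immediate access-path maintenance during the load */
    CHGLF      FILE(MYLIB/MYLF) MAINT(*REBLD)
    /* ... CLRPFM + mass load of the PF goes here ... */
    /* Back to immediate maintenance; the path rebuilds once */
    CHGLF      FILE(MYLIB/MYLF) MAINT(*IMMED)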

Thanks.
