Thanks Vern,
This is exactly what I was thinking but didn't have the time or knowledge to demonstrate!
- DrF

On 4/23/2022 5:47 PM, Vern Hamberg via MIDRANGE-L wrote:
Maybe I can offer some thoughts. Objects created using DDS source and using SQL DDL are all the same kinds of objects, at least in one way of looking at things. SQL tables and physical files are both the same low-level MI object types. Here is something from running DMPOBJ over a PF -
OBJ- QAUOOPT                         CONTEXT- VHAMBERG
OBJTYPE- *FILE
OBJECT TYPE- SPACE                                           *FILE
NAME-        QAUOOPT                         TYPE-          19 SUBTYPE-          01

and the same information for an SQL table -
OBJ- NEWTABLE                        CONTEXT- VHAMBERG
OBJTYPE- *FILE
OBJECT TYPE- SPACE                                           *FILE
NAME-        NEWTABLE                        TYPE-          19 SUBTYPE-          01
And if you go further in the SPLF, there are *MEM (member) and *FMT (format) and other kinds of low-level objects. So a table is still a physical file with the difference in some attributes. Use DSPFD on either, and you'll see the _SQL type_ attribute in a table and not in a PF.
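If you'd rather check from SQL than DMPOBJ or DSPFD, the catalog shows the same distinction. A minimal sketch - the schema and names below are just the ones from the dumps above, adjust to taste:

```sql
-- TABLE_TYPE in the QSYS2 catalog is 'P' for a DDS-created
-- physical file and 'T' for an SQL-created table.
SELECT TABLE_NAME, TABLE_TYPE
  FROM QSYS2.SYSTABLES
 WHERE TABLE_SCHEMA = 'VHAMBERG'
   AND TABLE_NAME IN ('QAUOOPT', 'NEWTABLE');
```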

Logical files are also spaces, type 19 subtype 01, so there are other attributes that distinguish them from physical files, etc.

That is probably FTMI - far-too-much-information. You can see lists of these codes in Knowledge Center, search for "internal object types" or "external object types".

As to replacing LFs with views - a view is a kind of LF - all it contains is the SELECT statement that is run to get the data. This is something like an LF that does not have a key.
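For example, a view really is just a stored SELECT - it owns no data and no key (all names below are made up for illustration):

```sql
-- The view stores only this SELECT; rows come from the
-- underlying table at query time.
CREATE VIEW MYLIB.ACTIVE_CUST AS
  SELECT CUSTNO, CUSTNAME, BALANCE
    FROM MYLIB.CUSTMAST
   WHERE STATUS = 'A';
```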

An LF with a key cannot be replaced by a view, since views don't have key fields. An index, though, does have a key - but until fairly recently, an index could not carry a subset of the fields in the PF. Now IBM lets us specify a column list in the CREATE INDEX statement, so an index is more like the traditional keyed LF with a subset of the fields.
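Roughly like this - the names are invented, and the RCDFMT column list needs a release that supports it, so check the CREATE INDEX syntax in the SQL reference for your OS level:

```sql
-- A keyed index whose record format carries only a subset of
-- the table's columns, much like a traditional keyed LF.
CREATE INDEX MYLIB.CUSTMAST_IX
  ON MYLIB.CUSTMAST (CUSTNO, BALANCE DESC)
  RCDFMT CUSTIXR (CUSTNO, CUSTNAME, BALANCE);
```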

Anyhow, this is now DTMI - definitely TMI!

Regards
Vern


On 4/23/2022 11:23 AM, Vance Stanley via MIDRANGE-L wrote:
  I might add that my understanding of the differences between DDL and DDS is limited. However, one of the benefits might be to identify logicals that are used less frequently, and indexes that are not frequently utilized. Those might be candidates for views. That would be one step in reducing overhead on a large physical file.
     On Saturday, April 23, 2022, 10:49:31 AM CDT, Vance Stanley via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:
   We experienced better performance updating a GL history file which had well over a billion records. That was with an older OS; I can't remember which version.
     On Saturday, April 23, 2022, 07:41:33 AM CDT, Vern Hamberg via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:
  Hi Vance

Now _that_ is a statement that begs for more information! Can you give
more detail on the benefit of going with DDL? If you are thinking of
what happens when updating a PF with LFs, I believe that indexes over
the tables would have the same issue of being updated when changing so
many records.

Cheers
Vern

On 4/23/2022 7:35 AM, Vance Stanley via MIDRANGE-L wrote:
   Maybe time to switch to ddl on that table.
       On Friday, April 22, 2022, 02:23:00 PM CDT, x y <xy6581@xxxxxxxxx> wrote:
   IBM's advice in the past (IIRC) was to remove the LF members if you're
changing more than ~15% of the records.  You can automate this with a
quick-and-dirty CL program: DSPDBR to an outfile, read the file, remove the
members, maintain the PF, read the DSPDBR OUTFILE, add the members back.
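Sketched in CL, roughly - the outfile field names here are my recollection of the QADSPDBR model file's format, so verify them against your release before relying on this:

```
PGM        PARM(&LIB &PF)
  DCL      VAR(&LIB) TYPE(*CHAR) LEN(10)
  DCL      VAR(&PF)  TYPE(*CHAR) LEN(10)
  /* DBRLIST must exist at compile time - create it once from */
  /* the QADSPDBR model file in QSYS.                          */
  DCLF     FILE(QTEMP/DBRLIST)

  DSPDBR   FILE(&LIB/&PF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/DBRLIST)
LOOP:
  RCVF
  MONMSG   MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))   /* end of file */
  IF       COND(&WHREFI *NE ' ') THEN(DO)
    RMVM   FILE(&WHRELI/&WHREFI) MBR(*ALL)  /* drop dependent LF members */
  ENDDO
  GOTO     CMDLBL(LOOP)
DONE:
  /* ... maintain the PF here, then re-read QTEMP/DBRLIST and */
  /* ADDLFM the members back ...                               */
ENDPGM
```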

On Fri, Apr 22, 2022 at 11:18 AM <smith5646midrange@xxxxxxxxx> wrote:

We have a job that writes about 1.5B records to a PF.  When the job starts,
it does a CLRPFM and then starts dumping records into the PF.



We recently added an LF over the PF with no mods to the original program
(LF
will be used later in the stream by something else) and the job went from
approx. 2 hrs to an estimated 11 hrs.  We killed it after the normal 2 hour
window with less than 20% of the records written.



While trying to determine why, one thing that we noticed is the job I/O
count was different with and without the LF.  We found that without the LF,
the output blocking factor is a pretty decent number of records (I/O count
is WAY less than RRN).  However, when we created the LF, the I/O count and
the RRN are almost identical (like blocking is ignored).  The LF is not
unique keyed, just a normal keyed logical.



I've never noticed this before but I've never really paid attention to the
I/O count on output files.  Is this a normal thing for an LF to eliminate
the blocking factor of the output and make it 1 for 1 or is there something
that we muffed up in the create of the LF?



I know we can change the LF to *DLY and stuff like that but we just want to
understand what happened and why so we know for future reference.
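For reference, the *DLY approach is along these lines - library and file names are placeholders, and whether deferred maintenance actually helps for a load this size is worth testing:

```
/* Defer access-path maintenance on the LF during the mass load */
CHGLF FILE(MYLIB/MYLF) MAINT(*DLY)

/* ... run the load job ... */

/* Restore immediate maintenance; the access path catches up here */
CHGLF FILE(MYLIB/MYLF) MAINT(*IMMED)
```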



Thanks.

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives
at https://archive.midrange.com/midrange-l.

Please contact support@xxxxxxxxxxxxxxxxxxxx for any subscription related
questions.
