IMO, it depends on what works for you. We have a set of conversion files
for an insurance application that is being migrated off the iSeries.
Many of them end up containing millions of records by the end of a
conversion run.
There is significant system overhead involved in performing file
extents, so after the most recent conversion I reviewed the record count in
each file, ran CHGPF FILE(XXXX) ALLOCATE(*YES) against each one, and set the
initial allocation [ SIZE() ] to the current number of records plus
roughly 10%.
Example:
The address file had 450,000 records, so I changed it like this:
CHGPF FILE(ADDRESS) ALLOCATE(*YES) SIZE(500000 10000 1000)
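If you wanted to script that rather than eyeball each file, something along
these lines should get you close. This is only a rough, uncompiled sketch:
the program and its &FILE parameter are made up for illustration, and I
believe (but haven't verified on every release) that CHGPF will accept a CL
variable in SIZE().

             PGM        PARM(&FILE)
             DCL        VAR(&FILE)    TYPE(*CHAR) LEN(10)
             DCL        VAR(&NBRRCD)  TYPE(*DEC)  LEN(10 0)
             DCL        VAR(&NEWSIZE) TYPE(*DEC)  LEN(10 0)

             /* Current record count for the first member            */
             RTVMBRD    FILE(&FILE) NBRCURRCD(&NBRRCD)

             /* Initial allocation = current records + ~10%          */
             CHGVAR     VAR(&NEWSIZE) VALUE(&NBRRCD + (&NBRRCD / 10))

             /* Preallocate; the increment values are carried over   */
             /* from the ADDRESS example above                       */
             CHGPF      FILE(&FILE) ALLOCATE(*YES) +
                          SIZE(&NEWSIZE 10000 1000)
             ENDPGM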
These files all get either reorg'ed or cleared (or both) before each
conversion run. [This activates the space allocation...]
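For what it's worth, that pre-run step is nothing fancy; against the example
file it's just one of these (clear if you don't need the data, reorg if you
do). As I understand it, this is what picks up the new allocation:

             CLRPFM     FILE(ADDRESS)   /* empty the member          */
             RGZPFM     FILE(ADDRESS)   /* or reorganize it in place */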
After making these changes, one job that previously took 9 hours to run
finished in about 7 hours, and the file allocation was the only real
variable that changed.
- sjl
Rob wrote:
Interesting on the allocated size. Why would one use that - to ensure
that you had the space available when you ran the process that goes to
maximum size? Or, you just knew you were going to grow to that and you
wanted to avoid extents? I guess IBM gives us the tools to do what we
want and if we misuse them then that's our choice. Preallocating space
for maximum size on all tables when they're never ALL going to grow to
maximum size is probably a real disk chomper.