We have a file that contains an enormous number of records (just over
1.9 billion) and we want to clear it down a bit to reclaim some
much-needed DASD. The trouble is that, because of the file size, our
standard purge routine takes longer than the 17-hour window we have
available for running housekeeping tasks.

At the moment we think we have two choices:

1. Use SQL DELETE to remove all the records we want to purge, and then
run an RGZPFM over the file (roughly sketched after this list).

2. Restore a copy of the file from tape to a second server, use CPYF to
create a new version of the physical containing only those records we
want to retain, and then restore that over the original (again, see the
sketch below). We think this will give us the effect of an RGZPFM as a
by-product of the copy/restore.

Does anybody have any experience of, or advice on, performing this sort
of "purge", or ideas on how to improve its performance?

We're running V5R4 and there are eight logicals over the file.

Thanks

Jonathan
