|
Hello folks,

Our product has a main file (with a unique key on the PF, plus a LOT of logical files) that can grow to a pretty large record volume (25M to 200M records). We need to change the way we do a new release when it includes a database change to that file.

My question is this: what is the best way to update the database file definition and translate the data to the new definition for a large record count? CHGPF changes the file "serially," which is terribly slow. I have written a process that submits a given number of "parallel" CPYF commands, but it appears that the constraint is building the UNIQUE access path of the PF. Once I get to 5 or 6 jobs for a 3.5M-record test data set on our two-processor development box, I cease to get any runtime improvement. It takes 1:20 (an hour and 20 minutes), plus or minus 3 minutes. I have removed all logical files so that the only index being built is the UNIQUE one on the physical file.

I hope the question is clear. What do you folks think?

Thanks in advance,
Michael Polutta
Atlanta, GA
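(For illustration only, a minimal CL sketch of the kind of parallel copy described above might look like the following. The library, file, job, and job queue names are placeholders; it assumes NEWFILE has already been created with the new record format and that the job queue allows several active jobs at once.)

    PGM
       /* Hypothetical names: library MYLIB, OLDFILE (current format),   */
       /* NEWFILE (new format, already created), job queue PARJOBQ       */
       /* configured for multiple concurrent active jobs.                */

       /* First slice: relative records 1 - 1,000,000 */
       SBMJOB CMD(CPYF FROMFILE(MYLIB/OLDFILE) TOFILE(MYLIB/NEWFILE) +
                  MBROPT(*ADD) FMTOPT(*MAP *DROP) +
                  FROMRCD(1) TORCD(1000000)) +
              JOB(CPYF01) JOBQ(MYLIB/PARJOBQ)

       /* Second slice: relative records 1,000,001 - 2,000,000 */
       SBMJOB CMD(CPYF FROMFILE(MYLIB/OLDFILE) TOFILE(MYLIB/NEWFILE) +
                  MBROPT(*ADD) FMTOPT(*MAP *DROP) +
                  FROMRCD(1000001) TORCD(2000000)) +
              JOB(CPYF02) JOBQ(MYLIB/PARJOBQ)

       /* ...additional slices as needed to cover the full file... */
    ENDPGM

Each submitted job copies its own range of relative record numbers into the new-format file, so the copies run concurrently; the serial bottleneck that remains is maintaining the UNIQUE access path on the target PF as rows arrive from all the jobs.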