Comments inserted after inline snippets.

One case I've seen that causes the problem is when the original file
has a unique index. The old UNDEL2 copied the records to a file in
QTEMP using CPYF CRTFILE(*YES), which creates an indexed file. In
that case, CPYF sets fields in the deleted records in the QTEMP file
to default values (blanks/zeros). Don't ask me why :-)

The difference is what is described as the 'fast copy' versus 'row copy' code paths in CPYF processing. I checked the old UNDEL2 source, which does not specify any ERRLVL() parameter on the CPYF. Like most coded command requests, the request is neither library qualified nor are all the necessary parameters specified to ensure its assumptions are met, so it can be broken by changed command defaults. The assumption here has to be that fast copy is required, because if row copy is used, the data from the deleted rows becomes defaults. To make the fast-copy code path even possible, ERRLVL(0) is required; I believe FMTOPT(*NONE) is also mandatory. Additionally, for fast copy the FROMFILE() must not be in use; i.e. before the copy, issue:
ALCOBJ ((fromlib/fromfile *file *excl frommbr))
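
Spelled out, those prerequisites might be coded as in the sketch below. The library, file, and member names (FROMLIB/FROMFILE, FROMMBR, QTEMP/WORKFILE) are placeholders, and COMPRESS(*NO) is my assumption for carrying the deleted records along in the copy; this assumes the QTEMP target was created beforehand:

```cl
/* Fast copy requires the from-file not be in use; take an        */
/* exclusive lock on the member before copying.                   */
ALCOBJ     OBJ((FROMLIB/FROMFILE *FILE *EXCL FROMMBR))

/* ERRLVL(0) and FMTOPT(*NONE) keep the fast-copy code path       */
/* possible; COMPRESS(*NO) copies the deleted records as well.    */
CPYF       FROMFILE(FROMLIB/FROMFILE) TOFILE(QTEMP/WORKFILE) +
             FROMMBR(FROMMBR) TOMBR(*FIRST) MBROPT(*REPLACE) +
             FMTOPT(*NONE) COMPRESS(*NO) ERRLVL(0)

/* Release the lock once the copy completes.                      */
DLCOBJ     OBJ((FROMLIB/FROMFILE *FILE *EXCL FROMMBR))
```

Whether fast copy was actually chosen still has to be verified separately, as noted below; none of these parameters guarantees it, they only avoid ruling it out.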

The row copy is the same as all other database I/O, regardless of its being done inside the CPYF utility. That is, a deleted record does not appear to exist, except on a direct read by relative record number (RRN); and when such a read of a deleted record is done, the buffer contains just the defaults.
The fast copy, on the other hand, copies a /chunk/ of dataspace data into the other dataspace. Just as save does not access the row data, neither does the fast copy. In both cases, whatever data was in the original file is copied to the target location; in CPYF, of course, only the selected rows.
Offhand, I know of just one unattractive way to verify that fast copy was used: I have used a breakpoint program on instruction '/1' of QDBFFCPY to set an indicator that can be tested after the CPYF completes.

> The new version uses CRTPF to create the file in QTEMP first, without
> an index, then CPYF's into it, which somehow induces the system not
> to set the fields to defaults.

A unique index causes the copy request to drop into row-copy mode so that the index on the to-file can diagnose any duplicates, irrespective of any apparent or perceived logic suggesting that should be unnecessary. Doing so ensures the copied data is valid in the new file according to the new file's unique index.
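
The workaround described in the quoted text above might be sketched as follows. The DDS source names are hypothetical; the point is only that the QTEMP file is created without the unique key before the copy, so the to-file carries no index that would force row-copy mode:

```cl
/* Create the target in QTEMP from DDS that has the same record   */
/* format but no K (key) specs, so CPYF is not forced into        */
/* row-copy mode to police a unique index.                        */
CRTPF      FILE(QTEMP/WORKFILE) SRCFILE(MYLIB/QDDSSRC) +
             SRCMBR(WORKDDS)

/* Copy into the pre-created keyless file; CRTFILE(*NO) is the    */
/* default, so CPYF does not create an indexed copy itself.       */
CPYF       FROMFILE(FROMLIB/FROMFILE) TOFILE(QTEMP/WORKFILE) +
             MBROPT(*REPLACE) FMTOPT(*NONE) COMPRESS(*NO) +
             ERRLVL(0)
```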

> Chuck Pence inferred that the problem might be because date, time, &
> timestamp fields are stored in the file in a different format from
> how they appear in a program's buffer (or in DSPPFM). UNDEL2 already
> handles that -- you can see the code in the source for UNDELM2,
> subroutine "reformat".

He-heh. Not so much my inference as a redirection, back to the "Speculation" that alluded to the missing data occurring "whenever the file includes a date, time, or timestamp field but I'm not sure". It seemed important to redirect to anything other than the implication that "it appears the OS fails to copy the original raw data to the SAVF in some cases". Suggesting the save might not be saving the exact image of the data on disk would also imply that a restored copy was not the same data as the original. I was quick to redirect anywhere but there, offering only that an issue with the date/time/timestamp handling might be a more likely origin for the difficulties. :-) Of course, having learned of the CPYF usage, it is now clear that the code path CPYF takes determines whether the undelete request functions as expected.

Regards, Chuck



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
