There is absolutely no downside to reusing deleted records; the performance cost is very close to zero. Every application I have written for the past 13 years has had all of its files set to reuse deleted records, and it certainly eliminates the downtime associated with file reorganizations.

If you must guarantee that records are processed in write sequence, then you must add a key, perhaps a timestamp, because the relative record number will no longer indicate the order in which records were written to the file.

If you want to go to the effort of using a data queue, that could certainly also be a way to go. I have written many data queue routines for high-performance requirements.

Don Tully
Tully Consulting LLC

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of James H H Lampert
Sent: Monday, September 25, 2006 5:24 PM
To: midrange-l@xxxxxxxxxxxx
Subject: File that has records constantly being added and deleted

Here's the situation: We have a file. Any arbitrary number of jobs can put records into the file; a single dedicated job reads the records in arrival sequence, processes them, and deletes them. We thus have a file that rarely has more than a few active records, but accumulates lots and lots of deleted ones.

Is there a way to squeeze out deleted records without having to grab an exclusive lock on the file? Or would it be more sensible to set it to reuse deleted records and modify the processing program to read by key? Or are there other ideas?

--
JHHL
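[Editor's note: a minimal sketch of the two approaches discussed above. The file, library, and data queue names (MYLIB, MYFILE, ORDERQ) and the 100-byte entry size are placeholders for illustration, not names from the original thread.]

Switching an existing physical file to reuse deleted record space is a one-time attribute change:

  CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)

For the data queue alternative, the writers would send entries to a FIFO data queue and the single processing job would receive them, removing the staging file (and its deleted records) entirely. A hedged CL sketch using the QSNDDTAQ and QRCVDTAQ APIs:

  CRTDTAQ DTAQ(MYLIB/ORDERQ) MAXLEN(100)

  /* Producer sketch: any number of jobs can send entries */
  PGM
    DCL VAR(&DTAQ) TYPE(*CHAR) LEN(10) VALUE('ORDERQ')
    DCL VAR(&LIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
    DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0) VALUE(100)
    DCL VAR(&DATA) TYPE(*CHAR) LEN(100)
    CHGVAR VAR(&DATA) VALUE('payload to be processed')
    /* Send one entry to the data queue */
    CALL PGM(QSNDDTAQ) PARM(&DTAQ &LIB &LEN &DATA)
  ENDPGM

  /* Consumer sketch: the dedicated processing job */
  PGM
    DCL VAR(&DTAQ) TYPE(*CHAR) LEN(10) VALUE('ORDERQ')
    DCL VAR(&LIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
    DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0)
    DCL VAR(&DATA) TYPE(*CHAR) LEN(100)
    DCL VAR(&WAIT) TYPE(*DEC)  LEN(5 0) VALUE(-1)
    /* A negative wait time blocks until an entry arrives; */
    /* on return &DATA holds the payload and &LEN its length */
    CALL PGM(QRCVDTAQ) PARM(&DTAQ &LIB &LEN &DATA &WAIT)
  ENDPGM

Entries are delivered in arrival order by default, so this preserves the write sequence without needing a timestamp key; the trade-off is that data queue entries are not journaled or recoverable the way database records are.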