Do the other jobs keep the file open all the time?  If not, your
processing program could, at some predetermined (quiet) time or at
intervals, attempt a reorg.  I would think the wait time would be
minimal and (hopefully) the other jobs wouldn't be impacted.
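
Something along these lines ought to do it from your processing program
or a small scheduled CL job -- just a sketch, and the library/file names
are placeholders for your own:

    PGM
    /* Try to get the file exclusively, but don't wait long; if     */
    /* anyone still has it open, just skip the reorg this time.     */
    ALCOBJ     OBJ((MYLIB/MYFILE *FILE *EXCL)) WAIT(10)
    MONMSG     MSGID(CPF1002) EXEC(GOTO CMDLBL(SKIP))

    /* Got the lock - squeeze out the deleted records */
    RGZPFM     FILE(MYLIB/MYFILE)

    DLCOBJ     OBJ((MYLIB/MYFILE *FILE *EXCL))
    SKIP:      ENDPGM

If the allocate fails because somebody has the file open, nothing
happens and you just try again at the next quiet interval.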

In a perfect world, I'd vote for the data queue too.  But on the
assumption that it's too difficult to re-engineer, I'd try reusing
deleted records.  Your processing program will likely have to read from
the beginning of the file until eof() each time.  Not sure I understand
why you think it needs to be keyed.
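
Turning on reuse is a one-time change, e.g. (again, substitute your own
library and file; you may need the file quiet briefly to make the
change):

    CHGPF      FILE(MYLIB/MYFILE) REUSEDLT(*YES)

Once that's on, new records can go into the slots left by deleted ones,
so the member should stop growing the way it does now.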



jamesl@xxxxxxxxxxx 09/25/2006 3:24:20 PM >>>
Here's the situation:

We have a file. Any arbitrary number of jobs can put 
records into the file; a single dedicated job reads the 
records, in arrival sequence, processes them, and deletes 
them. We thus have a file that rarely has more than a few 
active records, but accumulates lots and lots of deleted 
ones.

Is there a way to squeeze out deleted records without 
having to grab an exclusive lock on the file? Or would it 
be more sensible to set it to re-use deleted records, and 
modify the processing program to read by key? Or are there 
other ideas?

--
JHHL
