On 9/25/06, James H H Lampert <jamesl@xxxxxxxxxxx> wrote:

Here's the situation:

We have a file. Any arbitrary number of jobs can put
records into the file; a single dedicated job reads the
records, in arrival sequence, processes them, and deletes
them. We thus have a file that rarely has more than a few
active records, but accumulates lots and lots of deleted
ones.

Is there a way to squeeze out deleted records without
having to grab an exclusive lock on the file? Or would it
be more sensible to set it to re-use deleted records, and
modify the processing program to read by key? Or are there
other ideas?


I would do one of the following, in order of preference ...

1 - Use a data queue, although I would consider having a "companion" PF for
an audit trail if the application warrants it.  Data queues are an excellent
option for this kind of asynchronous program-to-program communication (see
the first sketch after this list).

2 - Set the file to re-use deleted records (REUSEDLT(*YES)).  You cannot
rely on arrival sequence once re-use is active, because new records can land
in any reclaimed slot, so I would add a date/time field and make it the
primary key of the physical file.  At worst, you will need to recompile the
"receiver" program, or override the file with LVLCHK(*NO).  A sketch of the
receiver side follows the list.

3 - Find some way to engineer a regular stoppage of the application so you
can run CLRPFM or RGZPFM, both of which need exclusive use of the file.
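
To make option 1 concrete, here is a minimal sketch using the DataQueue
class from the IBM Toolbox for Java (jt400).  The system name, library,
queue name, and message text are made up for the example, and in practice
the producer and consumer would just as likely be RPG or CL programs
calling the QSNDDTAQ/QRCVDTAQ APIs:

    import com.ibm.as400.access.*;

    public class DtaqSketch {
        public static void main(String[] args) throws Exception {
            AS400 sys = new AS400("MYSYSTEM");  // hypothetical system name

            // Hypothetical *DTAQ, created beforehand with:
            //   CRTDTAQ DTAQ(MYLIB/WORKQ) MAXLEN(256)
            DataQueue dq = new DataQueue(sys,
                    "/QSYS.LIB/MYLIB.LIB/WORKQ.DTAQ");

            // Producer side: any number of jobs can send entries.
            dq.write("some work request".getBytes("Cp037"));

            // Consumer side: the single dedicated job blocks until an
            // entry arrives (-1 = wait forever).  Receiving an entry
            // removes it from the queue, so nothing accumulates.
            DataQueueEntry entry = dq.read(-1);
            if (entry != null) {
                System.out.println("got: " + entry.getString());
            }

            sys.disconnectAllServices();
        }
    }

Note that the read itself consumes the entry, which is exactly the
"squeeze out" behavior you were after; if you want the audit trail, the
consumer can write each entry to the companion PF as it processes it.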
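For option 2, here is a similarly hedged sketch of the receiver side,
again via jt400 (a keyed RPG read loop would be the more typical
implementation).  MYLIB/WORKFILE and the assumption that the date/time
field is the file's only key are inventions for the example; the file
itself would be changed with CHGPF FILE(MYLIB/WORKFILE) REUSEDLT(*YES):

    import com.ibm.as400.access.*;

    public class KeyedReaderSketch {
        public static void main(String[] args) throws Exception {
            AS400 sys = new AS400("MYSYSTEM");  // hypothetical system name
            String path = "/QSYS.LIB/MYLIB.LIB/WORKFILE.FILE/%FILE%.MBR";

            KeyedFile f = new KeyedFile(sys, path);
            RecordFormat fmt = new AS400FileRecordDescription(sys, path)
                    .retrieveRecordFormat()[0];
            f.setRecordFormat(fmt);
            f.open(AS400File.READ_WRITE, 0,
                    AS400File.COMMIT_LOCK_LEVEL_NONE);

            // readNext() on a keyed file returns records in key
            // (timestamp) order, so re-used slots cannot disturb the
            // processing order the way arrival sequence could.
            Record r = f.readNext();
            while (r != null) {
                process(r);               // hypothetical processing step
                f.deleteCurrentRecord();  // slot is reclaimed by REUSEDLT
                r = f.readNext();
            }
            f.close();
            sys.disconnectAllServices();
        }

        private static void process(Record r) {
            System.out.println(r.toString());
        }
    }

A real receiver would wrap the loop in a poll-and-wait (or a data queue
trigger) rather than exiting at end of file, but the read-by-key and
delete-current pattern is the part that matters here.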

Good luck ...

