Mark, thank you for your response.
I've examined the journal receivers and I'm not seeing large numbers of
(unexpected) deleted records showing up. In addition, both programmers who
worked on the app in question were my customers or my employees; both are
now retired, and (faint praise here) they both adapted and borrowed
existing programs I'd written to create new ones. I've used commitment
control since it was announced, and our subfile maintenance programs keep
the records locked via CC.

Deleting a record so no other user can update it is a cleverly abusive use
of technology, kind of like hearing a 3-year-old kiddo innocently blurt out
something inappropriate and hilarious while the parents try not to bust a
gut laughing. Deleting to protect against update is an S/36-style
technique, and I am proud to say I went S/3, S/32, S/34, S/38.

I have another very active table (created in 2001) with 15,000,000 records
and I see the record count increasing and the number of deleted records
moving up and down; right now it's at 0 deletes. It's working as expected.

So, the evidence points to a purge process not followed up by an RGZPFM.
The app's been running for more than 20 years, the tables in question are
very active, no other active tables appear to have large numbers of deleted
records, the active record and deleted record counts are moving up and down
(as expected), and the data's value has a very short half-life (planning
same-day pickups and next-day deliveries). I haven't been able to find a
purge program, but I'm looking and assuming there is one somewhere. When
you do a monster delete, I think RGZPFM is appropriate, and I think this
delete must have covered 10 years of data. If your data purge routines
run monthly, RGZPFM probably isn't worth the time.
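
If that theory holds, the cleanup would be a one-time reorg along these
lines--just a sketch with made-up library/file names, using the SQL
QCMDEXC wrapper to run the CL:

    -- Reorg-while-active: ALWCANCEL(*YES) lets the reorg be suspended
    -- and resumed; LOCK(*SHRUPD) keeps the file usable while it runs
    -- (the file must be journaled for *SHRUPD, which these are).
    -- ORDLIB/ORDHST are placeholder names.
    CALL QSYS2.QCMDEXC(
      'RGZPFM FILE(ORDLIB/ORDHST) KEYFILE(*NONE) ALWCANCEL(*YES) LOCK(*SHRUPD)');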

Why the fuss? Primarily to manage backups--I don't want to burn tape
library space on deleted records. A secondary reason is my peace of
mind--if something's off the rails, I need to fix it.

My app has tens of millions of rows of date-sensitive data (effective and
expiration dates), and I update the expiration date to remove a row from
the default views. If you go look at a customer's pricing data, you'll see
the current data by default, with expired and future (not yet effective)
pricing available via command key.
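
For illustration, a minimal sketch of that "current rows by default" idea,
with hypothetical table and column names (PRICING, EFFDATE, EXPDATE):

    -- Default view: only rows effective today.
    CREATE VIEW CURRENT_PRICING AS
      SELECT *
        FROM PRICING
       WHERE EFFDATE <= CURRENT DATE
         AND EXPDATE >= CURRENT DATE;

    -- Expired rows (EXPDATE < CURRENT DATE) and future rows
    -- (EFFDATE > CURRENT DATE) stay in the table and remain available
    -- to the command-key displays.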



On Thu, Jan 29, 2026 at 6:41 PM Mark Waterbury via MIDRANGE-L <
midrange-l@xxxxxxxxxxxxxxxxxx> wrote:

Hello, Reeve,

Here is a scenario I can recount from an actual customer site -- I was
on-site to do some training, early this century, and one day, in the middle
of the week, many of the IT staff were in a panic because one critical file
in their application could no longer accept any new records (inserts).

The physical file was originally created before the REUSEDLT(*YES) feature
existed. I looked at the file details with DSPFD and we discovered that
this file, which had reached the maximum size of 4 billion records (a
4-byte unsigned binary integer tracks the record count in each member),
was actually 90% deleted records. :-o
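
These days you can spot that situation quickly with a catalog query--a
sketch, assuming the QSYS2.SYSPARTITIONSTAT catalog view and a placeholder
library name:

    -- Active vs. deleted record counts per file member.
    SELECT TABLE_SCHEMA, TABLE_NAME,
           NUMBER_ROWS, NUMBER_DELETED_ROWS, DATA_SIZE
      FROM QSYS2.SYSPARTITIONSTAT
     WHERE TABLE_SCHEMA = 'APPLIB'
     ORDER BY NUMBER_DELETED_ROWS DESC;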

Since no one had noticed, and the file had grown larger and larger over
time, it was now so large that an RGZPFM would take longer than a three-day
weekend. This was before the arrival of the "reorganize while active"
feature that allows you to stop a reorg and then restart it again later.

Fortunately, we were able to change the file with CHGPF to specify the
then relatively new REUSEDLT(*YES), and fairly quickly, the applications
that were writing to or updating records in that file began working once
again.

However, the file still showed this amazing number of deleted records.

It turned out that a very bad application design led to this problem.
The application would select some subset of records from the
file, load them into a subfile, and then it would DELETE those records from
the database, to prevent any other users from getting in there and updating
those records while they were "in use" and "under review" in the subfile.
Then, the user could page up and down, and change any desired records in
the subfile. Finally, when the user pressed F3=Exit, the application would
then write out ALL of the subfile records back to the database once again.

This application should have been using normal record locking and
commitment control. But, like I said, it was poorly designed, perhaps a
carry-over from an old S/36 design. And, back then, none of their
database files were even journaled. :-/ One can imagine the problems if
the system "crashed" (power loss, etc.) while a user had a few thousand
records loaded into the subfile, but now deleted from the physical file,
and had not yet pressed F3=Exit.

Anyway, my point is, your situation could be similar. That file may have
just accumulated all of those deleted records over many years. And,
without that REUSEDLT(*YES), the file will just grow bigger and bigger,
until it "hits the wall" as described above. Changing the file to
REUSEDLT(*YES) will not magically clean up all of those deleted records.
You would need to use the newer reorg-while-active features of RGZPFM to
clean out those deleted records.
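
As a concrete illustration of that two-step remedy (placeholder names; the
CL is run through the SQL QCMDEXC wrapper):

    -- 1) Let new inserts start reusing deleted-record space.
    CALL QSYS2.QCMDEXC('CHGPF FILE(APPLIB/BIGFILE) REUSEDLT(*YES)');

    -- 2) The deleted records already in the file still have to be
    --    reorganized out, e.g. with the reorg-while-active form of
    --    RGZPFM (ALWCANCEL(*YES)), which can be suspended and resumed.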

I hope this sheds some light on what may be happening and why.

All the best,

Mark S. Waterbury




On Thursday, January 29, 2026 at 08:59:55 PM EST, x y <xy6581@xxxxxxxxx>
wrote:

My thanks to those who responded.

I understand the "old" reorg requires exclusive access; I use
reorg-while-active for smaller files with a small number of deletes but
these files are usually low-volatility master files where the net deletes
are greater than the net inserts; hence, deleted record count > 0.

This is not an application issue--I know the business and I know the
application. At 2,000 orders a day and 8,800,000 deleted records, each
order would have to insert and delete 4,400 records...and for that volume,
my journal receivers would be hundreds of gigabytes. Running a receiver
audit halfway through the workday shows about 1,000 inserts (PX entries),
so deleted-record space is being reused.
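
The receiver audit is roughly this kind of query--a sketch, assuming the
QSYS2.DISPLAY_JOURNAL table function, with made-up journal and library
names:

    -- Count record-level adds and deletes over the last 12 hours.
    SELECT JOURNAL_ENTRY_TYPE, COUNT(*) AS ENTRIES
      FROM TABLE(QSYS2.DISPLAY_JOURNAL('APPLIB', 'APPJRN',
                 STARTING_TIMESTAMP => CURRENT TIMESTAMP - 12 HOURS)) AS J
     WHERE JOURNAL_CODE = 'R'
       AND JOURNAL_ENTRY_TYPE IN ('PT', 'PX', 'DL')
     GROUP BY JOURNAL_ENTRY_TYPE;
    -- PT = record added, PX = record added by RRN (a reused deleted
    -- slot), DL = record deleted.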

There is one more possibility: the previous developers purged years of old
data from the files but failed to reorg to remove the dead space. I'd be
surprised (given the speed at which a large file can reorg and knowing the
previous admins were very careful) if that were the case but it may be the
Occam's Razor answer. I will send the email.

-rf


On Thu, Jan 29, 2026 at 2:10 AM Birgitta Hauser <Hauser@xxxxxxxxxxxxxxx>
wrote:

The first thing I'd do is to reorganize these files.
BTW it is also possible to interrupt an RGZPFM ... and restart (continue)
the reorganization later.
After the tables have been reorganized, I would watch how the total number
of records progresses in relation to the deleted records.

BTW deleted rows are also bad for unkeyed reading (with native I/O) or a
Table Scan in SQL, because all rows (including the deleted ones) are read.


Mit freundlichen Grüßen / Best regards

Birgitta Hauser
Modernization – Education – Consulting on IBM i
Database and Software Architect
IBM Champion since 2020

"Shoot for the moon, even if you miss, you'll land among the stars." (Les
Brown)
"If you think education is expensive, try ignorance." (Derek Bok)
"What is worse than training your staff and losing them? Not training
them
and keeping them!"
"Train people well enough so they can leave, treat them well enough so
they
don't want to. " (Richard Branson)
"Learning is experience … everything else is only information!" (Albert
Einstein)


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of
Reeve
Sent: Thursday, 29 January 2026 08:43
To: midrange-l@xxxxxxxxxxxxxxxxxx
Subject: REUSEDLT not reusing

I'm looking at a dozen reasonably busy files with REUSEDLT(*YES) that have
a large number of deleted records. One example: ~780,000 active records,
~8,100,000 deleted records, record length 110 bytes, no extraordinary data
structures or attributes. The files in question are journaled with "after"
images, and all were created in March 2000 (almost 26 years ago--hard to
believe!).

There is no app that deletes that many records in one shot. I'm ruling out
the possibility the app fired up with 8,800,000 deleted records. I don't
remember exactly when this feature came out, but I do have a faint
recollection of having to recompile my PFs to make REUSEDLT work properly.
Or is my memory what's not working properly?

My plan: CHGPF to cycle REUSEDLT off/on, RGZPFM, and watch the number of
deletes for the next week. Next step: CHGPF with the source member.
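
One simple way to watch those counts through the week (a sketch, assuming
the QSYS2.SYSPARTITIONSTAT catalog view; library names are placeholders)
is to snapshot the statistics into a small tracking table once a day:

    CREATE TABLE MYLIB.RGZWATCH (
      TAKEN               TIMESTAMP,
      TABLE_SCHEMA        VARCHAR(128),
      TABLE_NAME          VARCHAR(128),
      NUMBER_ROWS         BIGINT,
      NUMBER_DELETED_ROWS BIGINT);

    -- Re-run daily (job scheduler or similar):
    INSERT INTO MYLIB.RGZWATCH
      SELECT CURRENT TIMESTAMP, TABLE_SCHEMA, TABLE_NAME,
             NUMBER_ROWS, NUMBER_DELETED_ROWS
        FROM QSYS2.SYSPARTITIONSTAT
       WHERE TABLE_SCHEMA = 'APPLIB';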

I'm grateful for any advice.

--rf
