On 8/19/2015 1:03 PM, Charles Wilt wrote:
On Wed, Aug 19, 2015 at 11:28 AM, <rob@xxxxxxxxx> wrote:
Isn't there also some limit on how big a commitment chunk (?boundary?) can
be?
I'm not the Chuck you're expecting... :)
But my understanding is that while there's no hard limit other than
memory/disk, there is a performance hit the larger you go.
From
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzakj/rzakjcommitbatch.htm?lang=en
Commitment control for batch applications
A batch job can lock a maximum of 500 000 000 records. You can reduce
this limit by using a Query Options File (QAQQINI). Use the QRYOPTLIB
parameter of the Change Query Attributes (CHGQRYA) command to specify a
Query Options File for a job to use. Use the
COMMITMENT_CONTROL_LOCK_LEVEL value in the Query Options File as the
lock limit for the job. The lock limit value is cached internally for
each commitment definition the first time a journaled resource is placed
under commitment control. If the lock limit is changed after that point,
the cached value must be refreshed for it to become effective for that
commitment definition. Any call to the Retrieve Commit Information
(QTNRCMTI) API refreshes the cached value in the calling job. The new
value will not apply to transactions that started before the cache was
refreshed.
Any commit cycle that exceeds 2000 locks probably slows down system
performance noticeably. Otherwise, the same locking considerations exist
as for interactive applications, but the length of time records are
locked in a batch application might be less important than in
interactive applications.
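For anyone wanting to try it, here's a rough sketch of the steps the doc describes. The library name MYLIB and the limit value 1000000 are made-up examples; QQPARM/QQVAL are the standard columns in a QAQQINI file:

```
/* One-time: create your own copy of the QAQQINI template             */
CRTDUPOBJ OBJ(QAQQINI) FROMLIB(QSYS) OBJTYPE(*FILE) TOLIB(MYLIB)

/* Set the lock limit row (via SQL or UPDDTA):                        */
/*   UPDATE MYLIB/QAQQINI                                             */
/*     SET QQVAL = '1000000'                                          */
/*     WHERE QQPARM = 'COMMITMENT_CONTROL_LOCK_LEVEL'                 */

/* Point the job at that options file                                 */
CHGQRYA QRYOPTLIB(MYLIB)
```

Per the quoted doc, if a commitment definition has already cached the old limit, a call to the QTNRCMTI API in that job is what refreshes it.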