On 10 Apr 2013 10:22, Anderson, Kurt wrote:
I think I found my answer in
http://publib.boulder.ibm.com/infocenter/iseries/v5r4/topic/rzajq/rzajq.pdf
Under "Control database manager blocking"
"The SQL run-time automatically blocks rows with the database
manager in the following cases:
INSERT
If an INSERT statement contains a select-statement, inserted rows
are blocked and not actually inserted into the target table until
the block is full. The SQL run-time automatically does blocking
for blocked inserts.
Note: If an INSERT with a VALUES clause is specified, the SQL
run-time might not actually close the internal cursor that is used to
perform the inserts until the program ends. If the same INSERT ..."
So it sounds like in my case, where I'd be doing inserts using
VALUES, blocking wouldn't really be occurring. I'm not sure
how to interpret the bit about the internal cursor.
Basically a cursor is an ODP. The ODP acts as the buffering location
for the rows. If an INSERT INTO ... VALUES is used, and either the
statement was created by the user with parameter markers or host
variable(s), or the SQL runtime can and does perform what is IIRC
called /parameter marker conversion/ on the statement to make it
generic [i.e. the literal value specifications are treated as
parameter markers, so repeated requests use the /same/ INSERT
statement with VALUES()], then a re-used ODP can implement the
repeated insert requests, and thus the ODP will buffer the inserted rows.
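   For illustration only, a minimal sketch [the table and column names
are made up] of the two shapes that allow the ODP to be re-used; the
first uses host variables in embedded SQL, the second uses literals
that the runtime may convert to parameter markers on its own:

   exec sql INSERT INTO mylib/mytbl (keyfld, datafld)
              VALUES(:hostKey, :hostData);

   INSERT INTO mylib/mytbl (keyfld, datafld) VALUES('A0001', 'first')
   INSERT INTO mylib/mytbl (keyfld, datafld) VALUES('A0002', 'second')

   In the first case the same statement is executed repeatedly with
different host variable values; in the second, parameter marker
conversion can treat the literals as markers, so both requests can run
under the same generic statement and thus the same ODP.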
However it does say we can control blocking by using OVRDBF and
SEQONLY - which we actually currently do. So if it obeys the override
in terms of blocking the write, then I should be set.
   The SEQONLY requests that the ODP carve out buffering room for the
specified number of rows [more recently, there is also the option to
specify a buffer size rather than a number of rows]. If the open
prohibits buffering, e.g. due to a unique key that must be maintained
and enforced, the database and data management negotiate that only one
row will be allowed per insert.
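   As a sketch only [the file name, program name, and count are made
up], the override named in the quoted text would look something like
this, issued before the program that performs the inserts is called:

   OVRDBF FILE(MYTBL) SEQONLY(*YES 1000)
   CALL PGM(MYLIB/INSPGM)
   DLTOVR FILE(MYTBL)

   The SEQONLY(*YES 1000) asks that the ODP buffer up to 1000 rows per
write request; per the above however, a unique access path over the
file will still force the effective blocking down to one row.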
Ultimately I could run a test of a huge number of records, so I
figure I'll get that on my to-do list.
   The test that was noted to show the same results irrespective of
buffering requests, combined with the later comment that the file has a
PRIMARY KEY constraint [which is a unique access path], explains why
requesting to buffer provided no improvement. The comparison between no
buffering and buffering would need to be made using a version of the
TABLE without the constraint, to see what the benefit could be if there
were no unique access path. However, if such a constraint on the data
is required, whether implemented as a unique keyed LF, a unique keyed
PF, a UNIQUE constraint, or a PRIMARY KEY constraint, the insert will
be forced to one row per insert. With a unique keyed LF there is the
option to RMVM, but IIRC CHGLF MAINT(*REBLD) is insufficient, because
then the access path is not maintained yet must still be enforced...
Hmmm, upon more thought, I am confident a unique keyed AccPth can not be
changed explicitly to have MAINT(*REBLD). The only valid state, I
believe, is invalid but enforced, and an unenforced state should only be
attained by RMVM, DLTF, or when requested by database internals to
implement something like an ALTER [and if an unenforced unique accpth
is ever left in that state... Yikes, bad things often happen, because an
open does not look for, and is uninformed of, the condition].
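   If that test does get run, a sketch of the comparison [the names are
made up, and this assumes the row layout is otherwise identical] would
be to time the same insert workload against two copies of the table,
one without and one with the keyed constraint:

   CREATE TABLE MYLIB/TSTNOKEY (KEYFLD CHAR(5) NOT NULL,
                                DATAFLD CHAR(20))
   CREATE TABLE MYLIB/TSTPKEY  (KEYFLD CHAR(5) NOT NULL,
                                DATAFLD CHAR(20),
                                PRIMARY KEY (KEYFLD))

   OVRDBF FILE(TSTNOKEY) SEQONLY(*YES 1000)
   OVRDBF FILE(TSTPKEY)  SEQONLY(*YES 1000)

   Only the copy without the unique access path should show any benefit
from the buffering request.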