On 29 May 2013 12:48, Peter Connell wrote:
<<SNIP>> Unfortunately, all 300 statements can take about a minute
to complete which is too slow. However, if the same transaction is
repeated then the same result can be returned in about 5 seconds.
Diagnostics from running in debug mode show that a repeated request
reuses ODPs. Unfortunately, when a new work file is created for the
next client transaction, these ODPs get deleted and the transaction
is slow again. <<SNIP>>
The /debug messages/ are a /deprecated/ feedback mechanism, although
the identification of the reused ODPs is still valid and relevant feedback.
Given that description, the obvious way to improve the performance
would seem to be to no longer DROP the "work file". For production
activity, strive to delete only the data, so the /objects/ on the
object-based OS can be reused; i.e. do not delete and re-create the
objects that hold the data, delete and create only the data. Insert the
rows relevant to the current transaction into the work file, use them,
and after the work for that transaction is complete delete the rows that
were inserted for that transaction; that delete could perhaps be effected
asynchronously, perhaps even scheduled or batched.
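A minimal sketch of that keep-the-object pattern, assuming a permanent
work table keyed by a transaction identifier; the library, table, and
column names [MYLIB, WORKFILE, TRANS_ID, etc.] are only illustrative,
not taken from the original scenario:

  -- Create the work file once, outside of the per-transaction flow
  CREATE TABLE MYLIB.WORKFILE
    ( TRANS_ID   INTEGER       NOT NULL
    , ITEM_KEY   CHAR(10)      NOT NULL
    , ITEM_DATA  VARCHAR(100)
    );

  -- Per transaction: create only data, never the object
  INSERT INTO MYLIB.WORKFILE (TRANS_ID, ITEM_KEY, ITEM_DATA)
    VALUES (1234, 'A0001', 'data for this transaction');

  -- ... the 300 statements reference MYLIB.WORKFILE,
  --     selecting WHERE TRANS_ID = 1234 ...

  -- After the transaction: delete only the data, so the TABLE object
  -- [and therefore the ODPs over it] remains available for reuse;
  -- this DELETE could be run asynchronously or batched
  DELETE FROM MYLIB.WORKFILE
    WHERE TRANS_ID = 1234;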
Note: In DB2 for i SQL, re-optimization is implicit whenever a
referenced database file /object/ was previously deleted; there is no
option to keep using the old or saved plan. There is also no feature for
maintaining a static plan, thus no syntax to ask the SQL to use such a
plan. That can be the default behavior in other databases, which may
trust that the same /name/ in a table-reference is sufficient to presume
a plan remains reusable; in those databases re-optimizations must be
explicitly requested [e.g. scheduled or batched] rather than always
implicitly effected at run-time when the object is recognized as
/different/ than when last optimized.
The "work file" could be defined as one or more values-row in a CTE
of an embedded SQL request. AFaIK such a query should be reusable. Or
a stored procedure or a UDTF may be able to be used instead, to replace
a TABLE that is being used as a /work file/ and thus being dropped and
re-created, to provide a non-embedded means to get better performance.
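A minimal sketch of the CTE idea; the table and column names are
hypothetical, and whether inline VALUES rows suit the actual 300
statements would need to be verified:

  -- The per-transaction "work file" rows are supplied as VALUES in a
  -- CTE, so no temporary TABLE object is ever created or dropped
  WITH workrows (item_key, qty) AS
    ( VALUES ('A0001', 5)
           , ('A0002', 3)
    )
  SELECT o.order_id, o.item_key, w.qty
    FROM orders AS o
    JOIN workrows AS w
      ON o.item_key = w.item_key;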
<<SNIP>> I am perplexed by the fact that we have a developer familiar
with SQL server who has demonstrated that he can easily create a
script that runs all 300 statements on SQL server in just a few
seconds with no performance problems. Why is that?
The 300 statements may be optimized for that environment. Often the
DDL that defines what exists in the other environment does not match
what was defined to the DB2, putting the DB2 at a disadvantage; e.g. the
PRIMARY KEY constraints and RI are often missing from the DB2 setup even
though they exist in the setup on the other DB.
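For example [the names here are hypothetical], supplying the same
constraints that the SQL Server setup presumably already has gives the
DB2 optimizer the same information to work with:

  ALTER TABLE MYLIB.ORDERS
    ADD CONSTRAINT ORDERS_PK PRIMARY KEY (ORDER_ID);

  ALTER TABLE MYLIB.ORDER_LINES
    ADD CONSTRAINT ORDLIN_ORDERS_FK FOREIGN KEY (ORDER_ID)
        REFERENCES MYLIB.ORDERS (ORDER_ID);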
Optimizing the process specifically for the DB2 for IBM i SQL
environment may yield the desired performance characteristics. As
already noted, that may best be accomplished by *not* deleting the /work
file/, i.e. by implementing the data for the temporary result or work
file differently than those 300 statements currently do.