Hi,

Instead of trying to filter the job log for constraint violation errors, it
might be easier to farm the QSYS2.SYSCST, QSYS2.SYSCHKCST, and
QSYS2.SYSREFCST tables to get the constraint rules, and dynamically
generate validations to match those rules (along with issuing more
reasonably understandable errors to the user). In the case of unique and
referential (foreign key, or FK) constraints, this means two I/Os are
performed instead of one: one by your code and one by the database to
enforce the unique or FK rule.
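For reference, a starting point for farming those catalogs might look
something like the sketch below. The schema and table names are placeholders,
and the join/column names are from the DB2 for i catalog views as I recall
them, so verify against your release:

```sql
-- Hypothetical sketch: list the constraint rules for one table by
-- joining the catalog views mentioned above. CHECK_CLAUSE is only
-- populated for CHECK constraints; the SYSREFCST columns only for FKs.
SELECT c.CONSTRAINT_NAME,
       c.CONSTRAINT_TYPE,             -- CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY
       ck.CHECK_CLAUSE,
       r.UNIQUE_CONSTRAINT_SCHEMA,    -- parent-key info for FK constraints
       r.UNIQUE_CONSTRAINT_NAME,
       r.DELETE_RULE,
       r.UPDATE_RULE
  FROM QSYS2.SYSCST c
  LEFT JOIN QSYS2.SYSCHKCST ck
         ON ck.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
        AND ck.CONSTRAINT_NAME   = c.CONSTRAINT_NAME
  LEFT JOIN QSYS2.SYSREFCST r
         ON r.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
        AND r.CONSTRAINT_NAME   = c.CONSTRAINT_NAME
 WHERE c.TABLE_SCHEMA = 'MYLIB'       -- placeholder schema
   AND c.TABLE_NAME   = 'ORDERS';     -- placeholder table
```

From a result set like that, your code can generate the matching
validations (and friendlier messages) per constraint type.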

I do a lot of that kind of thing, and it works very well, but in a use case
that is likely nothing like yours. I migrate data to a database with constraints
turned off (for performance reasons), turn on constraints at the end, and
then run a single, generic SQL script that captures a set of tables
containing all rows with constraint violations across the entire database
(using dynamically generated SQL). All constraint violations are isolated
for me, so I don't have to hunt for them.
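To illustrate the "single, generic script" idea: for each referential
constraint found in the catalog, the script generates a statement along
these lines. The table, column, and target names here are invented for the
example:

```sql
-- Hypothetical generated statement for one FK constraint:
-- capture child rows whose parent key is missing.
CREATE TABLE MYLIB.CST_VIOL_1 AS (
  SELECT c.*
    FROM MYLIB.ORDER_LINES c          -- child (referencing) table
   WHERE NOT EXISTS (
           SELECT 1
             FROM MYLIB.ORDERS p      -- parent (referenced) table
            WHERE p.ORDER_ID = c.ORDER_ID)
) WITH DATA;
```

One such table per violated constraint, and every offending row is
isolated without any hunting.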

For interactive applications, the big question is whether you could
achieve acceptable performance with the added overhead of farming the
constraint rules from the system catalog, coupled with dynamically generating and
running the associated validations. I don't know the answer, since I've
never tried that in an interactive application.

On the other hand, I occasionally see verbose CHECK constraint rules that
might be hard to "pretty up" for the user, no matter how you attack the
problem (up front or via job log). Most CHECK rules that I see, though, are
fairly simple and understandable.

There is some benefit, albeit with added overhead, in knowing up front
whether the transaction you are about to submit will be accepted by the
database, versus the alternative: not knowing, tossing it at the database,
and trying to figure out why it was rejected.

Also, consider a hybrid solution:

- Don't validate up front.
- Attempt the transaction.
- If the transaction fails, farm the system catalog constraint tables,
instead of the job log, to find the offending rule. If the transaction was
rejected for constraint violation(s), you should be able to find which
rule(s) were violated this way.
- This might give you the best of both options:
  - No added, up-front, duplicate I/Os for unique and referential
  constraints on transactions without violations.
  - The code to farm the constraint catalog tables will be much cleaner,
  and likely more stable, than trying to make sense of the job log.
- For interactive applications, I think the most difficult part would be
tying variable names to column names. If your variable names match the
column names, that is easy; if not, not so easy.
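In the hybrid approach, once the database reports which constraint was
violated (e.g. via a class-23 SQLSTATE and the constraint name), a lookup
like the sketch below could fetch enough detail to build a friendlier
message. The schema and constraint names are placeholders, and the
SYSKEYCST column names should be verified against your release:

```sql
-- Hypothetical lookup after a rejected transaction: given the
-- constraint name reported by the database, fetch the constraint
-- type, table, key columns, and (for CHECKs) the rule text.
SELECT c.CONSTRAINT_TYPE,
       c.TABLE_NAME,
       k.COLUMN_NAME,                  -- key columns, for unique/PK/FK
       ck.CHECK_CLAUSE                 -- rule text, for CHECK constraints
  FROM QSYS2.SYSCST c
  LEFT JOIN QSYS2.SYSKEYCST k
         ON k.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
        AND k.CONSTRAINT_NAME   = c.CONSTRAINT_NAME
  LEFT JOIN QSYS2.SYSCHKCST ck
         ON ck.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
        AND ck.CONSTRAINT_NAME   = c.CONSTRAINT_NAME
 WHERE c.CONSTRAINT_SCHEMA = 'MYLIB'       -- placeholder
   AND c.CONSTRAINT_NAME   = 'ORDERS_FK1'  -- placeholder
 ORDER BY k.ORDINAL_POSITION;
```

That keeps the error-explanation code driven by the catalog rather than by
job-log scraping.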

Mike


date: Wed, 17 May 2017 10:02:35 -0600
from: Nathan Andelin <nandelin@xxxxxxxxx>
subject: Re: Some database discussions...


There is always the GET_DIAGNOSTICS statement.


I looked at the Knowledge Center documentation for GET_DIAGNOSTICS:

https://www.ibm.com/support/knowledgecenter/ssw_i5_54/db2/rbafzmstgetdiag.htm#getdiag

It's quite daunting, and it doesn't fit the need. In addition, it only
pertains to database updates made via SQL.
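For what it's worth, inside an SQL routine the relevant usage looks
something like the sketch below (condition-item names per the SQL
standard; verify against your release). It also illustrates the
limitation: this only works when the update itself goes through SQL.

```sql
-- Hypothetical sketch inside an SQL procedure body: after a failed
-- INSERT/UPDATE, pull the SQLSTATE, constraint name, and message
-- text from the diagnostics area.
DECLARE v_state   CHAR(5);
DECLARE v_cstname VARCHAR(128);
DECLARE v_msg     VARCHAR(512);

DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  GET DIAGNOSTICS CONDITION 1
      v_state   = RETURNED_SQLSTATE,   -- e.g. class '23' for violations
      v_cstname = CONSTRAINT_NAME,
      v_msg     = MESSAGE_TEXT;
```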

Another concern is that IBM error messages are pushed to call-stack message
queues, which may include various types of messages (*DIAG, *ESCAPE, etc.).
Applications can never count on retrieving the right message from the right
queue.


