It's creating a work file; it's clearly the wrong place for a trigger. I
took a consultant's advice.


On Fri, Aug 8, 2014 at 2:42 PM, CRPence <CRPbottle@xxxxxxxxx> wrote:

On 05-Aug-2014 10:16 -0500, Briggs, Trevor (TBriggs2) wrote:

<<SNIP>> if Job A issues an update that causes a long-running trigger
to fire and then Job B issues an update to the same file, then Job
B's program will wait (the default wait time for the file?) and, if
it cannot successfully update in that time, the update operation will
fail?


No. A Job-B that performs an update to a different row of the same file
that Job-A had updated [and for which the trigger program was activated in
Job-A] will be unaffected by the trigger activation in Job-A. They are
distinct[ly triggered] events that will operate concurrently, just like any
two distinct non-conflicting updates [i.e. not to the same row] of the same
file in different threads/jobs.

The other thread/job will wait for any particular row being updated in
the original thread/job, but the wait that applies is the Record Wait
(WAITRCD) time rather than the File Wait (WAITFILE) or Default Wait
(DFTWAIT) time. Trigger activations are allowed concurrently; they are
effectively the same as I/O without triggers, so two jobs operating against
different rows have no conflict with regard to triggers generally. A
conflict arises only if the trigger program was coded to do something
specific for which concurrency is a problem [or where the concurrent work
merely exhibits degraded results due to contention, when the allowed wait
time is enough to absorb the contentious work without exceeding the time
allowed for a triggered operation to complete].
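
For reference, those wait values are ordinary file attributes that can be
inspected with CL; a minimal sketch, assuming a hypothetical file
MYLIB/MYFILE:

    /* Display the file attributes; the record wait (WAITRCD) and  */
    /* file wait (WAITFILE) values appear in this output.          */
    DSPFD FILE(MYLIB/MYFILE) TYPE(*ATR)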

So the issue is that whatever the trigger program does [whatever the
programmer coded the trigger program to do] may or may not allow for any
particular level of concurrency; because it is user-coded, the onus is on
the programmer of the trigger to understand what concurrency issues their
code [which just happens to be encapsulated in a trigger program] might
have. And in the example from the OP, part of that work is a CLRPFM
[obviously of some file member other than the one for the triggered I/O],
which hardly allows for concurrency; that work must be serialized at the
member level for the data, not at the record level. The programmer of the
trigger program has the option to resolve that potential issue however they
see fit; again, the database has no issue with concurrency due solely to a
trigger program being defined and active.
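
To illustrate one way the trigger programmer might serialize that
member-level work, here is a minimal CL sketch; the names WORKLIB/WORKFILE
and WORKMBR are hypothetical stand-ins for the OP's work file, not anything
from the original post:

    PGM        PARM(&TRGBUF &TRGBUFLEN)
               DCL        VAR(&TRGBUF) TYPE(*CHAR) LEN(32767)  /* trigger buffer (unused here) */
               DCL        VAR(&TRGBUFLEN) TYPE(*CHAR) LEN(4)   /* buffer length (unused here)  */

               /* Serialize at the member level: wait up to 30 seconds for an */
               /* exclusive lock on the work member, so concurrent trigger    */
               /* activations queue rather than collide on the CLRPFM.        */
               ALCOBJ     OBJ((WORKLIB/WORKFILE *FILE *EXCL WORKMBR)) WAIT(30)
               MONMSG     MSGID(CPF1002) EXEC(GOTO CMDLBL(NOLOCK))

               CLRPFM     FILE(WORKLIB/WORKFILE) MBR(WORKMBR)

               DLCOBJ     OBJ((WORKLIB/WORKFILE *FILE *EXCL WORKMBR))
               RETURN

               /* Lock not obtained in time; handle the contention however */
               /* the application requires (retry, log, resignal, etc.)    */
    NOLOCK:    SNDPGMMSG  MSG('Work member in use; CLRPFM skipped')
               ENDPGM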


So, that is potentially even more dangerous than just causing a
slow-down in updates. My gut feeling is that you can never just "bolt
on" a trigger program to a file without evaluating the potential
effect on every program that touches the file.


Any function/code change should be reviewed and tested as such, for
whatever impacts that code change will have on other programs. In the case
of a trigger operating at the database level, any program that interfaces
with that file for the triggered I/O has _a potential_ to be impacted. But
for typical business logic that was moved from a program into the database
to be performed via a trigger, the impact is often limited mostly to those
programs that [might] perform the same or similar business logic that has
been moved into the trigger; for the most part, the changes are to avoid
duplicating/doubling the effect of any work that was moved into the
trigger. The slight delay for the triggered I/O, plus either a nearly
unnoticeable amount of time to perform the actual triggered work or a
somewhat extended delay for more complex work, is usually /noticed/ by
other code only as a slightly longer record-lock hold time, such that not
even increasing the Record Wait (WAITRCD) time is required because the
delay/wait is only slightly increased. The exception is old database files
created before the default for that attribute was increased to 60 seconds;
for those, IIRC, WAITRCD(*IMMED) might have any slowdowns registered as
record-lock timeouts.
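
Those older files can be brought in line with the newer default via CL; a
minimal sketch, assuming a hypothetical file MYLIB/OLDFILE:

    /* Change the record wait from *IMMED to 60 seconds, so a briefly  */
    /* extended lock [e.g. one lengthened by trigger processing] waits */
    /* instead of failing immediately with a record-lock timeout.      */
    CHGPF FILE(MYLIB/OLDFILE) WAITRCD(60)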

And for most triggers implementing some business logic, there is often
little concern except for the impact on batch operations [i.e. a
"slow-down"], for which deactivating or removing the trigger is often
desirable, on the assumption both that the batch work does not require the
business logic and that the batch access prevents any concurrent
[non-batch] I/O that should have the business logic applied. The effect of
the business logic that was pushed down into the database may be
undesirable for certain batched updates, either because the batch updates
have been coded explicitly to perform that same business logic already
[knowing the triggers are best deactivated in advance], or because
performing some _external actions_ [such as sending email notifications]
might be inappropriate within batched updates, in contrast with when those
updates transpire during normal operations.
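
Where deactivation around the batch run is the chosen approach, it can be
done with a little CL; a minimal sketch, with hypothetical names
APPLIB/ORDERS and BATCHUPD:

    /* Before the batch run: disable all triggers on the file. Changing */
    /* trigger state requires exclusive use of the file, which fits the */
    /* assumption that the batch window prevents concurrent access.     */
    CHGPFTRG FILE(APPLIB/ORDERS) TRG(*ALL) STATE(*DISABLED)

    CALL PGM(APPLIB/BATCHUPD)  /* batch updates meant to bypass the trigger logic */

    /* After the batch run: re-enable the triggers for normal operations */
    CHGPFTRG FILE(APPLIB/ORDERS) TRG(*ALL) STATE(*ENABLED)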

--
Regards, Chuck



