Pascal,

One solution may be to use DB journaling. We did something similar. The programs that did the initial update did not have to know anything about the monitoring process. The monitoring process woke up every x minutes/seconds, retrieved from a data area the last journal record it had processed, added one, and then dumped out the journal data for the file(s). When it finished, it wrote the last journal record number back to the data area. (A sketch of this loop appears after the quoted message below.)

Benefits:
1- Keeps you from having to build a queue to wake up the monitoring job.
2- If there are problems, the monitoring program can be restarted with an adjustment to the starting journal sequence number.
3- Uses the built-in "trigger" function, journaling, so all you have to write is the code that extracts the data. No worry about a programmer accidentally messing up a trigger program and causing an infinite loop.

Problems:
1- Will not always process records instantly.
2- If you need multiple monitoring jobs running, I have no suggestions for making more than one job work over the journal records uniquely.
3- The longer the sleep time, the longer it takes to extract the journal data.

Note: there are APIs for working with journals, if you prefer that. See http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/apis/apifinder.htm and search on the "Journal and Commit" category (replace v5r2 with your system level).

Thank you,
Matt Tyler
WinCo Foods, LLC
mattt@xxxxxxxxxxxxxx

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of pascal.jacquemain@xxxxxxxxxxxxxx
Sent: Monday, May 23, 2005 10:57 AM
To: rpg400-l@xxxxxxxxxxxx
Subject: Best options for polling a file

Hello,

Imagine a file where records are created by some jobs and processed by other jobs. The processing jobs do not access all records, only those with the right key field. There are several ways to get data from the file and to "wake up" the processing jobs:

- Use a data queue or message queue to tell the processing job to "wake up". While not a bad idea, it means every program that creates records in the file must also call a program to send the data queue entry or the message. This may be done via a trigger, but that adds processing time every time a record is created. (See the data queue sketch below.)

- Use a delay-wait method (in this case, waiting 0.1 second each time) and check with SETLL or CHAIN that at least one record is awaiting processing. This is a "lighter" option than the above for the jobs that create data, but it can lead to unnecessary CPU or I/O use (although on our system, the CPU and I/O of jobs that have nothing to process are very low, if not negligible). (See the delay-wait sketch below.)

Can you comment, or are there other, more desirable methods? (We do not want to use data queues, MQ queues, or message queues to store the temporary data.)

Thanks
Pascal
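
A minimal RPG IV sketch of the journaling monitor loop Matt describes, assuming a journal MYLIB/MYJRN over the file MYLIB/ORDERS, a *DEC (10 0) data area LASTSEQ holding the last sequence number processed, and a DSPJRN outfile QTEMP/JRNOUT. All of these names are illustrative, not taken from the posts above:

     d LastSeq         s             10p 0 dtaara(LASTSEQ)
     d ExecCmd         pr                  extpgm('QCMDEXC')
     d  cmd                        512a    const
     d  cmdLen                      15p 5  const
     d cmd             s            512a
      /free
        dow '1';                           // poll forever
          in *lock LastSeq;                // lock the checkpoint
          cmd = 'DSPJRN JRN(MYLIB/MYJRN) FILE((MYLIB/ORDERS)) '
              + 'RCVRNG(*CURCHAIN) FROMENT(' + %char(LastSeq + 1)
              + ') TOENT(*LAST) ENTTYP(*RCD) OUTPUT(*OUTFILE) '
              + 'OUTFILE(QTEMP/JRNOUT) OUTFILFMT(*TYPE1)';
          monitor;
            ExecCmd(cmd: %len(cmd));
            // ... read QTEMP/JRNOUT here, process each entry, and
            //     set LastSeq to the highest JOSEQN value seen ...
          on-error;
            // DSPJRN signals an escape when no entries qualify;
            // leave LastSeq unchanged and try again next pass
          endmon;
          out LastSeq;                     // persist and unlock
          ExecCmd('DLYJOB DLY(60)': 15);   // sleep x seconds
        enddo;
      /end-free

The IN *LOCK / OUT pair keeps the checkpoint consistent, which is what lets you restart the monitor at an adjusted sequence number after a problem, per benefit 2 above.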
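For the data queue wake-up option Pascal mentions, a minimal sketch of the consumer side, assuming a data queue WAKEQ in MYLIB created with CRTDTAQ MAXLEN(32) (both names illustrative). The producers would call QSNDDTAQ after creating a record; the wait parameter on QRCVDTAQ gives the "wake up" behavior without polling:

     d RcvDtaQ         pr                  extpgm('QRCVDTAQ')
     d  dtaqName                     10a   const
     d  dtaqLib                      10a   const
     d  dataLen                       5p 0
     d  data                         32a
     d  waitSecs                      5p 0 const
     d dataLen         s              5p 0
     d data            s             32a
      /free
        dow '1';
          // wait up to 60 seconds for an entry; -1 would wait forever
          RcvDtaQ('WAKEQ': 'MYLIB': dataLen: data: 60);
          if dataLen > 0;                  // an entry arrived
            // ... read and process the new records here ...
          endif;
        enddo;
      /end-free

Note this uses the data queue only as a wake-up signal, not to store the temporary data, so it stays within the constraint at the end of Pascal's message.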
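And a minimal sketch of the delay-wait option, assuming a keyed file PENDING with a character key field PKEY (illustrative names) and using usleep() from the C runtime (bind with BNDDIR('QC2LE')) to get the sub-second wait Pascal describes:

     fPENDING   if   e           k disk
     d usleep          pr            10i 0 extproc('usleep')
     d  microsecs                    10u 0 value
     d myKey           s                   like(PKEY)
      /free
        myKey = 'A';                       // this job's key value
        dow '1';
          setll myKey PENDING;             // position to our key
          if %equal(PENDING);              // at least one record waits
            // ... READE the matching records and process them here ...
          else;
            usleep(100000);                // 0.1 second, as in the post
          endif;
        enddo;
      /end-free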