Chris,

The main process, which runs all day, is initiated by an INSERT trigger over a
table. The trigger calls a CL that pushes the contents of the trigger record
(Timestamp, Key, and process type code) to a data queue, and the CL adds a
token to identify the environment that originated the transaction (we have
three: Test, Quality, and Production). A daemon CL runs for each environment
and monitors the data queue for work. When something arrives, the queue is
read and the data is passed to a dispatching program that assembles an SBMJOB
command and uses QCMDEXC to submit the job to the subsystem job queue. Up to
this point there is no worry about mixing apples and oranges, but once the
jobs are in the job queue things become interesting.
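
In rough terms, each daemon is just a receive-and-dispatch loop like the
sketch below (the queue name, library, entry length, and the DISPATCH program
are placeholders for illustration, not the real objects):

    PGM   /* daemon, one per environment */
      DCL VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('TRIGGERQ')
      DCL VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL VAR(&LEN)   TYPE(*DEC)  LEN(5 0)
      DCL VAR(&DATA)  TYPE(*CHAR) LEN(96)  /* timestamp + key + type + env token */
      DCL VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(-1)  /* -1 = wait forever */

      /* block until the trigger CL puts an entry on the data queue */
    LOOP: CALL PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT)
      /* dispatcher assembles the SBMJOB command and runs it with QCMDEXC */
      CALL PGM(DISPATCH) PARM(&DATA)
      GOTO CMDLBL(LOOP)
    ENDPGM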

There are at present ten different processes that retrieve data from the 400.
For those tables that return n rows, a CL is called that does a STRQMQRY,
outputs the rows to a temporary file, and immediately reads the file and
places the data on a data queue (roughly as sketched below). When all of the
info on the 400 has been retrieved, the connection is switched to the
mainframe and the data is processed there. For those tables whose data is on
a data queue, the programs retrieve entries until a receive returns a size of
zero, then press on. The queues themselves are simple FIFO: no key, no
additional information requested.
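
One of those retrieval CLs looks roughly like this (query, file, queue, and
field names are made up for the example, and because of the DCLF the outfile
must exist when the CL is compiled):

    PGM
      DCLF FILE(QTEMP/TBL1TMP)                 /* outfile record format */
      DCL VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('TBL1Q')
      DCL VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL VAR(&LEN)   TYPE(*DEC)  LEN(5 0) VALUE(200)
      DCL VAR(&ROW)   TYPE(*CHAR) LEN(200)

      /* run the QM query and capture the n rows in a temporary file */
      STRQMQRY QMQRY(MYLIB/TBL1QRY) OUTPUT(*OUTFILE) +
               OUTFILE(QTEMP/TBL1TMP) OUTMBR(*FIRST *REPLACE)

    READ: RCVF
      MONMSG MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))   /* end of file */
      CHGVAR VAR(&ROW) VALUE(&TSTAMP *CAT &KEYFLD)    /* fields from the outfile */
      CALL PGM(QSNDDTAQ) PARM(&QNAME &QLIB &LEN &ROW)
      GOTO CMDLBL(READ)
    DONE: ENDPGM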

The queries, files, and queues are shared by all of the processes that need
the same target table, and each process may be doing something different with
the data; for example, one process might be adding a row while another is
deleting a row (or rows). My dilemma is that if I allow several of these
processes to run at once, the data could become blended in the queues, or
worse, if one process begins before another has read its data, it could
overlay the temporary file (although this would need split-nanosecond
timing).

But you said something about keys; I have no experience with those. How does
that work?

-----Original Message-----
From: Chris Bipes [mailto:chris.bipes@cross-check.com]
Sent: Thursday, September 12, 2002 4:28 PM
To: 'midrange-l@midrange.com'
Subject: RE: Multi threading Data Queues


If you have different data on a queue being processed by different
applications, you can make the data queue keyed/FIFO.  The key identifies the
type of entry, and each processing program reads only its own key and gets
its data FIFO within that key.  Otherwise you spend a lot of time reading and
re-sending data queue entries as the wrong processing program puts entries
back on the queue.  And yes, a re-sent entry goes to the end of the queue,
not back where it came from.
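
The queue itself would be created keyed, e.g. CRTDTAQ DTAQ(MYLIB/WORKQ)
MAXLEN(200) SEQ(*KEYED) KEYLEN(2) instead of plain FIFO, and each reader asks
for its own key.  A minimal receive sketch, with placeholder names and a
2-byte process-type key:

    PGM   /* receiver for process type '01' */
      DCL VAR(&QNAME)  TYPE(*CHAR) LEN(10) VALUE('WORKQ')
      DCL VAR(&QLIB)   TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL VAR(&LEN)    TYPE(*DEC)  LEN(5 0)
      DCL VAR(&DATA)   TYPE(*CHAR) LEN(200)
      DCL VAR(&WAIT)   TYPE(*DEC)  LEN(5 0) VALUE(-1)
      DCL VAR(&ORDER)  TYPE(*CHAR) LEN(2)   VALUE('EQ')
      DCL VAR(&KEYLEN) TYPE(*DEC)  LEN(3 0) VALUE(2)
      DCL VAR(&KEY)    TYPE(*CHAR) LEN(2)   VALUE('01')
      DCL VAR(&SNDLEN) TYPE(*DEC)  LEN(3 0) VALUE(0)
      DCL VAR(&SNDINF) TYPE(*CHAR) LEN(1)

      /* only entries whose key equals '01' come back; other keys stay */
      /* on the queue for their own readers, still FIFO within a key   */
      CALL PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT +
                              &ORDER &KEYLEN &KEY &SNDLEN &SNDINF)
    ENDPGM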

Data queues are not single-threaded.  You can have as many jobs sending to or
receiving from a queue as you want.  If the data MUST be processed serially,
I would recommend one job reading from the queue and handling all data types
by calling the necessary processing program.

Now, if it does not matter what order things are processed in, you can have
user entries use a lower key value than the overnight batch processes.  By
reading a keyed data queue with a key value GE *LOVAL, you will get the
lower-keyed entries from the user input screens before the overnight batch
job entries.
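
Using the same placeholder declarations as in the sketch above, that just
means reading with order 'GE' and a low key value:

      CHGVAR VAR(&ORDER) VALUE('GE')
      CHGVAR VAR(&KEY)   VALUE(X'0000')   /* *LOVAL: match any key, lowest first */
      CALL PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT +
                              &ORDER &KEYLEN &KEY &SNDLEN &SNDINF)
      /* on return &KEY holds the key of the entry actually received, so   */
      /* low-keyed user entries drain before higher-keyed batch entries    */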

What are you doing?

Christopher K. Bipes      mailto:Chris.Bipes@Cross-Check.com
Operations & Network Mgr  mailto:Chris_Bipes@Yahoo.com
CrossCheck, Inc.          http://www.cross-check.com
6119 State Farm Drive     Phone: 707 586-0551 x 1102
Rohnert Park CA  94928    Fax: 707 586-1884


-----Original Message-----
From: Weatherly, Howard [mailto:Howard.Weatherly@dlis.dla.mil]

Folks,

I am trying to figure something out here. I have a cross-platform application
using DRDA between the 400 and a mainframe; currently the jobs are queued to
a subsystem job queue that is set to allow only one active job at a time. The
reason is that we use QM queries and data queues to select the n-tuple data
that is needed to update the mainframe.
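
For reference, the single-job throttle is just the job queue entry's maximum
active setting, something like the following (subsystem and queue names are
placeholders):

    ADDJOBQE SBSD(MYLIB/MYSBS) JOBQ(MYLIB/DRDAJOBQ) MAXACT(1)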

Part of this process runs early in the morning, around 4 AM, and loads the
processing queue with perhaps 1,500 to 2,000 requests. This puts the normal
user, entering things manually, behind all of the n(k) batch transactions in
the queue.

I want to run more than one of these entries in the subsystem at a time, but
I am drawing a blank on the best way to set up the data queues for
multi-threaded use.

What happens if a program reads an entry from the queue, decides it does not
belong to it, and places it back on the queue? Will the end-of-queue (entry
size) still be set at the proper place even though an entry has been put back
on the queue? Or is there some other incantation I can use to multithread the
data queues?

