Hi, Dave:
Another way to control the "scope" of sharing files (and shared ODPs,
etc.) is by using Activation Groups. Specifically, you can create a
_*NEW_ or _named_ activation group where a given set of one or more
files are opened, and one or more programs or service programs are
activated. By having your service program(s) that service those files
run in their own activation group, you can have more explicit control
over the scope of those resources (open files, ODPs, buffers, etc.).
If you always want a given service program for a given set of resources
(files etc.) to be "shared" within your job, you could create that
*SRVPGM to run in its own named activation group (e.g. where the name of
the ACTGRP could be the same as the name of the *SRVPGM).
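For example, something like this (the service program and library names here are just placeholders):

```
CRTSRVPGM SRVPGM(MYLIB/ORDERSRV) MODULE(MYLIB/ORDERSRV) +
          EXPORT(*SRCFILE) ACTGRP(ORDERSRV)
```

With ACTGRP(ORDERSRV), every caller in the job, regardless of which activation group the caller runs in, shares the one activation of ORDERSRV and its open files.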
Another alternative is to always specify ACTGRP(*CALLER) on the
CRTSRVPGM command, and then, if you have a program that needs to call
this service program to "open" its resources (files) in a "non-shared"
mode, just create _that_ (calling) program to run in its own _*NEW_ or
_named_ activation group, and that way, a separate instance of those
*SRVPGM(s) needed by that program will be activated into that AG.
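A sketch of that second approach (again, program and library names are just illustrative):

```
CRTSRVPGM SRVPGM(MYLIB/ORDERSRV) MODULE(MYLIB/ORDERSRV) +
          ACTGRP(*CALLER)

CRTPGM PGM(MYLIB/ORDENT) MODULE(MYLIB/ORDENT) +
       BNDSRVPGM(MYLIB/ORDERSRV) ACTGRP(*NEW)
```

Here ORDENT gets a fresh activation group every time it is called, so it (and the *SRVPGMs it activates there) get their own private instances of those resources, separate from the rest of the job.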
Note that the executable code (instruction stream) for *PGMs and
*SRVPGMs is re-entrant, so that is always inherently shared. In other
words, only one copy of the executable code is loaded into virtual
memory, no matter how many activation groups (or even multiple jobs)
are using that code. But each "instance" or "activation" of a program
causes some (e.g. static) memory resources to be allocated and
initialized for that activation, and those resources are "scoped" to the
activation group.
_Data queues - server jobs_
Another possible "old school" approach for truly sharing file resources,
even across job boundaries, is to create a "service job" (aka.
"never-ending program") that listens (waits) on a request data queue,
and sends responses to response data queues. Since you mentioned the
idea of "encapsulating" each file with a *SRVPGM, in a previous post,
presumably to have one central place to enforce your "business rules",
etc., this approach gives you a way to do that, but with the benefits of
"sharing" all of those resources (open files, ODPs, buffers, etc.)
across multiple jobs.
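For example, the request and response queues for such a server job might be created something like this (queue names, lengths, and the keyed-response idea are just one possible design):

```
CRTDTAQ DTAQ(MYLIB/ORDREQ) MAXLEN(512) SEQ(*FIFO)
CRTDTAQ DTAQ(MYLIB/ORDRSP) MAXLEN(512) SEQ(*KEYED) KEYLEN(10)
```

A keyed response queue lets each requesting job receive only its own responses (e.g., keyed by job name or some request token), while all requests funnel through the one FIFO queue to the server job.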
You could use a "thin layer" of access procedures in a *SRVPGM to
simplify access to the relevant data queues within each of the programs
that uses those files. Or you could even use RPG SPECIAL FILEs or Open
Access for RPG (OAR) to provide the "wrapper" for the data queues logic,
and that way, you make only minimal changes to all of your existing
programs, as far as the FILE I/O logic is concerned.
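The "thin layer" procedures would just call the system data queue APIs. A sketch of the prototypes involved, in fixed-form ILE RPG (the 512-byte entry length is only an example and must match the MAXLEN of the queues):

```
     D QSndDtaq        PR                  extpgm('QSNDDTAQ')
     D  dtaqName                     10a   const
     D  dtaqLib                      10a   const
     D  dataLen                       5p 0 const
     D  data                        512a   const
     D QRcvDtaq        PR                  extpgm('QRCVDTAQ')
     D  dtaqName                     10a   const
     D  dtaqLib                      10a   const
     D  dataLen                       5p 0
     D  data                        512a
     D  waitTime                      5p 0 const
```

The server job sits in a loop calling QRCVDTAQ with a long (or negative, i.e. unlimited) wait time, processes each request against its already-open files, and sends the result back with QSNDDTAQ.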
NOTE: If you decide to go in this direction of "encapsulating" file
access, it should be beneficial to consider designing this access layer
along the lines of the various "business objects" that are represented
by the underlying files, rather than continuing to access each file
directly (through its own I/O module). As an illustration, suppose we
have a typical "order entry" application with an Order Header file and
an Order Details file. Instead of having each order entry program access
the ORDHDR and ORDDTL files directly, you could create an "access layer"
for the "order" entity -- the "business object" known as an "order"
-- and this (service program) layer provides procedures like:
createNewOrder
addItemToOrder
removeItemFromOrder
generateOrderInvoice
printPackingListForOrder
etc.
In this way, you isolate the "navigation" (e.g., chaining from header
to detail, or vice versa), and even the knowledge that there are (in
this case) _two_ files or tables used to implement "orders", from the
rest of the application, which now deals only with a "business object"
named "Order".
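The exported interface of such an "order" service program might look something like this (fixed-form prototypes; all names, types, and parameters are purely illustrative):

```
      * Returns the new order number
     D createNewOrder  PR            10p 0
     D  customerId                   10p 0 const
      * Returns *ON if the item was added successfully
     D addItemToOrder  PR              n
     D  orderNumber                  10p 0 const
     D  itemNumber                   15a   const
     D  quantity                      7p 0 const
     D generateOrderInvoice...
     D                 PR
     D  orderNumber                  10p 0 const
```

Notice that nothing in these prototypes mentions ORDHDR or ORDDTL at all; the caller never needs to know how many files lie behind an "order."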
Search the RPG400-L mailing list for the subject "i/o module" and you
can find a thread about this approach, starting on 2008-05-28:
http://archive.midrange.com/rpg400-l/200805/msg00543.html
and further down in that thread, my response:
http://archive.midrange.com/rpg400-l/200805/msg00575.html
See also this Redbook:
http://www.redbooks.ibm.com/abstracts/sg246393.html
Hope this helps,
Mark S. Waterbury
> On 7/1/2012 9:26 AM, Dave wrote:
2012/6/30 Jon Paris<jon.paris@xxxxxxxxxxxxxx>:
On 2012-06-30, at 1:00 PM, midrange-l-request@xxxxxxxxxxxx wrote:
Wow, that's cooled me off a little, I must say!
If you are on V6 you can get the benefit of a single service program without the pain by passing the file as a parameter to the procedure.
But WITH the pain of analyzing and modifying all those procedures!!