Vinay,
What you are describing is commonly called an "I/O module" ... It is
best implemented as a *SRVPGM, but could also be just a *PGM that gets
called ... If you search the archives at midrange.com for "I/O module"
or "Externalizing I/O", you can find several discussions of this
approach over the years ... see https://archive.midrange.com and
select "SEARCH-ALL"...
I/O modules require careful design to get the benefits you are seeking
... For example, how will each I/O module "know" which version of the
data structure representing the "layout" of that file each program
needs to see -- the layout as it was at the time a given program was
compiled, versus how that file might currently look after several
fields have been added? You will need to pass some type of "version"
number to indicate which "view" of the data each calling program wants
to see. And, if any mistake is made and the "wrong" version is
requested by a program, the results could be "bad" -- data loss or
corruption, etc. -- because you will no longer have the benefit of the
system-provided "level check" errors to protect you.
IMHO, a far better approach is to create new logical views (LFs), as
follows:
Whenever you need to add a new field to a file, first rename the
physical file, say, CUSTMAST, to CUSTMAST0, and create a new logical
file named CUSTMAST (the same name as the original PF) over the
CUSTMAST0 physical file, with all the same fields and keys as the
original PF. This will create a logical file with the same "record
format level ID" as the original physical file.
So, in this way, any existing programs that use "CUSTMAST" will still
"work" and do not need to be recompiled. They will now just be using
the CUSTMAST logical view, which presents all of the original fields
as expected. [Note that you must list all of the fields in the source
member for the LF, so that you can later use CHGPF to alter the
existing PF without impacting those logical views.]
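For example, if the file held fields CUSTNO, CUSTNAME and CUSTADDR
(field names invented here purely for illustration), the DDS for the
CUSTMAST logical file would name every field explicitly:

     A          R CUSTREC                   PFILE(CUSTMAST0)
     A            CUSTNO
     A            CUSTNAME
     A            CUSTADDR
     A          K CUSTNO

Because each field is listed (rather than letting the LF implicitly
share the PF's format), a later CHGPF on CUSTMAST0 does not change
this view. The record format name (CUSTREC in this sketch) should
match the format name of the original PF, so existing programs still
find the format they were compiled against.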
Then, you add the new field(s) to the DDS source for the PF and use
CHGPF to implement the change. Then, you create a new logical view
(LF), say CUSTMAST1 or CUSTMAST2, etc., over CUSTMAST0, so that any
new programs that need the new fields can access them via that new
logical view.
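Continuing the same illustration, suppose the new field is CREDITLIM
(again, an invented name). After adding it to the PF source, the
change might look like:

   CHGPF FILE(MYLIB/CUSTMAST0) SRCFILE(MYLIB/QDDSSRC) SRCMBR(CUSTMAST0)

and the DDS for the new CUSTMAST1 logical file:

     A          R CUSTREC1                  PFILE(CUSTMAST0)
     A            CUSTNO
     A            CUSTNAME
     A            CUSTADDR
     A            CREDITLIM
     A          K CUSTNO

Programs that need CREDITLIM get compiled over CUSTMAST1; everything
else keeps using CUSTMAST, untouched.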
DB2 (the database) takes care of presenting only those fields in that
"view" to your programs, as expected. (After all, that's what a
logical file or "view" is for -- presenting a "view" of the data in a
table or file.)
If you search at http://archive.midrange.com you can find several
discussions of this approach. (This method has been available since
CPF on the System/38, but for some reason it has gone largely ignored
or "unknown" by too many OS/400 shops.)
I think you will find this approach will be far less "work" than
creating I/O modules for each file, and then having to change all of
your programs to CALL the I/O modules.
I hope this helps,

Mark S. Waterbury
> On 6/30/2017 9:32 AM, Vinay Gavankar wrote:
At my client, people are kicking around the idea of having a separate
program (module) that will do any required I/O. Existing files already have
tons of programs using them, so it would be done only for new files that
get created.
The idea is that by doing that, if and when a new field is added to the
file, you shouldn't have to recompile existing programs, unless they
require the new field.
Nobody knows what the performance implications of doing it would be.
The client does not allow compiling files with *lvlchk(*no), and any new
fields are not necessarily added at the end of the file (they want to keep
the Audit Stamp fields always at the end of the file).
The questions I have are:
1. Is it feasible?
2. Is it worth it from a performance enhancement/degradation standpoint?
3. If yes, how would you actually code it (in RPGLE) so that all programs
need not be recompiled?
Thanks in advance for any advice.
Vinay