You could also build a web page that pulls together data from several servers.
Depending on how you need to organize the data, it may be quite easy.

sjl wrote:
I'm not sure where to post this, but I'm sure David will correct me if I'm
wrong in posting it here...

I'm working for a multinational company which uses JDE World software.

We are preparing to build an application which will pull inventory data from
5 different LPARs containing 52 different JDE environments.

This application will reside in a special JDE environment on one of those
LPARs.

When asked what technologies were available, I indicated that we could
use:

1) DDM files
2) Embedded SQL to remotely connect to each system (DRDA)
3) JDBC to access data on remote systems


My preference would be to have a control file which defines the information
necessary to create the DDM files, and have either a CL driver or an RPG IV
program which (a rough sketch follows the list):

1) creates the DDM file in QTEMP
2) overrides the file used by the RPG program to the DDM file
3) reads the data from the remote file and writes it to a local file in the
special environment
4) closes the files
5) repeats steps 1 through 4 for each environment
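
For what it's worth, a bare-bones sketch of that driver in RPG IV, using
QCMDEXC to run the CL commands, might look something like this. All of the
names here (INVDDM, INVLOCAL, MYLIB, and the remote location/library/file
values) are made-up placeholders -- in the real program they'd come from the
control file -- and I've let CPYF stand in for the read/write RPG program
just to keep the sketch short:

    **free
    ctl-opt dftactgrp(*no) actgrp(*new);

    // QCMDEXC: run a CL command from RPG (command string + its length)
    dcl-pr qcmdexc extpgm('QCMDEXC');
      cmd char(3000) const;
      len packed(15:5) const;
    end-pr;

    dcl-s cmd char(3000);

    // These values would come from one control-file record;
    // the literals here are placeholders only
    dcl-s rmtLocn char(10) inz('SYSLPAR1');  // remote location (host) name
    dcl-s rmtLib  char(10) inz('JDEPRD01');  // remote JDE data library
    dcl-s rmtFile char(10) inz('F41021');    // remote physical file

    // 1) Create the DDM file in QTEMP, pointing at the remote file
    //    (*IP assumes DDM over TCP/IP; use an SNA location otherwise)
    cmd = 'CRTDDMF FILE(QTEMP/INVDDM) RMTFILE('
        + %trim(rmtLib) + '/' + %trim(rmtFile)
        + ') RMTLOCNAME(' + %trim(rmtLocn) + ' *IP)';
    qcmdexc(cmd: %len(%trimr(cmd)));

    // 2/3) Pull the remote rows into the local consolidation file
    //      (record formats assumed compatible)
    cmd = 'CPYF FROMFILE(QTEMP/INVDDM) TOFILE(MYLIB/INVLOCAL) MBROPT(*ADD)';
    qcmdexc(cmd: %len(%trimr(cmd)));

    // 4) Clean up; the driver repeats all of this per environment (step 5)
    cmd = 'DLTF FILE(QTEMP/INVDDM)';
    qcmdexc(cmd: %len(%trimr(cmd)));

    *inlr = *on;

The same program (or a CL wrapper around it) would just loop through the
control file, doing this once per environment.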

One of the managers of the department which will be using the data (not an
AS/400/IBM i guy) said that DDM files were too legacy an approach (his exact
words were "1970s"), and one of our operations guys suggested that we use
the embedded SQL approach instead.

I believe that this approach (using DRDA), although very 'new millennium',
is also going to be much more difficult to implement.

After a meeting with IBM yesterday, they confirmed (what I already knew)
that with DRDA, once you connect to the remote database, all database
access occurs on the REMOTE system. To read data from the remote system
and write it to a database file on the LOCAL system would require one of
the following:

1) another IBM software component which we don't currently have loaded
(probably big $$$),

2) designing the application so that all of the extracted data is loaded
into a cursor, then connecting back to the database in the special
environment and doing fetches from the cursor and inserts into the local
database file (sketched below), or

3) creating an extract file on the remote system and, once it's built,
using FTP to send it back to the system on which the application is running.
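
To put something concrete behind option 2, the cursor shuffle would look
roughly like the SQL RPGLE below. This is an untested sketch: it assumes
RDB directory entries named RMTSYS and LOCSYS (WRKRDBDIRE), the default
RDBCNNMTH(*DUW) so both connections can stay open at once, two-phase commit
support on both sides, and made-up file/field names (JDELIB/F41021,
MYLIB/INVSUM, ITEMNO, QTYOH):

    **free
    ctl-opt dftactgrp(*no);

    // Host variables for one inventory row (hypothetical names/types)
    dcl-s itemNo    char(25);
    dcl-s qtyOnHand packed(11:0);

    exec sql set option commit = *chg;   // commitment control on both sides

    // Open a cursor over the remote data
    exec sql connect to RMTSYS;
    exec sql declare c1 cursor for
      select ITEMNO, QTYOH from JDELIB.F41021;
    exec sql open c1;

    // Make the local database current; under *DUW the remote connection
    // goes dormant but its cursor stays open
    exec sql connect to LOCSYS;

    dow *on;
      exec sql set connection RMTSYS;        // back to the remote side
      exec sql fetch c1 into :itemNo, :qtyOnHand;
      if sqlcode <> 0;                       // 100 = end of data
        leave;
      endif;

      exec sql set connection LOCSYS;        // over to the local side
      exec sql insert into MYLIB.INVSUM (ITEMNO, QTYOH)
        values(:itemNo, :qtyOnHand);         // INVSUM must be journaled
    enddo;

    exec sql set connection RMTSYS;
    exec sql close c1;
    exec sql commit;                         // two-phase commit across both

    *inlr = *on;

Note the connection switch on every row -- that round-trip pattern is where
a lot of the overhead and fiddliness comes from.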

All of which makes (in my opinion) this application waaaaay too complicated.

I'm more for the 1970s approach....

Any thoughts?

Regards,
Steve






