Hi Rich
In the context of providing/exchanging data via HTTP - say with SOAP, web
services, or some kind of process that hoovers up HTML into a database - it
seems to me that the idea of sending 5000 records over the web is not as
unlikely a scenario as it might seem.
Certainly there are some data transfer needs out there.  That wasn't the
impression I got from Buck's request, though.  Human operators are good
with a 'screenful' at a time.  If he is really trying to move all the data,
as in some kind of batch request, then I would consider constructing what
an earlier poster to this thread proposed - a custom socket server.  They
aren't terribly difficult to write, even in RPG, and they are supposedly
easy to read from Java.
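For what it's worth, the Java side of reading from such a custom socket server can be quite small. Here is a hedged sketch (the record layout and class name are invented for illustration, not taken from anyone's actual server) of the usual length-prefixed framing, using the same DataOutputStream/DataInputStream calls a client would run over a Socket's streams; a byte array stands in for the wire.

```java
import java.io.*;

// Sketch of a simple length-prefixed record frame, as a custom
// RPG socket server might emit and a Java client might consume.
// The record content (number|name) is purely illustrative.
public class RecordFrame {

    // Write one record as: 4-byte length, then the UTF-8 payload.
    static void writeRecord(DataOutputStream out, String record) throws IOException {
        byte[] payload = record.getBytes("UTF-8");
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read one record back: length first, then exactly that many bytes.
    static String readRecord(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload);
        return new String(payload, "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        // In real use these streams would wrap socket.getOutputStream()
        // and socket.getInputStream(); a byte array stands in here.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeRecord(out, "00042|ACME TOOL & DIE");
        writeRecord(out, "00043|BUCKEYE SUPPLY");

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readRecord(in));
        System.out.println(readRecord(in));
    }
}
```

The point of the length prefix is that the reader never has to guess where one record ends and the next begins - which is exactly the protocol/handshake agreement that has to be negotiated by hand for a custom server.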
But the services I am talking about are standard and won't give the network
and firewall guys furrowed brows. Yes, you can write a custom socket
routine, but why would or should you if there is a standard, agreed way to
do this over the wire (unless we are talking totally in-house)?

In the web world I see custom socket programs and the like as something
of a red herring; you then have to go back and agree the whole
protocol/handshake thing, whereas HTTP (and all the associated
bits now built around it) already does all that for you: hence the
attractiveness of using it as a delivery medium despite the inefficiency.

I am reminded of all those times when I ended up transferring data on
diskette in CSV format even when I knew the people I was dealing with
could read the data in a format that was easier for me to produce: it just
wasn't worth explaining to them how to do it.

I didn't necessarily gather from Buck's post that there was or wasn't a
human involved - I just made a comment that I could see a situation where
5000 records was not as impractical as it seemed on the surface of the
question.

Given Buck's original request, I wonder if there is a way to send the first
page of the HTML response (equivalent to the first page of the subfile) and
somehow continue extracting the remainder of the data and cache it, so that
the query is executed once but subsequent extracts do not have to repeat
the database access.
Many things can be cached.  Some web setups cache pages.  Some have caches
of persistent database connections.  I'm sure that a database recordset
could somehow be cached, but I've never tried it.  One reason why I
haven't messed with building caches is that -- on an iSeries anyway -- they
seem perhaps as likely to wind up back on disk as they are to stay resident
in memory.  So, why bother?  Also, data in caches does go stale -- and I
usually want my users to see the most up-to-date data.
Depending on what kind of object it is cached in, access time might be
affected (I'd have to do some testing to examine this); you can walk through
a user space pretty damn quickly using a pointer, and SETOBJACC also offers
some control over how this happens. But I'm getting off track: I think I
pretty much agree with you about caching data, as well as the points you
made about data currency.
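To make the query-once idea concrete, here is a hedged sketch in Java (the class and method names are my own invention, and the query is a stand-in for real database access) of a result cache that runs the query once, holds the full result set in memory, hands back one 'screenful' at a time, and re-runs the query after a maximum age so the staleness concern is at least bounded.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: run an expensive query once, keep the full
// result set in memory, and serve it back a page at a time.  A
// maximum age addresses the staleness concern: once it elapses,
// the next page request re-runs the query.
public class PagedResultCache {
    private final long maxAgeMillis;
    private List<String> rows;   // cached result set
    private long loadedAt;       // when the query last ran

    public PagedResultCache(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // Stand-in for the real database access (e.g. JDBC against DB2).
    protected List<String> runQuery() {
        List<String> result = new ArrayList<>();
        for (int i = 1; i <= 5000; i++) {
            result.add("record " + i);
        }
        return result;
    }

    // Return one page, re-running the query only when the cache
    // is empty or older than maxAgeMillis.
    public List<String> getPage(int page, int pageSize) {
        long now = System.currentTimeMillis();
        if (rows == null || now - loadedAt > maxAgeMillis) {
            rows = runQuery();
            loadedAt = now;
        }
        int from = page * pageSize;
        if (from >= rows.size()) {
            return new ArrayList<>();
        }
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }
}
```

The first getPage() call pays for the query; every later page comes from memory, much as a subfile loaded in full serves subsequent roll-ups without touching the database again.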

Thinking about what I was proposing, a better term might have been 'buffer'.

Regards
Evan Harris


