The [control] information stored in the member is unique to the spool file that was generated. How much data any one spool file requires is not fixed; it is variable. The [indexed] system control block for all spooled data wants to store only enough to locate the spool data\details; i.e. only the pointer to the member. Without the member as storage for the splf-unique data, either the control block would have to be bloated with that additional variable-length data [thus no longer a slim store of only the keyed data & pointers to members\data], or that splf-unique data would have to be located somewhere else; i.e. somewhere other than as an effective /part of/ the spool container, which is the member object that the pointer addresses.
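As a conceptual illustration of that trade-off only [a sketch; these struct names and layouts are invented and do not reflect the actual QSPL control block], a slim keyed entry holding just a pointer to the member compares to an entry forced to carry the variable-length splf-unique data inline:

/* Hypothetical sketch in C; not the actual spool control block layout. */

/* Slim entry: fixed size, easily indexed; the splf-unique data lives
 * in the member that the pointer addresses. */
struct spool_index_entry {
    char  key[32];        /* keyed lookup data, e.g. job\file identification */
    void *member_ptr;     /* pointer to the member acting as spool container */
};

/* Bloated alternative: the control block itself must carry the
 * variable-length splf-unique data, so entries are no longer fixed size. */
struct spool_index_entry_fat {
    char           key[32];
    unsigned short attr_len;  /* length of the variable attribute data */
    char           attrs[1];  /* variable-length splf-unique data follows */
};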

That is what I alluded to as making the current *MEM implementation convenient; i.e. storing the splf-unique data as /part of/ the container, but not as part of the data itself. The *STMF has some attributes and text fields to describe the /stream file/ object, but what it provides is very limited; in part, to reduce its overhead [memory footprint]. To keep the pointer as the locator of a *STMF instead of a database *MEM, the splf-unique details would really best become part of the data, or the *STMF would become generally less efficient [for non-spooling use] for having to become even more like the DBM. Perhaps the database member always should have been similarly slimmed, but the DBM gives higher function for more versatility in what it can accomplish; e.g. the compilers, the binder, various spooling, system-only access, & the RLA DB I/O program interface all are highly dependent on, and more functional for, the additional data the member contains for them.

That there are multiple files and multiple members would likely go unchanged from a STMF perspective; i.e. multiple STMF in multiple directories would likely be similar. Just as there are reasons to limit how many members act as containers under one file [the current scheme is 2K IIRC], there are often similar reasons to impose limits under directories. And in any case the same concept of the object-based system conserving addresses and limiting object creation by only truncating the data portion of an object would persist; i.e. object deletion and creation\instantiation is inherently more expensive than truncate and re-use.
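To make the truncate-versus-recreate cost point concrete, here is a minimal sketch using POSIX calls on a hypothetical path; the actual object handling in the OS is different, but the cost argument is the same: emptying an existing container is cheaper than destroying it and instantiating a new one.

#include <fcntl.h>
#include <unistd.h>

/* Cheaper: keep the object [and its address]; just empty the data portion. */
static int reuse_container(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    int rc = ftruncate(fd, 0);   /* truncate the data; the object persists */
    close(fd);
    return rc;
}

/* More expensive: delete the object and create\instantiate a replacement. */
static int recreate_container(const char *path)
{
    unlink(path);                /* object deletion */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);   /* creation */
    if (fd < 0)
        return -1;
    close(fd);
    return 0;
}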

FWiW spool data as stream data is effectively limited to binary access. Because "delimiters" in the stream may themselves be part of the spool data, "delimited records" are not generally feasible, such that the data must be read in chunks of storage rather than as records. What explicit advantage would there be to read those records as a binary stream versus as contiguous rows in a virtual stream, except from the perspective of a language which might be better suited for one access method versus another? I am not aware of any limitation on storing data encrypted or compressed in the dataspace of a member versus the dataspace of a *STMF, nor do I recall many specific anecdotes of problems which would suggest the choice of *FILE+*MEM+*QDDS is generally recognized as being inferior to an alternative of *STMF+*MEM+*QDDS. One issue I am familiar with is [mis]use of *ALL versus *ALLUSR for libraries in requests, where the database is poorly designed for its insistence on a *SHRNUP lock on the *FILE to prevent new members from being added to an existing spool database file while a member list is being generated; but the spooling simply moves on to [or even creates] another file as recovery from that condition.
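For what that binary access looks like in practice, here is a minimal sketch assuming a POSIX-style stream file ["spool.dat" is a made-up name]: because any byte value, including would-be delimiters, may appear in the spool data, the stream is read in fixed-size chunks of storage rather than as delimited records.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("spool.dat", "rb");     /* binary access only */
    if (fp == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    char   chunk[4096];
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0) {
        /* process n bytes of raw spool data; no delimiter scanning */
    }

    fclose(fp);
    return EXIT_SUCCESS;
}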

Regards, Chuck

Alan Campin wrote:
I won't suggest that you store the control block information in
the stream file, just the data. All the control block information
would stay in a standard table, or at least that is how I would
do it. My point is that spool data is inherently stream in
nature. Why not store it in a stream file instead of the
craziness of creating new files and multiple members, with data
spread between members and between files, etc., etc.? It would
seem that would solve the conflict issues that exist currently.
You could also store it compressed and even encrypted if you
wanted. I am referring to problems discussed in a previous post
to the forum.

Since all spool I/O is through a file interface, wouldn't you be
making changes in just one place? There may be issues I don't
know about.

On Thu, Sep 3, 2009 at 4:24 PM, CRPence <CRPbottle@xxxxxxxxx>
wrote:

Alan Campin wrote:
Bummer. I wonder why IBM holds on to that system when they have the IFS with all of the problems they have with QSPL?

On Thu, Sep 3, 2009 at 1:12 PM, Pete Massiello wrote:

NO, they still use QSPL libraries

Not sure what problems are alluded to.... And is that question
asking why not use stream files instead of database files in
QSPL because of all the problems with QSPL, or why not use
stream files because those are just as problematic as QSPL ;-)

Unless a problem was specific to the implementation object,
changing what object [type] is used would probably be moot in
most cases. What problem(s) would justify a rewrite from
using database to stream I/O? If the common data management
allowed a transparent redirect, then perhaps not a huge deal
[i.e. effectively no change, but an override to STMF], but
otherwise difficult to justify.

FWiW One reason database members had been used is that the
spooling also supports job /spooling/ of fixed-record-length
data, with support for *SRC & *DATA inline //DATA, which of
course are the two types of database physical file members.
That part would probably remain unchanged even if the
implementation object changed. IIRC there was also for some
time an issue whereby stream file limits were too small to hold
the largest spool files in one object; the original spool
control block was designed to have one spool file entry point
to the description and data of each, with just the one pointer.

The database member implementation object also enables storing
control information beyond just the /object/ information [e.g.
owner, text, expiration, et al]. If stored in stream files,
then all accesses would have to be binary anyhow, so those
details could be stored before or after the actual spool data
in the same space; the maximum amount of spool data in a file
would be limited by the control information, but now that
stream files can be so large, that is hardly a concern anymore.
But if the control information were just stored in the data
[instead of in the object], why not just rewrite the OS to work
like a PC ;-)
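As a hedged sketch of the idea in that last quoted paragraph: if the splf-unique control information were kept in the *STMF, it could be written as a small header ahead of the actual spool data in the same space, with all access being binary. The header layout, field names, and function below are invented for illustration only.

#include <stdio.h>

struct splf_header {            /* hypothetical control information */
    char      owner[10];
    char      text[50];
    long long data_length;      /* length of the spool data that follows */
};

static int write_spool_stmf(const char *path,
                            const struct splf_header *hdr,
                            const char *data, size_t len)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL)
        return -1;

    fwrite(hdr, sizeof *hdr, 1, fp);   /* control info stored before the data */
    fwrite(data, 1, len, fp);          /* then the actual spool data */

    fclose(fp);
    return 0;
}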
