The system limit for the number of spooled files per job used to be hard coded at 9999, which is to say that after spooled file number 9998 the next attempt to create a spooled file results in an escape message (I don't remember which) and a job log. A few releases ago IBM added a system value, QMAXSPLF, which defaults to 9999 but can be raised to 999999. This can only be regulated at the system value level, not at the job level.

Note that there are restrictions on the maximum number of spooled files in *SYSBAS in various states (that is to say *RDY, *HLD, etc. [not New York, Minnesota, Kansas, or even Euphoria]; I don't remember the particulars), but the total limit in *SYSBAS is 2.3M. In an IASP (which I hate for reasons that have nothing to do with spooled files), you can go to 10M.

Submitting a new job to produce each spooled file would be a disaster in overhead, particularly if you created job logs for each job, not to mention that your job numbers would wrap daily if you pass 999999, although it would work. One technique, if 999999 isn't enough, would be to have the job count the number of spooled files it generates and then submit a new job before it reaches 999999 (a rough sketch of this appears after the quoted message below). I don't know of an easy or efficient way to see the number of spooled files generated by a job, but possibly there's an API; there is no system command or TAA Tool command that I know of to do this. Regardless, beware of the 2.3M limit, or bad things will happen (like the flying monkeys in the Wizard of Oz).

Al

Al Barsa, Jr.
Barsa Consulting Group, LLC

400>390
"i" comes before "p", "x" and "z"
e gads
Our system's had more names than Elizabeth Taylor!

914-251-1234
914-251-9406 fax
http://www.barsaconsulting.com
http://www.taatool.com
http://www.as400connection.com

From: sriedmue@xxxxxxxx
Sent by: midrange-l-bounces@xxxxxxxxxxxx
To: midrange-l@xxxxxxxxxxxx
Date: 03/30/2007 01:00 PM
Subject: Job cannot create any more spooled files
Please respond to: Midrange Systems Technical Discussion <midrange-l@midrange.com>

I have a CL that I wrote which runs in a loop. It generates a spooled file, copies the data down to a PF, and then deletes the spooled file. There is a 60 second delay inside the loop so I can gather this data every 60 seconds. After running for a long time, it eventually blows up on CPF4167 "Job cannot create any more spooled files." Even though I am deleting the spooled files as I go, they remain attached to the job in a FIN status. These spooled file "ghosts" count towards the total number of spooled files, and therefore my job eventually blows up.

Is there any way around this, short of having my job perform a SBMJOB every 60 seconds, rather than generating the spooled files itself?

Thanks,
Steve
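For illustration, here is a minimal CL sketch of the count-and-resubmit technique Al describes, under several assumptions: Steve did not say which command produces his spooled file, so WRKACTJOB OUTPUT(*PRINT) and its printer file QPDSPAJB stand in for it; the program and file names MYLIB/SPLCOLLECT, MYLIB/MYPF, and the job name SPLCOLLECT are hypothetical; and the hand-off threshold of 9000 is an arbitrary value comfortably below the old 9999 per-job limit (raise it if QMAXSPLF has been raised). It is a sketch, not a drop-in solution, and it works around the FIN-status "ghost" problem only by ending the job and starting a fresh one before the limit is reached.

             PGM
             DCL        VAR(&COUNT) TYPE(*DEC) LEN(7 0) VALUE(0)
             /* Hand off before the per-job spooled file limit is reached */
             DCL        VAR(&MAX)   TYPE(*DEC) LEN(7 0) VALUE(9000)

             /* Take one snapshot to a spooled file (stand-in for the real report) */
 LOOP:       WRKACTJOB  OUTPUT(*PRINT)

             /* Copy the snapshot into the PF, then delete the spooled file */
             CPYSPLF    FILE(QPDSPAJB) TOFILE(MYLIB/MYPF) JOB(*) SPLNBR(*LAST) MBROPT(*ADD)
             DLTSPLF    FILE(QPDSPAJB) JOB(*) SPLNBR(*LAST)

             /* Count what this job has generated so far */
             CHGVAR     VAR(&COUNT) VALUE(&COUNT + 1)

             /* Near the limit: submit a fresh copy of this program and quit */
             IF         COND(&COUNT *GE &MAX) THEN(DO)
                SBMJOB     CMD(CALL PGM(MYLIB/SPLCOLLECT)) JOB(SPLCOLLECT)
                RETURN
             ENDDO

             DLYJOB     DLY(60)    /* wait 60 seconds between samples */
             GOTO       CMDLBL(LOOP)
             ENDPGM

Because deleted spooled files still count against the job (the FIN "ghosts" Steve describes), only the SBMJOB hand-off resets the count; the DLTSPLF alone does not buy the job any headroom.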