Thanks, Mark! Your detailed explanation really helps me understand the whole process. Now I am scared. Programmers can run FNDSTR in batch at any time, and depending on what they want to search, the program could read through every source member on the system scanning for that string. What I didn't realize before, though, was that running this program causes every member "touched" to be paged into real main storage. The development system is separate from production, but we could still contend with each other. Even someone running FNDSTRPDM (as opposed to FNDSTR) against a large library could page out important jobs while finding a string.

The job to page all these members into main memory (using SETOBJACC) would run very early in the morning when no one is here. That might not help, though, if other jobs later page on top of them, so that is probably not a good option, as you mentioned. Actually, now I would like to avoid paging these members into main memory at all. We probably just have to run FNDSTR as a scheduled job, or accept that it will slow down the system. Or, better yet, as you suggested, submit the FNDSTR job to a separate memory pool. How might I do this? I could make the pool ridiculously small so it does not contend with other jobs while it runs. Expert Cache sounds interesting, but it must have its drawbacks.

Thanks,
Craig

** Mark wrote:

Hi, Craig:

OS/400 uses a single large virtual address space, or "single-level" storage. In this scheme, real main storage is truly just a "cache" of the most recently used "pages" of objects recently "touched" (used, read, or written to).

So, based on the above, I suspect that what is really happening is this: the first time you run FNDSTRPDM, it "touches" these members, and their contents are paged into real main storage. If your system is not too "busy" and you have lots of main storage, then there is nothing to force those pages out ("stolen"), and so the next time you run FNDSTRPDM over the same source files (set of members), you notice that it seems to run much faster.

Unless you have vast amounts of real main storage, I do not think it would be a good idea to submit jobs that read every member in an attempt to page them all into real main storage. At some point you will exceed the total amount of real main storage, and this will start to force out other pages, which could be pages for other "production" jobs and users who may really need that data, so you could have a major negative impact on the overall performance and throughput of your system.

You could create a separate memory pool and then run your RPG program that opens and reads all the members as a batch job, in a subsystem that uses only that one memory pool; that might speed things up, and at least this way you limit the total amount of real main storage you will use to just that one pool. But the whole time you have this memory pool allocated to a particular subsystem, no one else can use it. So you are probably best off just letting OS/400 take care of itself.

OS/400 also has something called "Expert Cache" that you can enable for a given storage pool; that might help, too.

Unfortunately, as far as I can tell, SETOBJACC only works on one member at a time, so you would have to issue SETOBJACC for each member, and there are probably too many members in a single source file to fit into one memory pool at once anyway, so that would rather defeat the purpose.

Hope this helps.

Regards,

Mark S. Waterbury
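A minimal CL sketch of the separate-pool approach Mark describes: size one of the shared pools deliberately small, build a subsystem whose only user pool is that shared pool, and submit the scan there so its paging stays inside that pool. The library, subsystem, job queue, pool number, source file, and search string below are placeholders, not details from Craig's system.

    /* Give the shared pool a small size (in KB) and a low activity level */
    CHGSHRPOOL POOL(*SHRPOOL10) SIZE(32768) ACTLVL(2)

    /* Subsystem whose pool 1 is that shared pool, plus a job queue and a catch-all routing entry */
    CRTSBSD SBSD(MYLIB/FNDSTRSBS) POOLS((1 *SHRPOOL10)) MAXJOBS(1)
    CRTJOBQ JOBQ(MYLIB/FNDSTRJOBQ)
    ADDJOBQE SBSD(MYLIB/FNDSTRSBS) JOBQ(MYLIB/FNDSTRJOBQ) MAXACT(1)
    ADDRTGE SBSD(MYLIB/FNDSTRSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD) POOLID(1)
    STRSBS SBSD(MYLIB/FNDSTRSBS)

    /* Submit the source scan so the members it touches are paged into *SHRPOOL10 only */
    SBMJOB CMD(FNDSTRPDM STRING('SOMETEXT') FILE(SRCLIB/QRPGLESRC) +
                         MBR(*ALL) OPTION(*NONE) PRTMBRLIST(*YES)) +
           JOB(SCANSRC) JOBQ(MYLIB/FNDSTRJOBQ)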
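The Expert Cache and SETOBJACC pieces Mark mentions would look roughly like this: PAGING(*CALC) on CHGSHRPOOL is what turns Expert Cache on for a shared pool, and SETOBJACC takes only one member per invocation, which is the scaling problem he points out. Pool, library, file, and member names are again placeholders.

    /* Turn Expert Cache on for the shared pool (PAGING(*FIXED) turns it back off) */
    CHGSHRPOOL POOL(*SHRPOOL10) PAGING(*CALC)

    /* Load one source member into the pool -- one command per member */
    SETOBJACC OBJ(SRCLIB/QRPGLESRC) OBJTYPE(*FILE) POOL(*SHRPOOL10) MBR(SOMEMBR)

    /* Remove it from main storage when finished */
    SETOBJACC OBJ(SRCLIB/QRPGLESRC) OBJTYPE(*FILE) POOL(*PURGE) MBR(SOMEMBR)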