Venu,

I can't speak to the OPTIMIZE parameter, but the blocking is a good idea. The main thing you want to accomplish is to have the processor(s) fully busy with the aging and the disks fully busy getting and putting data. You will want to do two things:

1) Process the file SEQUENTIALLY (no K in the F spec, plus OVRDBF yourfile NBRRCDS(xxx) SEQONLY(*YES xxx)). This removes all the overhead of the index, and it allows the system to actually move whole blocks from disk to memory and from memory to your program.

2) Use a SANE blocking size (the xxx above). Remember that OS/400 WILL HONOR your blocking: if you say get 10000 records, it WILL. Where it will put them is the problem; it may need to page out your ENTIRE APPLICATION to get the data into memory. Do you see a problem? You want the blocking to be big enough to be helpful and small enough to fit in memory alongside your application. Most likely a blocking factor between 100 and 500 will be sufficient.

Additionally, with a really big file you may want to have job A do records 1-100000, job B do records 100001-200000, etc. Especially if you have multiple CPUs, this will greatly speed up the process.

Larry Bolhuis
Arbor Solutions, Inc
lbolhui@ibm.net

VENU YAMAJALA wrote:
> Hi All,
>
> Recently we wrote a DateAging application. The application uses multi
> threading. Each data file in the user library is aged simultaneously as an
> independent aging job. We used this to enhance the performance. The earlier
> version aged only one data file at a time. Although we gained a lot of
> improvement with the new design, we are still facing some difficulties,
> particularly with files of bigger size.
>
> We have this application written in ILE, and we have different modules to
> do different tasks; we have service pgms, binding directories... all that
> stuff. We have compiled the main program with
>
> CRTBNDRPG with OPTIMIZE(*BASIC).
>
> We are still facing some performance problems. A data file of 1 million
> records is taking more than 15 hrs to complete the job. Does this have
> anything to do with OPTIMIZE(*BASIC)? I read in the ILE manuals that there
> will be significant improvements if we change this to OPTIMIZE(*FULL), but
> I don't know what improvements it is referring to. The majority of the time
> is spent in the I/O of the data file. How is this OPTIMIZE parameter going
> to influence the data file I/O? The manual says that there will be
> significant improvements with OPTIMIZE(*FULL) but there will be some
> problems also. What are these difficulties?
>
> We are also thinking of changing the record blocking factor for the data
> file. That is, before calling our RPG program, we will do an OVRDBF of the
> data file with NBRRCDS as, say, 10000. Will there be any improvements, and
> what are the possible problems we will face if we change the NBRRCDS
> parameter?
>
> Any ideas or thoughts. Thanx in advance for any help.
>
> Rgds
> Venu
>
> +---
> | This is the Midrange System Mailing List!
> | To submit a new message, send your mail to MIDRANGE-L@midrange.com.
> | To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
> | To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
> | Questions should be directed to the list owner/operator: david@midrange.com
> +---
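[Archive editor's note] The OVRDBF advice above can be sketched as a small CL wrapper. This is a hedged sketch only: YOURLIB, YOURFILE, and AGEPGM are placeholder names, and the block size of 250 is just one value inside the 100-500 range suggested above.

```
/* 1) Force sequential-only access with a sane block size before      */
/*    calling the aging program, then drop the override afterwards.   */
OVRDBF     FILE(YOURFILE) TOFILE(YOURLIB/YOURFILE) +
             NBRRCDS(250) SEQONLY(*YES 250)
CALL       PGM(AGEPGM)
DLTOVR     FILE(YOURFILE)

/* 2) Optional: split a very large file across jobs, one record range */
/*    per job. AGEPGM is assumed here to accept from/to relative      */
/*    record numbers and position itself accordingly; that RPG logic  */
/*    (and the per-job override) is not shown.                        */
SBMJOB     CMD(CALL PGM(AGEPGM) PARM('0000001' '0500000')) JOB(AGE1)
SBMJOB     CMD(CALL PGM(AGEPGM) PARM('0500001' '1000000')) JOB(AGE2)
```

Note that an OVRDBF issued in the submitting job does not automatically apply to submitted jobs; each submitted job would need its own override.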
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.