David and Paudie:

...Things that fill a journal... I like the "analyze an outfile" approach. Don't record opens and closes unless you need them. Don't record both before and after images unless you are using commitment control or have another good reason to do so. Don't journal every file on your machine; select the libraries or tables that get journaled. I wouldn't bother recording changes to a developer's test library. (A sketch of the relevant commands follows at the end of this note.)

Check your SMAPP setting - use the EDTRCYAP command. SMAPP is "system-managed access-path protection", the feature that automatically journals access paths for large indexes. If the machine crashes, SMAPP is trying to make sure that you don't have to rebuild all of your large access paths - it protects you from multiple-day recoveries. If a physical file is journaled, the SMAPP entries go into the same journal. If a physical file is not journaled, the system creates a journal and puts both the physical and logical changes in there (this isn't perfectly correct, but it's close enough). So your journals could be growing rapidly because you have one or more large files that are heavily changed (inserts, updates, or deletes), and the stuff in your receivers is SMAPP data - access paths automatically journaled by the machine.

There are parameters to control how the SMAPP data is managed in user journals; look at the CHGJRN command or WRKJRNA. There are also journal receiver size thresholds. If yours are set too small, you could be changing receivers three times per day. In my experience, until you are changing receivers several times per minute, you don't have too much to worry about. Of course, this depends on your machine size - big machines can handle more journaling than smaller systems.

...DataMirror uses a lot of capacity... Journaling can be inconvenient if your machine isn't configured for it. I have spent some quality time looking at Vision Solutions' OMS and ODS and at Lakeview's MIMIX and its audit reader process. Someday I would like to spend time with each of them working to optimize their code. Oh well. I haven't taken the same opportunity to look at DataMirror, but I have a patent in this area, and, several years ago, DataMirror spent a lot of time trying to convince me to recommend their product to my customers. The original concept behind DataMirror was "like the other mirroring products, but with the addition of field and record manipulation functions" - in other words, a high-function mirroring product. If you have some of that function turned on, DataMirror stops being a high-speed, low-drag mirroring application like OMS and MIMIX and uses more CPU cycles.

All the AS/400 mirroring products exist because the AS/400 does not support the concept of shared DASD: changes have to be copied to another machine and applied to a copy of the database. If DataMirror is using a lot of cycles, it could simply be encountering a lot of database changes. Look to your batch update jobs - they usually perform far more updates than interactive jobs do. When you are using the outfile technique that David Shaw suggested, look at the job name that is creating most of those changes. It could be that one batch job is carelessly updating a huge file and filling your journals.

That's all I have time for right now. I hope that this helps. A few command sketches follow.
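To make the first paragraph concrete, here is a minimal CL sketch. The file, library, and journal names (PRODLIB/ORDERS, DEVTEST/WORKFILE, JRNLIB/PRODJRN) are hypothetical placeholders - substitute your own:

    /* Record after-images only, and skip the open/close entries */
    STRJRNPF   FILE(PRODLIB/ORDERS) JRN(JRNLIB/PRODJRN) +
                 IMAGES(*AFTER) OMTJRNE(*OPNCLO)

    /* Stop journaling a developer's test file that doesn't need it */
    ENDJRNPF   FILE(DEVTEST/WORKFILE)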
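For SMAPP, EDTRCYAP is the interactive screen; DSPRCYAP and CHGRCYAP do the same job from a command line or a program. The 60-minute target below is only an example - pick a value that matches your recovery window:

    /* Show the current target recovery time and estimated exposure */
    DSPRCYAP

    /* Change the system-wide access-path recovery target (minutes) */
    CHGRCYAP   SYSACCPTH(60)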
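For the receiver thresholds and the SMAPP data in user journals, another sketch with hypothetical names. RCVSIZOPT(*RMVINTENT) tells the system to strip the internal SMAPP entries out of the receiver when they are no longer needed - check that your release supports it:

    /* Create a receiver with a bigger threshold; THRESHOLD is in KB, */
    /* so 500000 is roughly 500 MB                                    */
    CRTJRNRCV  JRNRCV(JRNLIB/PRODRCV01) THRESHOLD(500000)

    /* Attach it, let the system manage receiver changes, and remove  */
    /* internal SMAPP entries. Keep DLTRCV(*NO) while a replication   */
    /* product such as DataMirror still needs to read the receivers.  */
    CHGJRN     JRN(JRNLIB/PRODJRN) JRNRCV(JRNLIB/PRODRCV01) +
                 MNGRCV(*SYSTEM) DLTRCV(*NO) RCVSIZOPT(*RMVINTENT)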
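And the outfile analysis itself: dump a representative window of entries, then count the "hits" per file and per job. The date format on FROMTIME/TOTIME follows your job's date format, and the JO* field names come from the *TYPE1 model outfile:

    /* Dump a couple of busy hours of journal entries to an outfile; */
    /* RCVRNG(*CURCHAIN) searches the whole attached receiver chain  */
    DSPJRN     JRN(JRNLIB/PRODJRN) RCVRNG(*CURCHAIN) +
                 FROMTIME('08/17/00' '090000') +
                 TOTIME('08/17/00' '110000') +
                 OUTPUT(*OUTFILE) OUTFILFMT(*TYPE1) +
                 OUTFILE(QTEMP/JRNHITS)

    /* Then, from STRSQL or Query/400, count the hits:  */
    /*                                                  */
    /*   SELECT JOLIB, JOOBJ, COUNT(*)                  */
    /*     FROM QTEMP/JRNHITS                           */
    /*     GROUP BY JOLIB, JOOBJ                        */
    /*     ORDER BY 3 DESC                              */
    /*                                                  */
    /*   SELECT JOJOB, JOUSER, JONBR, COUNT(*)          */
    /*     FROM QTEMP/JRNHITS                           */
    /*     GROUP BY JOJOB, JOUSER, JONBR                */
    /*     ORDER BY 4 DESC                              */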
Richard Jackson
mailto:richardjackson@richardjackson.net
www.richardjacksonltd.com
Voice: 1 (303) 808-8058
Fax: 1 (303) 663-4325

-----Original Message-----
From: owner-midrange-l@midrange.com [mailto:owner-midrange-l@midrange.com] On Behalf Of Shaw, David
Sent: Thursday, August 17, 2000 7:54 AM
To: 'MIDRANGE-L@midrange.com'
Subject: RE: Journal Receiver Question

-----Original Message-----
From: ORiordan_Paudie@emc.com [mailto:ORiordan_Paudie@emc.com]

<snip> Now our journal receiver fills up quite quickly, 2 to 3 times a day. This causes the DataMirror jobs to hog a lot of CPU, as they run at priority 20; we usually drop the priority on these jobs to 99, but this does not ease the problem. What I would like to know is how to figure out which journaled files populate the journal receiver so quickly every day and, if anybody is familiar with DataMirror, how to load-balance the subsystem for optimum performance during the day. Should we stick the DataMirror subsystem in its own shared memory pool, etc.? Any ideas?

------------------------------

To figure out which files are being updated so often, do a DSPJRN to an outfile for a typical hour or two, then query the outfile for counts of the number of "hits" per file.

I used to work with Transformation Server at my old job. If it's really consuming a substantial amount of your capacity, then isolating it in its own memory pool probably would reduce its impact on your other normal processes, and it may allow it to run more efficiently as well. This presumes, of course, that you have sufficient memory in the box to support this properly. We didn't run DTS in its own pool, although we did set the jobs' run priority to 50 and put them in the batch pool with other production batch jobs. It ran okay for us that way. We did have virtually all non-system jobs in pools other than *BASE, and that helped us quite a bit.

Dave Shaw
Spartan International, Inc.
Spartanburg, SC
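A minimal sketch of the pool isolation Dave describes, with hypothetical names throughout (the subsystem DMLIB/DMSBS, the pool size, the activity level, and the job qualifier) - size the pool from your own measurements, and make sure the memory exists before you carve it out of *BASE:

    /* Give the replication jobs their own shared pool; */
    /* SIZE is in KB, so 200000 is roughly 200 MB       */
    CHGSHRPOOL POOL(*SHRPOOL2) SIZE(200000) ACTLVL(20)

    /* Point the subsystem's first pool at it - jobs routed */
    /* to pool 1 of this subsystem now run in *SHRPOOL2     */
    CHGSBSD    SBSD(DMLIB/DMSBS) POOLS((1 *SHRPOOL2))

    /* Or simply lower the jobs' run priority, as Dave did */
    CHGJOB     JOB(123456/DMUSER/DMJOB) RUNPTY(50)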