I thank you all for the insight and great suggestions.

One question I did want to respond to was with regard to the number of
records that are updated.  The answer is: it varies.  It could be as
many as 100, or fewer than ten.  It all depends on what is being
shipped at the time.

Again, I appreciate all responses.

Brian.

-----Original Message-----
From: Rich Duzenbury [mailto:rduz-midrange@xxxxxxxxxxxxxxxxxxx] 
Sent: Wednesday, December 14, 2005 1:38 PM
To: RPG programming on the AS400 / iSeries
Subject: Re: Updating a Program using SETLL/READPE - Suggestions

On Wed, 2005-12-14 at 11:09 -0500, Brian Piotrowski wrote:
> Hi All,
> 
>  
> 
> We have a PF that has over two million records in it (and it grows
> daily), that uses a SETLL/READPE command.  I know there has been much
> debate as of late on the use of these commands, and I wanted to present
> a bit of code for your perusal and suggestions.  Right now, the system
> is very slow at updating records (sometimes in excess of five minutes),
> and I suspect it has something to do with this code:
> 
>      #SRUPD        BEGSR
>                    TIME                    TIME6             6 0
> *(SST22)
>      KEYT22        SETLL     SST22
>      *IN99         DOUEQ     *ON
>      KEYT22        READE     SST22                                  99
>    99              LEAVE
>                    MOVE      '1'           SRADNF
>                    Z-ADD     *DATE         SRADDT
>                    MOVEL     TIME6         SRADTM
>                    UPDATE    RSST22
>                    ENDDO
> 

I prefer dow to dou because there is one less test and branch.

 /free
    setll keyt22 sst22;
    reade keyt22 sst22;
    dow not %eof(sst22);
        SRADNF = '1';
        ...
        update RSST22;
        reade keyt22 sst22;
    enddo;
 /end-free


You could try swapping out READE for a plain READ, then test each key
yourself to see if that helps performance:

 /free
    setll keyt22 sst22;
    read sst22;
    dow not %eof(sst22) and key1 = fld1 and key2 = fld2;
        SRADNF = '1';
        ...
        update RSST22;
        read sst22;
    enddo;
 /end-free

You might want to encapsulate the read and test into a function, as in:
 /free
    setll keyt22 sst22;
    dow get_sst22_record(key1: key2);
        SRADNF = '1';
        ...
        update RSST22;
    enddo;
 /end-free

(get_sst22_record function)
 /free
    result = '0';
    read sst22;
    if (NOT %eof(sst22)) and key1 = fld1 and key2 = fld2;
        result = '1';
    endif;
    return result;
 /end-free

If this is updating a large set, you might consider using an SQL
statement and letting the system determine the best index to do the
update.  You don't say in your message how many records the READE loop
is processing, so it's difficult to say how much that would help.
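
For instance, a single set-at-a-time UPDATE might look roughly like the
sketch below.  SST22P, KEY1, KEY2 and the :k1/:k2 host variables are
placeholder names (the actual key fields behind KEYT22 aren't shown),
and :wrkdate/:wrktime would hold the same date and time values the RPG
loop assigns.  On releases without free-form SQL support, the same
statement goes between C/EXEC SQL and C/END-EXEC instead:

 /free
    // Sketch only: SST22P stands for the physical file's SQL name,
    // KEY1/KEY2 for the key columns behind the KEYT22 key list, and
    // :k1/:k2 for host variables holding the values being searched.
    exec sql
       update SST22P
          set SRADNF = '1',
              SRADDT = :wrkdate,
              SRADTM = :wrktime
        where KEY1 = :k1
          and KEY2 = :k2;
 /end-free

One statement like that replaces the whole READE/UPDATE loop and lets
the optimizer pick the access path.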

You might also consider doing some rudimentary profiling by having the
program write a timestamp to a file at the beginning and end of
important subroutines, so you can review the data and see where the
bottlenecks actually arise.
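
One way to sketch that, assuming a simple SQL-created log table
(PROFLOG and its column names below are made up for illustration), is
an insert at the top and bottom of each subroutine:

 /free
    // Hypothetical profiling hook: PROFLOG, PGMNAME, SUBRNAME, EVENT
    // and EVENTTS are placeholder names for a small log table.
    exec sql
       insert into PROFLOG (PGMNAME, SUBRNAME, EVENT, EVENTTS)
       values ('MYPGM', '#SRUPD', 'BEGIN', current timestamp);

    // ... body of the subroutine ...

    exec sql
       insert into PROFLOG (PGMNAME, SUBRNAME, EVENT, EVENTTS)
       values ('MYPGM', '#SRUPD', 'END', current timestamp);
 /end-free

Comparing the BEGIN and END timestamps for each subroutine shows where
the time is actually going.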

One more bit: I had a similar performance problem on a program just
last week.  I had 1,800 people, who had 40,000 accounts, which in turn
had 100,000 transactions.  My job was to find just a few 'important'
and recent transactions for just those people.

Initially, I coded a READE loop on the accounts and an inner READE on
the transactions, which made the job run for half an hour instead of
one minute.  While READE might have a performance issue, READ with a
test wasn't going to make it much better.

Instead, I worked backwards.  I went to the transaction file and copied
all recent important transactions to a work file, assigning the person
ID to them as I went.  In my case, I wound up with OPNQRYF, though I
also have a version in embedded SQL with little performance difference.

Then I updated the program to use the work file, which is keyed by
person ID, and the job now runs in almost linear time.  Run time is
about half a minute to grab the transactions and one minute to run the
rest of the job, rather than a half-hour dog.
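
As a rough illustration of that work-file step (the actual files
aren't shown, so TRNWRK, TRANS, ACCTS and every column name below are
placeholders), the copy could be done with one embedded SQL insert:

 /free
    // Illustration only: TRNWRK is the work file, TRANS the
    // transaction file and ACCTS the account file; all names and
    // columns here are assumed, not taken from the original message.
    exec sql
       insert into TRNWRK (PERSONID, ACCTID, TRNDATE, TRNAMT)
       select a.PERSONID, t.ACCTID, t.TRNDATE, t.TRNAMT
         from TRANS t
         join ACCTS a
           on a.ACCTID = t.ACCTID
        where t.TRNDATE >= :cutoff
          and t.IMPORTANT = 'Y';
 /end-free

With the work file keyed by PERSONID, the main program can then read
each person's transactions directly.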

The point is that, perhaps, you might also find another avenue to get
the update completed.

HTH

Regards,
Rich


