• Subject: RE: Dynamic arrays
  • From: "David Morris" <dmorris@xxxxxxxxxxxxx>
  • Date: Fri, 30 Apr 1999 11:33:36 -0600

Joel,

I have done some benchmarks on similar processing.  In my case the 
file had a lot more than 50 records (about 10,000), and I arbitrarily 
created a buffer that contained 128 key/data pairs.  I also stored the 
last-used key/data.  I saw the total processing time for this area of the 
code drop to 1/6 of a conventional indexed retrieval.  Creating a 
memory-resident file of frequently used records might have 
accomplished the same thing at a lower cost.
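
For illustration, here is a minimal sketch of that caching pattern.  It is 
written in later free-form RPG IV syntax for readability (the original would 
have been fixed-form), and the file ITEMMST, the fields itKey/itData, and 
the key and data sizes are all hypothetical:

     // Cache of 128 key/data slots, plus the most recently used pair.
     dcl-f ITEMMST keyed usage(*input);

     dcl-ds cache qualified dim(128);
        key  char(10);
        data char(50);
     end-ds;
     dcl-s cacheCnt int(10) inz(0);      // slots currently in use
     dcl-s lastKey  char(10);            // most recently used key ...
     dcl-s lastData char(50);            // ... and its data

     dcl-proc getData;
        dcl-pi *n char(50);
           key char(10) const;
        end-pi;
        dcl-s i int(10);

        // Fastest path: same key as the previous call.
        if key = lastKey;
           return lastData;
        endif;

        // Next: sequential search of the cache buffer.
        for i = 1 to cacheCnt;
           if cache(i).key = key;
              lastKey  = key;
              lastData = cache(i).data;
              return lastData;
           endif;
        endfor;

        // Cache miss: fall back to a conventional indexed retrieval,
        // and remember the result if a slot is free.
        chain key ITEMMST;
        if %found(ITEMMST);
           lastKey  = key;
           lastData = itData;
           if cacheCnt < %elem(cache);
              cacheCnt += 1;
              cache(cacheCnt).key  = key;
              cache(cacheCnt).data = itData;
           endif;
           return lastData;
        endif;
        return *blanks;                  // not on file either
     end-proc;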

I have been experimenting with different cache algorithms to see if I can 
improve this number.  In most cases the extra work is not worth the trouble; 
it is usually cheaper to buy a faster machine or more memory.  At this point 
my experiment is more of an academic pursuit.  Hopefully I can learn enough 
to understand when it really is worth the trouble.

David Morris


>>> Joel Fritz <JFritz@sharperimage.com> 04/30/99 10:31AM >>>
This looks like something worth benchmarking.  Conventional wisdom among
"experts" I know is that array processing can give you a real boost over file
I/O.  I think the important factors are how many lookups you need to do and
how you do each lookup, but it would be interesting to test.
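
As an aside on that second factor, the lookup method itself: on a sorted
array a binary search beats a sequential scan.  A hypothetical sketch in
later free-form RPG IV syntax, using the %LOOKUP and %SUBARR built-ins of
later releases (all names and sizes are made up):

     dcl-s keys      char(10) dim(500) ascend;  // declared ascending
     dcl-s numKeys   int(10);                   // elements actually loaded
     dcl-s searchKey char(10);
     dcl-s hit       int(10);

     // Keep the loaded portion sorted ...
     sorta %subarr(keys : 1 : numKeys);

     // ... so %LOOKUP can do a binary search over just that portion.
     hit = %lookup(searchKey : keys : 1 : numKeys);
     if hit > 0;
        // found at keys(hit)
     endif;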

-----Original Message-----
From: PaulMmn [mailto:PaulMmn@ix.netcom.com] 
Sent: Thursday, April 29, 1999 7:04 PM
To: MIDRANGE-L@midrange.com 
Subject: Re: Dynamic arrays


As an alternative, consider:

-- Create an array with a 'typical' number of records (i.e., your 'normal
max').

-- Load the array at the start of your program.  If there are more records
in the file than the (hard-coded) length of the array, set an indicator.

-- Set a variable with the number of records loaded into the array (in case
you have to do a sequential search of the array; you'll need to know where
to stop).

-- Look up using the array.  If found, OK.  If not found, chain to the file
to make sure.

If you load the array with the most-used records first, this will save file
accesses.  A sketch of the whole pattern follows.
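
Here is one way the steps above might look, sketched in later free-form
RPG IV syntax; the file WORKFIL, the fields wkKey/wkData, and all sizes
are hypothetical:

     dcl-f WORKFIL keyed usage(*input);

     dcl-s arrKeys   char(10) dim(100);  // the 'typical' max, hard-coded
     dcl-s arrData   char(50) dim(100);
     dcl-s loaded    int(10) inz(0);     // records actually loaded
     dcl-s overflow  ind inz(*off);      // more records than slots?
     dcl-s searchKey char(10);
     dcl-s found     ind;
     dcl-s i         int(10);

     // Load the array at the start of the program.
     setll *loval WORKFIL;
     read WORKFIL;
     dow not %eof(WORKFIL);
        if loaded < %elem(arrKeys);
           loaded += 1;
           arrKeys(loaded) = wkKey;
           arrData(loaded) = wkData;
        else;
           overflow = *on;               // file outgrew the array
           leave;
        endif;
        read WORKFIL;
     enddo;

     // Look up in the array first; 'loaded' tells us where to stop.
     found = *off;
     for i = 1 to loaded;
        if arrKeys(i) = searchKey;
           found = *on;
           leave;
        endif;
     endfor;

     // Not in the array and the array is incomplete: chain to make sure.
     if not found and overflow;
        chain searchKey WORKFIL;
        found = %found(WORKFIL);
     endif;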



As far as saving time by using arrays instead of files, I dunno.  With the
single-level store, and paging of unused stuff to disk anyway, when does a
disk access become more 'expensive' than a table lookup?



--Paul E Musselman
PaulMmn@ix.netcom.com 




>I have a work file (about 50 records) that is accessed very often by an
>RPG4 program. I was thinking about reading the file into an array to make
>lookups faster. The problem is the file changes in size; sometimes it's 50
>records, sometimes 60, or 40, etc. I'd like to create a dynamically-sized
>array at runtime. I briefly looked at the ALLOC/DEALLOC/REALLOC opcodes,
>but they really didn't make much sense.
>
>Is there a relatively easy way of creating dynamic arrays in RPG, or am I
>forced to create an arbitrary upper limit?
>
>Thanks,
>Loyd
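
A common answer to the question above is a based array: declare the array
with a generous DIM but BASED on a pointer, then allocate only the storage
actually needed and grow it as the file grows.  A minimal sketch in later
free-form RPG IV syntax, using the %ALLOC/%REALLOC built-ins that correspond
to the ALLOC/REALLOC opcodes mentioned (all names and sizes hypothetical):

     dcl-s pArr    pointer;
     dcl-s entries char(60) dim(32767) based(pArr);  // dim is only a ceiling
     dcl-s count   int(10) inz(0);    // elements in use
     dcl-s size    int(10);           // elements allocated
     dcl-s recordData char(60);       // stands in for the record's fields

     // Start with room for 64 entries.
     size = 64;
     pArr = %alloc(size * %size(entries));

     // As each record is read, grow the array when it fills up.
     if count = size;
        size *= 2;
        pArr = %realloc(pArr : size * %size(entries));
     endif;
     count += 1;
     entries(count) = recordData;

     // When finished with the array:
     dealloc pArr;

Since the file here only ever reaches 50 or 60 records, the simpler answer
is still a fixed array with a generous upper limit, as Paul describes above.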



