+ a gazillion...
;-)))
- sjl
Eric wrote:
BZZZZTTTT!!! Sorry! I'm definitely NOT in your camp on this...
I'm always careful to examine my code for efficiency, but NEVER at the
cost of needless obfuscation and complexity for the programmer. I might
have agreed with your statement about 30 years ago, when IBM midrange
systems sported about 16K of memory and processed at KHz speeds... Today,
processing inefficiencies are far less impactful than they once were. A rare
handful of high-performance routines might need to be coded for operational
efficiency, but that impacts the API design NOT AT ALL.
I'd hazard that, since you wrap the job number in a couple of days, your
applications submit MANY small jobs to the jobq for processing in batch. Each
job instance requires a significant allocation of resources to start up,
initialize, execute, and end... A potential performance improvement to your
design might be to use prestart jobs to service requests from a data queue,
as sketched below. This robust approach allows for several scalability
improvements, such as a queue monitor that starts additional PJs to service
a request backlog... Slick and performance ready!
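
(For illustration only, a minimal free-format RPG sketch of that kind of data
queue server: a never-ending job, started as a PJ or plain batch job, that
blocks on the QRCVDTAQ API. REQUESTQ, REQLIB, and processRequest are made-up
names, and the queue's maximum entry length is assumed to be 512 or less.)

ctl-opt main(serverLoop);

// Prototype for the Receive Data Queue API
dcl-pr QRCVDTAQ extpgm('QRCVDTAQ');
  dtaqName char(10) const;            // data queue name
  dtaqLib char(10) const;             // data queue library
  dataLen packed(5:0);                // length of entry received
  data char(512) options(*varsize);   // entry data
  waitTime packed(5:0) const;         // wait time (-1 = wait forever)
end-pr;

dcl-pr processRequest;
  request varchar(512) const;
end-pr;

dcl-proc serverLoop;
  dcl-s entryLen packed(5:0);
  dcl-s entry char(512);

  dow '1';                            // run until the job is ended
    QRCVDTAQ('REQUESTQ' : 'REQLIB' : entryLen : entry : -1);
    if entryLen > 0;
      processRequest(%subst(entry : 1 : entryLen));
    endif;
  enddo;
end-proc;

dcl-proc processRequest;
  dcl-pi *n;
    request varchar(512) const;
  end-pi;
  // application-specific request handling goes here
end-proc;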
-Eric DeLong
-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx]
On Behalf Of Robert Houts
Sent: Tuesday, August 21, 2012 10:02 PM
To: RPG programming on the IBM i / System i
Subject: RE: Converting Case with a Varying Field
Sorry to disagree with you, as I do greatly respect your opinion, but
processing efficiency ALWAYS trumps programmer efficiency. Any extra (read:
unnecessary) code is less efficient. The problem with most programmers
(besides laziness) is that they usually don't look at the big picture: the
same code running in many jobs and possibly being called millions of times
in a day. That "possibly negligible" amount of extra processing adds up in
a hurry. Our production system does a lot of work (we wrap the job number
every two to three days), so we need to remove inefficiency wherever we can.
It seems to me that however "very very fast" the bound calls are, calling
the API directly is still faster. And that's the bottom line. Teaching
programmers to value programmer efficiency over processing efficiency is a
huge mistake.
-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx]
On Behalf Of Barbara Morris
Sent: Tuesday, August 21, 2012 17:25
To: rpg400-l@xxxxxxxxxxxx
Subject: Re: Converting Case with a Varying Field
On 2012/8/21 7:03 PM, Robert Houts wrote:
... It
isn't rocket science and eliminates all overhead of the service
program. I guess that efficient processing is not a concern to you.
The Integrated Language Environment is optimized to make bound calls very
very fast, with the intention that programmers can have many small functions
that call each other rather than flattening out the functions to reduce the
number of calls. So the cost of calling the two wrapper functions could be
negligible compared to the actual upper-casing work.
And I think programmer-efficiency is a factor here. Even if you set up the
copy file so all the other parameters can be used without further setup,
coding the call to the API is still more complex and error-prone than
calling the wrapper.
upper_value = CvtToUpper(some_value);
vs
QlgConvertCase(QlgConvertCase_To_Upper
: some_value
: QlgConvertCase_Temp_Output
: QlgConvertCase_Errcode_Give_Exception
: QlgConvertCase_Job_CCSID);
upper_value = %subst(QlgConvertCase_Temp_Output
: 1 : %len(some_value));
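
(For illustration, a minimal free-format sketch of what such a wrapper might
look like inside a service program. The 1000-byte cap, the field names, and
the request control block layout are assumptions taken from the QlgConvertCase
API documentation, not from Barbara's actual copy file.)

ctl-opt nomain;

dcl-pr QlgConvertCase extproc('QlgConvertCase');
  frcb char(22) const;                 // function request control block
  input char(1000) const options(*varsize);
  output char(1000) options(*varsize);
  dataLen int(10) const;
  errCode char(8) options(*varsize);
end-pr;

dcl-proc CvtToUpper export;
  dcl-pi *n varchar(1000);
    inValue varchar(1000) const;
  end-pi;

  dcl-ds frcb;
    *n int(10) inz(1);                 // request type 1 = convert by CCSID
    *n int(10) inz(0);                 // CCSID 0 = use the job CCSID
    *n int(10) inz(0);                 // case request 0 = to uppercase
    *n char(10) inz(*allx'00');        // reserved
  end-ds;

  dcl-ds errCode;
    *n int(10) inz(0);                 // bytes provided 0 = raise exceptions
    *n int(10) inz(0);                 // bytes available
  end-ds;

  dcl-s inBuf char(1000);
  dcl-s outBuf char(1000);

  if %len(inValue) = 0;
    return '';
  endif;

  inBuf = inValue;
  QlgConvertCase(frcb : inBuf : outBuf : %len(inValue) : errCode);
  return %subst(outBuf : 1 : %len(inValue));
end-proc;

With all the API plumbing buried once in the service program, every caller
gets the one-line CvtToUpper call shown above.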
If I were maintaining this code, I'd much rather see the expressive call to
CvtToUpper than the somewhat obfuscated call to the API. Even though I am
completely familiar with this API, it's not immediately obvious which
parameter is the "real" one, some_value.