There are several ways that this can be implemented.  But to do a meaningful
job of continuously improving performance, I really see no way other than
getting real about using the tools provided by IBM to manage performance, and
that can be damn near a full time job.  Does your CEO want this badly enough
to hire another IT person whose exclusive job will be performance management?
There are a bunch of 3rd party outfits that provide software to capture the
IBM performance data & put it into fancy graphs & charts ... although I doubt
that a non-technical person is going to comprehend their significance.
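
Even without Performance Tools, there are commands that come free with
OS/400 that give you a real-time window into this ... a starting sketch,
assuming you have the authority to run them:

  WRKSYSSTS   /* storage pools, page fault rates, % CPU used    */
  WRKDSKSTS   /* disk arms & how busy each one is               */
  WRKACTJOB   /* active jobs & who is eating the CPU right now  */

The Performance Tools licensed program builds on the same data behind these.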

You might contact JDE user groups ... I do not know any URLs ... to ask what
kind of response time seems to be typical in that industry.

One way is to get aggressive about kicking people off the system.
For example, suppose the reason person-A had to wait 2 minutes was that
person-B was updating the same record for a different reason.  You could
solve that by going from 250 users on the system at the same time to 10
people at a time & mandating that people work 3 office shifts & weekends.
I do not consider that to be a realistic solution, but I mention it so that
you can mention it & people will be horrified & eager to accept some other
solution that they might not have considered if you had not first suggested
the horrible scenario.

Get your CEO's request in writing & use it to authorize a visit by your local
IBM partners.  What you want from the partners are their pricing estimates to
upgrade your memory & processor speed & what this will cost the company long
term.  For an investment of a few thousand dollars, depending on your AS/400
model, you can have a dramatic increase in overall performance.  Another
thing you want to check is whether you have the right model for your current
usage.
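
If you are not sure what you have today, the box itself will tell you ... a
small sketch (these are standard system values):

  DSPSYSVAL SYSVAL(QMODEL)     /* model number of this machine  */
  DSPSYSVAL SYSVAL(QPRCFEAT)   /* processor feature code        */

Your IBM partner can map those against the interactive & batch capacity you
actually bought.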

AS/400s are optimized for batch, interactive, web, PC connectivity, etc.
Companies buy an AS/400 based on the needs of the moment.
Application connections evolve.
So you might not necessarily have the best model for your current reality.
The natural time to switch is every 3-5 years, when you replace the AS/400
with newer technology anyway, but perhaps the CEO is willing to spend
$250,000 or whatever to do that right now.

A few years ago, my CEO was inundated by complaints about performance, after
I had warned people that a certain upgrade would hurt performance for some
users, but I did not know by how much.  He came to ask me if a faster
communication line to a certain remote site would help their performance.  I
told him that we could not be sure without me doing some analysis that would
take several days ... could I get back to him on this after I do that
analysis?  He accepted that.  A few weeks later I reported back to him with
the results of my performance analysis.

I told him that I had logged the causes of various kinds of slow-downs &
that only 10% of the time was a remote site person kept waiting because the
communication line was filled to capacity with traffic & more load was
waiting to travel.

I told him that the # 1 cause of people being held waiting was contention
for stuff on the hard disk & that this was global, affecting people at all
sites.  I explained to him what memory cache is & how more memory for cache
can mean less demand on the hard disk & faster access to the data.  IBM says
that when we have at least 2 successful hits on cache for every 1 miss, the
cache is effective for our applications.  We were getting a 7 to 1 ratio ...
7 hits out of every 8 accesses, or about 87% served straight from cache,
versus the 67% that IBM's 2-to-1 rule of thumb calls for ... meaning very
high re-use of access to the same item #s & customer #s & etc.  So buying
more memory would be the # 1 most useful thing to help performance of all
users, including the remote site that interested him.
I showed him pricing on more memory.
He approved the purchase of more memory.
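
If you want to eyeball the same kind of pressure on your own box, the quick
& dirty sketch (not a substitute for the several days of analysis) is:

  WRKSYSSTS   /* watch the DB & non-DB page fault columns per pool ...   */
              /* consistently high fault rates = pool starved for memory */

Press F5 to refresh it a few times during the busy part of the day & see
which pools are faulting heavily.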

Richard Reeve said
>  I would start by setting the system value(s) relating to machine pool
>  size, max active and a few others.
Ideally you need to change ONE thing & evaluate the impact before you change
another ONE thing, because some changes can be counter-productive & need to
be changed back.

Now, on the AS/400 that I am managing, I have the whole thing set up for
automatic performance adjustment ... it tunes itself.
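
That auto-tuning is controlled by one system value ... a sketch of turning
it on (check the help text on your release for the exact values):

  WRKSYSVAL SYSVAL(QPFRADJ)              /* see the current setting     */
  CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')   /* '2' = adjust at IPL & keep  */
                                         /* adjusting automatically     */

With that on, OS/400 moves memory between pools & adjusts activity levels
on its own.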

Mike.Crump writes:

> My thought is that you use another metric for
>  response time.  This assumes you can get someone to buy into the idea that
>  sometimes response time is an application problem or design issue.  I think
>  this is real critical.   What we measure is the 'distribution curve of
>  response times'.  Everyone is going to be slightly different but what we
>  measure is the standard 4 buckets: 0-1 second, 1-2 seconds, 2-4 seconds,
>  4+ seconds.  95% of our transactions fall within sub-second, 98% within 2
>  seconds, and so on.   Right now I forget how we track that but I am pretty
>  sure it is a component of performance tools.
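
(For what it is worth, I believe those buckets come out of the Performance
Tools reports ... something along these lines, where the member name is just
a placeholder & the exact parameters vary by release:

  STRPFRMON MBR(MONDAY) LIB(QPFRDATA) TRACE(*ALL)  /* collect data, with */
                                                   /* trace data for the */
                                                   /* transaction report */
  ...after the collection period ends...
  PRTTNSRPT MBR(MONDAY) LIB(QPFRDATA)   /* transaction report, including */
                                        /* the response time distribution */

Check the Performance Tools manual before trusting my memory on this.)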

I entirely agree with Mike.
There is only so much you can do to optimize your AS/400 to process the data
efficiently.  But when you have databases with large numbers of dead records,
& software that reads through all the dead records to get to the good
records, the obvious solution is to clean out the dead records & to
prioritize which software needs adjustment.

I have been rather aggressive in the last year attacking dead records.
The GO DISKTASKS report tells me which are the biggest files.  Some of them
NEED to have all that data, while I am finding some for which our ERP has
made no provision to get rid of records once we are done with them, so we
have old unwanted records going back for years.  Establish software to clean
out files on a regular basis & you get better performance in access to those
files & also healthier disk space.  There is also the topic of garbage in
databases that can contaminate good data.
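
For the mechanics of it ... deleted records show up on DSPFD, & RGZPFM
clears them out.  A sketch, with placeholder file & library names:

  DSPFD FILE(MYLIB/MYFILE) TYPE(*MBR)       /* current vs deleted record  */
                                            /* counts for each member     */
  RGZPFM FILE(MYLIB/MYFILE)                 /* reorganize ... physically  */
                                            /* removes deleted records    */
  CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)   /* optionally reuse deleted   */
                                            /* record space from now on   */

RGZPFM wants the file all to itself, so schedule it for off hours or
weekends.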

I ask my users to let me know when they experience sluggish performance &
whether it seems to be OVERALL in everything they do, or whether there are
specific programs that seem to be sluggish while everything else is running
fine.
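
When they do report one, if I can catch the job while it is still slow, a
quick sketch of what I look at first:

  WRKACTJOB   /* find that user's job ... check its status (a lock wait  */
              /* shows up right in the status column), CPU %, & so on    */

then option 5 (Work with job) against the suspect job to see what it is
actually doing.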

Several months ago I implemented a major modification for data entry.
The end users are now able to do this application using 1/3 as many
keystrokes per transaction.  They love it.  How did I manage that?
We studied the input screen ... which of these fields are never entered,
rarely entered, almost always the same stuff?
We ended up with the cursor skipping some fields, with a command key to get
at the rarely used fields if needed.
We altered the clear screen subroutine so that some screens retained a copy
of the data keyed in on the prior entry ... easy enough for them to
field-exit a field if in fact it needs to be changed.
There was a particular screen they could not use because the data processing
lacked some flexibility they needed.  Fixing that was part of the
modification effort.

The point to be made here is that in the processing of any given transaction,
the end user might have 100 keystrokes, then press enter & the computer does
its thing ... you can do stuff to help the computer do its thing faster, but
we are still constrained by the end user having to do 100 keystrokes per
transaction ... if you can manipulate the screen rules & screen design to cut
that down significantly, then that can have a major impact on the performance
of those users, and also their accuracy.

We also looked at how they know they keyed in the right stuff, and made some
improvements to the visibility of what they had keyed in.

> From: prumschlag@phdinc.com
>
> I have been asked to come up with a plan so that no user will ever have to
> wait more than 30 seconds for an AS/400 interactive response.  The request
> (from the company president) was based on a completely out-of-context
> observation of one user who had to wait 2 minutes for a response to one
> particular screen on one occurrence.  The president's intent is good, he
> just does not know what he is asking for.
>
> Because I don't believe his request can be or should be satisfied as he
> worded it, I am planning to reshape it into an initiative to monitor both
> average and longest response times, set goals (not guarantees) for both,
> be able to explain exceptions, and propose a series of solutions that are
> most cost effective.  I will report this to him on a monthly basis.
> Sounds pretty noble, huh?
>
> Just for the record, Ops Navigator shows that throughout the day our
> average response time is normally under 2 seconds, and often under 1
> second.  We are running JDE World on a 730 dual processor.
>
> I am sure there are hundreds of ways to approach this (bigger processor,
> more memory, more disks, better management of file sizes, better
> scheduling of batch jobs, LPAR(?), separate test box, programming changes,
> yada, yada, yada.).
>
> Here is my question (finally).  Other than pulling out the Performance
> Tuning manuals, is there a quicker/easier/better way to approach this?
> Remember, my goal is to develop meaningful performance measures and be
> able to identify solutions to performance problems.
>
> Thanks.
> Phil


MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)
BPCS 405 CD Manager / Programmer @ Global Wire Technologies Incorporated
http://www.globalwiretechnologies.com = new name same quality wire
engineering company: fax # 812-424-6838

