Joe Pluta wrote:
You're comparing a multi-user multi-purpose server box to a dedicated
controller.  The iSeries IS NOT MEANT TO BE A DEDICATED PROCESSOR.
Whereas, a PC or even better a Linux box is perfect for that, because
the OS sits right on the metal, doesn't have to worry about silly things
like security and interactive users and I/O processors - there's really
no overhead.
No, Joe.  Comparing OS/400 and Unix/Linux/*BSD is comparing a multi-user 
multi-purpose server box to another multi-user multi-purpose server box. 
Both are machines that handle thousands of simultaneous users.  It is 
an outright lie to say that Unix-like machines don't "have to worry 
about silly things like security and interactive users".  The OS with 
probably the best security history in the world is OpenBSD, and they 
have the facts to prove it.
I can't make myself any clearer than that - comparing performance on a
dedicated PC as a web server to an iSeries, even a lightly loaded
iSeries, is a fundamentally flawed comparison.
No, it isn't.  If you compare web serving on one machine to ODBC 
performance on another, then yes, that is flawed.  But if your tests are 
done in a controlled environment and ask both machines to accomplish the 
same goals, then you can make good measurements about each machine's 
performance as it relates to those goals and tests.
Therefore I propose that we as a community do some (very) informal 
benchmarks.  There is some discussion on network performance.  Let's 
find out in an informal way if there is some truth to the statements 
that network performance is slow.  These benchmarks should be taken with 
an enormous grain of salt (after all, there are lies, d*** lies, and 
benchmarks) but would nevertheless be insightful (at least we would have 
some good fodder for discussion).  The tests might have these criteria:
1.  a recent version of OS/400 (say V5 or later)
2.  full disclosure of network settings and topology
3.  dedicated system (and nothing running in batch - like the nightly close)
4.  other systems tested should follow the same requirements above.
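To keep the runs comparable across machines, we could all use the same trivial script.  Here is a minimal sketch in Python of the kind of measurement I have in mind: push a fixed number of bytes over a TCP connection and time it.  The payload size, byte count, and loopback address are all arbitrary choices of mine, not part of any proposal so far; a real run would put the sender and receiver on separate machines (and serious numbers would come from something like netperf or ttcp).

```python
# Hypothetical throughput sketch: server pushes TOTAL_BYTES to a client,
# client times how long the transfer takes.  Loops back on localhost here
# purely for illustration; real tests would cross the actual network.
import socket
import threading
import time

PAYLOAD = b"x" * 65536           # 64 KiB chunks (arbitrary choice)
TOTAL_BYTES = 16 * 1024 * 1024   # 16 MiB per run (small, for a quick demo)

def serve(srv):
    """Accept one connection and send TOTAL_BYTES, then close."""
    conn, _addr = srv.accept()
    with conn:
        remaining = TOTAL_BYTES
        while remaining > 0:
            sent = conn.send(PAYLOAD[:remaining])
            remaining -= sent

def measure():
    """Return (bytes_received, elapsed_seconds) for one transfer."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ephemeral port; replace with the test box
    srv.listen(1)
    port = srv.getsockname()[1]
    sender = threading.Thread(target=serve, args=(srv,))
    sender.start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    received = 0
    start = time.perf_counter()
    while received < TOTAL_BYTES:
        chunk = cli.recv(65536)
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.perf_counter() - start

    cli.close()
    sender.join()
    srv.close()
    return received, elapsed

if __name__ == "__main__":
    nbytes, secs = measure()
    print("%d bytes in %.3f s = %.1f MB/s" % (nbytes, secs, nbytes / secs / 1e6))
```

The point is only that every machine tested runs the identical transfer with full disclosure of the settings used, per criteria 2 and 3 above.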
Of course this won't measure how well each machine handles multiple 
users signed on, but for this test we are interested in maximizing 
network performance.  We should also recognize that network performance 
alone does not determine the "goodness" of a particular machine.
Any thoughts?
James Rich
This mailing list archive is Copyright 1997-2025 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature.