Buck,
No, it's still a single AS/400. You mention the obvious ones like physical
failures, but I believe you also have to bring the entire system down if you
bring down the primary partition.

Joe,
Um, maybe my calculator is broken (it is made by Microsoft, after all), but 87
hours of downtime is 99%, not 90%. 90% uptime is 36.5 DAYS of downtime. 3.65
DAYS is 87 hours, and that is 99% uptime. Any admin who has a machine down more
than 87 hours in a year is in trouble.
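For anyone who wants to double-check the arithmetic without a Microsoft calculator, here is a minimal sketch (assuming a 365-day, 8760-hour year) that converts an uptime percentage into hours of downtime per year:

```python
# Downtime allowed per year for a given uptime percentage.
# Assumes a 365-day year (8760 hours); leap years shift this slightly.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year at the given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for pct in (90.0, 99.0, 99.9):
    print(f"{pct}% uptime -> {downtime_hours(pct):.1f} hours down/year")
```

At 99% the result is about 87.6 hours (roughly 3.65 days); at 90% it is 876 hours, or 36.5 days.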

Mike,
In W2K clustering you still lose the active jobs on the node that went
south. Very similar to AS/400 high availability. HOWEVER, given that there
is no such thing as an active job in web land, since the client disconnects
after each request, you'd really only lose those connections that were
actually running a process at that moment in time. If the user was looking
at their screen deciding whether to buy Cisco or IBM, they'd never know the
node went south.
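The point about web land being stateless can be sketched in a few lines. This is a toy illustration, not W2K's actual failover mechanism: the node names and the simulated failure are hypothetical, and the "cluster" is just a list of hosts the client tries in order.

```python
# Toy sketch of stateless failover: each request is independent, so a
# client simply retries against a surviving node when one goes south.
# Node names are hypothetical; "node-a" simulates the failed node.

def handle_request(node: str, request: str) -> str:
    if node == "node-a":              # pretend this node just went down
        raise ConnectionError(f"{node} is down")
    return f"{node} served {request}"

def send(nodes: list[str], request: str) -> str:
    """Try each node until one answers; the user never sees the dead one."""
    for node in nodes:
        try:
            return handle_request(node, request)
        except ConnectionError:
            continue                   # fail over to the next node
    raise RuntimeError("all nodes down")

print(send(["node-a", "node-b"], "GET /quote?symbol=IBM"))
```

Only a request that was in flight on the dead node at the instant of failure is lost; the user staring at their screen issues their next request and lands on a healthy node without noticing.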

-Walden

-----Original Message-----
From: Buck Calabro [mailto:Buck.Calabro@commsoft.net]
Sent: Wednesday, October 03, 2001 6:05 PM
To: midrange-l@midrange.com
Subject: RE: Interesting Assertions on W2K Stability


>But why are they down?  Are their servers down?  Their ISP connections?
>Hacked DNS?  Human error in updating the pages or configuration?  When
>sites like IBM or MSN are down, it is usually not due to the servers but
>some other factor.  I doubt that either uses a single server.  If it is a
>mission critical application that can never be down, you should have at
>least 2 servers and two internet connections with two ISPs, which is
>impossible.  A load balanced server farm by design would give you better
>uptime than any single server solution.  Nothing can be running all the
>time.

This is the way that telephone switches generally work.  There are two
processors (LPAR?) with separate software loads, often at different release
levels.  That way neither a hardware issue nor a software issue can bring
the switch down.

To bring this back on topic for the 400, would an LPARd machine running V5R1
and V4R5 be able to avoid the single point of failure problem, or would you
really need two separate cabinets?  Take destruction of the machine room by
fire, flood or other catastrophe out of the equation for the purposes of
this discussion.

I believe that the number one cause of failure in modern computing systems
is not hardware (cheap or not) but software.  No, I cannot back this up with
published numbers, but I seem to recall far more fixpacks issued to repair a
given issue than I recall hardware replacements.  The recent disk drive
scare is an exception to the norm.
  --buck
_______________________________________________
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@midrange.com
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/cgi-bin/listinfo/midrange-l
or email: MIDRANGE-L-request@midrange.com
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.

