Hi again Dr. Larry

Thinking on this further, what's the best way to set this up? Or at least,
what have you discovered so far?

If I read you correctly, you are configuring one dedicated shared pool on
the hosting partition to act as the buffer for ALL NWSDs defined on the
system. Is that correct?

Is there any benefit in creating a separate dedicated pool per LPAR? Or
maybe one pool for a heavily used LPAR, separate from the other NWSDs?

Are there any IBM references or documents you are aware of on this subject?


On Thu, Apr 18, 2013 at 8:11 AM, DrFranken <midrange@xxxxxxxxxxxx> wrote:

When you create an NWSD in IBM i for a guest partition, you may select a
shared storage pool to be used for I/O buffering. Using the POOL()
parameter of the NWSD you specify one of *SHRPOOL1 through *SHRPOOL60.
That pool must be specifically configured with ACTLVL(*DATA), and it may
NOT be used by ANY subsystem. The memory assigned to this pool is the I/O
buffer pool; think of it as the RAID write cache, if you will.

It doesn't need to be huge. On the server with 30 client partitions it's
set at 4 GB. I usually start with about 2 GB.
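To make the setup above concrete, here is a minimal CL sketch. The pool number, sizes, and NWSD name are illustrative assumptions, not Larry's actual configuration; verify the exact parameter values for your IBM i release.

```
/* Sketch only: *SHRPOOL2, the sizes, and GUEST1 are examples.        */

/* Reserve about 2 GB (SIZE is in kilobytes) in shared pool 2 and set */
/* the activity level to *DATA, as described above. The pool must not */
/* be used by any subsystem.                                          */
CHGSHRPOOL POOL(*SHRPOOL2) SIZE(2097152) ACTLVL(*DATA)

/* Point the guest partition's NWSD at that pool for I/O buffering.   */
CRTNWSD NWSD(GUEST1) RSRCNAME(*AUTO) TYPE(*GUEST) POOL(*SHRPOOL2)
```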

Note that this can be used for iSCSI clients (Windows and VMware) as well.

- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 4/17/2013 3:48 PM, Evan Harris wrote:

Hi Dr

Can you please expand on Rule 6 or point me to some reference links?

On Thu, Apr 18, 2013 at 4:57 AM, DrFranken <midrange@xxxxxxxxxxxx>
wrote:

I have done both, many times.

My largest IBM i host partition currently has over 30 client LPARs (a
couple of Linux, a couple of AIX, the rest IBM i). The host partition uses
5913 RAID cards and 24 15K SAS disk units, on POWER7 hardware.

Performance is excellent.
Rule 1) Have enough virtual arms.
Rule 2) Have enough physical arms.
Rule 3) Have enough virtual arms.
Rule 4) Have enough vSCSI links.
Rule 5) Have enough virtual arms.
Rule 6) Configure an I/O Buffer storage pool.
Rule 7) Have enough virtual arms.
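For the "enough virtual arms" rules, the practical translation on an IBM i host is to carve each client's storage into several network server storage spaces rather than one large one, so the guest sees multiple virtual disk arms it can drive in parallel. A hedged sketch, where the object names, sizes, and NWSD name are illustrative:

```
/* Sketch only: names and sizes are examples. Create several smaller  */
/* storage spaces instead of one big one (NWSSIZE is in megabytes).   */
CRTNWSSTG NWSSTG(GUEST1D1) NWSSIZE(51200) FORMAT(*OPEN)
CRTNWSSTG NWSSTG(GUEST1D2) NWSSIZE(51200) FORMAT(*OPEN)
CRTNWSSTG NWSSTG(GUEST1D3) NWSSIZE(51200) FORMAT(*OPEN)
CRTNWSSTG NWSSTG(GUEST1D4) NWSSIZE(51200) FORMAT(*OPEN)

/* Link each storage space to the guest's NWSD.                       */
ADDNWSSTGL NWSSTG(GUEST1D1) NWSD(GUEST1)
ADDNWSSTGL NWSSTG(GUEST1D2) NWSD(GUEST1)
ADDNWSSTGL NWSSTG(GUEST1D3) NWSD(GUEST1)
ADDNWSSTGL NWSSTG(GUEST1D4) NWSD(GUEST1)
```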

I have several customers with VIOS on internal disks hosting IBM i
clients.

1) IMHO, performance is not as good.
2) As for maintenance of internal disks in VIOS, well, David would clobber
me for what I really think, so I'll just say it's 'sub-optimal.'
3) IBM Service isn't anywhere NEARLY as schooled up on VIOS as on IBM i.
In one case this caused my customer to lose three IBM i client partitions.
Long story, ugly.

I do an entire presentation comparing VIOS and IBM i as a host
partition. You didn't want to hear all that so I'll quit now. :-)


- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 4/17/2013 12:34 PM, Kirk Goins wrote:

I am curious about the performance differences, if any, when hosting
IBM i on IBM i vs IBM i on VIOS. Are the two methods about the same?
Wildly different? Do I need more memory or CPU, generally speaking, for
one vs the other, etc.?

I don't need or want a discussion of the features/capabilities of each
method, just how well the guest partitions perform. Looking for real-life
experiences and any book-type references.

Thanks

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.
