Larry,

I totally understand about the high IOPS with SSDs.
Any config with SSDs will see a vast improvement.
We have large batch processes with hundreds of open files and billions, maybe trillions, of I/Os.
3 to 4 hour run times.
I'm constantly trying to reduce the run time of these lengthy batch processes.
I spend a lot of time researching all the disk I/O options for optimal performance, to reduce run times.

Paul


-----Original Message-----
From: DrFranken [mailto:midrange@xxxxxxxxxxxx]
Sent: Wednesday, December 06, 2017 11:45 PM
To: Steinmetz, Paul; 'Midrange Systems Technical Discussion'
Subject: Re: Power7 / Power8 internal disk differences - future disk planning

Yes, true, and with large numbers of disks we do this, especially with spinny disks. Remember that the large-cache controllers today will support as many as 96 drives (four 24-drive drawers), and in such a case I would certainly want each RAID card driving half the drives. If that was 4 or 6 or 8 RAID sets, yes, true.

With the SSDs, though, the IOPS are silly high: over 6,000 for one SSD vs. 150 for a 15K spinny (40 to 1!). So the rules change for SSDs, not so much due to getting absolute maximum IOPS, but because the numbers are so high they almost don't matter. Since SSDs are more expensive as well, saving a drive here or there is easily justified compared to the loss of that one drive's IOPS.

- Larry "DrFranken" Bolhuis

www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.

On 12/6/2017 11:02 PM, Steinmetz, Paul wrote:
Larry,

Referencing V7R3 Disk Management manual.

Performance
Parity sets optimized for performance provide the fastest data access.
The IOA may generate more parity sets with fewer disk units each. For example, if an IOA had 15 disk units and is optimized for performance, the result might be three parity sets with five disk units each.

When in a dual storage IOA configuration, the system attempts to create an even number of parity sets.
An even number of parity sets distributes the workload evenly between
a pair of adapters which are in a dual storage IOA configuration. This
provides the fastest data access since each adapter has a piece of the workload.
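The manual's rules above can be sketched as a little back-of-the-envelope function. This is not IBM's actual parity-set algorithm, just an illustration of the two rules quoted: more, smaller sets when optimizing for performance, and an even set count when a dual storage IOA pair splits the workload:

```python
# Illustration only: the manual's performance-optimization example
# (15 drives -> 3 parity sets of 5) and the dual-IOA even-set rule.
def split_parity_sets(drive_count, set_size=5, dual_ioa=False):
    """Return a parity-set count per the two rules quoted above."""
    sets = drive_count // set_size
    if dual_ioa and sets % 2:
        sets -= 1  # round down to an even count so each adapter gets half
    return sets

print(split_parity_sets(15))                 # 3, as in the manual's example
print(split_parity_sets(15, dual_ioa=True))  # 2, to keep the pair balanced
```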

Paul

-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of
Steinmetz, Paul
Sent: Wednesday, December 06, 2017 10:14 PM
To: 'DrFranken'; 'Midrange Systems Technical Discussion'
Subject: RE: Power7 / Power8 internal disk differences - future disk
planning

Larry,

Interesting.
I don't remember from where, but ages ago I learned to always try for an even number of RAID sets.
I think it was for performance reasons: more sets, better performance.
Maybe that rule isn't true any longer.

Paul

-----Original Message-----
From: DrFranken [mailto:midrange@xxxxxxxxxxxx]
Sent: Wednesday, December 06, 2017 10:05 PM
To: Steinmetz, Paul; 'Midrange Systems Technical Discussion'
Subject: Re: Power7 / Power8 internal disk differences - future disk
planning

First, I would do one RAID set. Definitely would do the hot spare.

Two RAID sets lose two drives for no gain that I can see.

In a Power8 system that will have IBM i accessing the drives, I'd likely prefer the 12 drives over the 8. Remember you'll have one less due to the hot spare, so 8 leaves you with only 7 arms, which is just over the minimum of 6 for IBM i performance. With one RAID set you could drop to 11 drives and match the capacity of 12 with two.

- Larry "DrFranken" Bolhuis

www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.

On 12/6/2017 9:47 PM, Steinmetz, Paul wrote:
Larry,

If you needed about 7 TB, would you use
12 #ES8R - 775GB SFF-3 SSD 4k eMLC4 for IBM i or
8 #ES8W - 1.55TB SFF-3 SSD 4k eMLC4 for IBM i

RAID 5 - 2 parity sets, 1 hot spare.

In each case, you lose 3 units: two to RAID, one to the hot spare.

Pros/cons if any.
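For reference, the capacity and arm-count arithmetic behind the two options works out as below. This is a rough sketch only, assuming the stated layout (2 parity sets plus 1 hot spare costs three drives' worth of capacity), not a sizing tool:

```python
# Rough comparison of the two SSD options under the stated RAID-5 layout:
# 2 parity sets + 1 hot spare = 3 drives' capacity lost in each case.
def usable(drives, size_gb, parity_sets=2, hot_spares=1):
    """Return (usable capacity in GB, active arms) for a RAID-5 layout."""
    lost = parity_sets + hot_spares
    return (drives - lost) * size_gb, drives - hot_spares

print(usable(12, 775))   # ES8R option: 6975 GB across 11 active arms
print(usable(8, 1550))   # ES8W option: 7750 GB across 7 active arms
```

The 8-drive option actually yields a bit more usable capacity, but with 4 fewer arms, which matters for the IBM i arm-count minimum discussed earlier in the thread.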

Paul

-----Original Message-----
From: DrFranken [mailto:midrange@xxxxxxxxxxxx]
Sent: Wednesday, December 06, 2017 5:11 PM
To: Midrange Systems Technical Discussion; Steinmetz, Paul
Subject: Re: Power7 / Power8 internal disk differences - future disk
planning

See in line comments.

- Larry "DrFranken" Bolhuis

www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.

On 12/6/2017 4:38 PM, Steinmetz, Paul wrote:
Larry,

1) My current SSD are ES0H, announced November 2013 - IBM i 7.1 Technology Refresh 7 and IBM i 6.1 Additional Enhancements.

775 GB SFF SSD with eMLC (#ES0E, #ES0F, #ES0G, #ES0H) The new 775 GB SFF SSD, using the same technology as the new higher performance 387 GB disk drive, doubles the capacity that can fit in a single slot. IBM i support is provided for both POWER7 and POWER7+ servers, and for both IBM i 7.1 and IBM i 6.1 with 6.1.1 machine code.

The documentation doesn't always say whether they are 5XX or 4K.
How does one determine this?
I'm using FC #5913, so from your previous post below, I'm concluding they are 5XX.

Yes, that is correct; those are 5XX. Normally, if the documentation does not specify, they are 5XX; if they are 4K, it will say so.



2) Will these ES0H disks work on Power8 in an EXP24S, or possibly Power9 with an EXP24S or EXP24SX?

Yes to Power8 in an EXP24S, but if they are in the system unit today they will need new sleds to fit in there. Since you are using a 5913, I expect they are in SFF-2 sleds today, so that will work.

'Likely' to Power9 BUT because that stuff isn't announced yet we cannot be certain.


3) Based on your previous post ("4K disks are better because they cut the number of I/Os by 8"), for future planning we should get rid of the 5XX and plan on using 4K, correct?

Indeed. And you also get a refreshed warranty on the new drives, which is a big money saver with SSDs. Despite the lower number of raw I/Os, the amount of data moved isn't significantly lower, so actual performance gains are minimal. However, all disks are moving to 4K blocks going forward, and I would guess at some point IBM i will drop support for 5XX-block disks.


4) With that said, future planning for disks.

Production - for top performance
#ES8D - 775GB SFF-2 SSD 4k eMLC4 for IBM i
#ES8G - 1.55TB SFF-2 SSD 4k eMLC4 for IBM i
#ES8R - 775GB SFF-3 SSD 4k eMLC4 for IBM i
#ES8W - 1.55TB SFF-3 SSD 4k eMLC4 for IBM i

R&D - improvement over current 10K spinny, but using the cheaper SSD.
#ES84 - 931 GB Mainstream SAS 4k SFF-3 SSD for IBM i
#ES8Z - 931 GB Mainstream SAS 4k SFF-2 SSD for IBM i
#ES93 - 1.86 TB Mainstream SAS 4k SFF-3 SSD for IBM i
#ES97 - 1.86 TB Mainstream SAS 4k SFF-2 SSD for IBM i
#ESE2 - 3.72 TB Mainstream SAS 4k SFF-3 SSD for IBM i
#ESE8 - 3.72 TB Mainstream SAS 4k SFF-2 SSD for IBM i

"Improvement over 10K spinny." Yes it will!! There's your candidate
for 'understatement of the year'. :-)


5) However - I did read this link that scares me a bit with the performance of large SSD drives, especially if they are Mainstream.

An interesting question about SSD performance scaling with size


https://www.ibm.com/developerworks/community/blogs/svcstorwize/entry/An_interesting_question_about_SSD_performance_scaling_with_size?lang=en

An interesting read! The biggest issue, I think, for IBM i is doing something silly like getting two massive SSDs and mirroring them.
Problem is that IBM i wants arms, and even these incredibly fast arms are held back by IBM i and shallow I/O queues. Plus, the big drives are more bucks per TB at this point, and you will have slots anyway, sooooo get a half dozen at least, perhaps 8, and now everyone is happy!





Any thoughts from the group?

Paul


-----Original Message-----
From: DrFranken [mailto:midrange@xxxxxxxxxxxx]
Sent: Monday, December 04, 2017 5:42 PM
To: Midrange Systems Technical Discussion; Steinmetz, Paul
Subject: Re: Power7 / Power8 internal disk differences

Yes and no.

Physically, the disks in the Power7 CEC are in slightly deeper and slightly taller sleds than the Power8 CEC disks. However, the disks could be re-sledded and used in the new system, as the actual disks themselves are compatible.

As for the 4K disks, they are better because they cut the number of I/Os by 8. IBM i has used 4K as a block size since the CISC-to-RISC migration years ago, but has always had to write eight 520-byte sectors to handle that. (Whether it only updates 1 of the 8 if a very small update is performed within the block, I do not know.)

Tracking what needs to be updated should be simpler with 4K Block disks and they are pretty much the standard in the industry now. With them you can address 8 times the storage that you could with 520 byte sectors using the same number of addresses.

To use 4K-block disks you need PCIe Gen3 RAID cards or newer on Power Systems. So, for example, FC #5913 or FC #ESA3 controllers will NOT recognize 4K disks. FC #5887 disk drawers work fine with either 4K or 520-byte disks.

Power8 CECs can have 520 or 4K Block disks in them.


- Larry "DrFranken" Bolhuis

www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.

On 12/4/2017 4:47 PM, Steinmetz, Paul wrote:
1) Are Power7 and Power8 internal disks different?

2) What is difference between the 5XX and 4k disks, which is better, and where are they used?

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/IBM%20i%20IO%20Support%20Details



Thank You
_____
Paul Steinmetz
IBM i Systems Administrator

Pencor Services, Inc.
462 Delaware Ave
Palmerton Pa 18071

610-826-9117 work
610-826-9188 fax
610-349-0913 cell
610-377-6012 home

psteinmetz@xxxxxxxxxx
http://www.pencor.com/


--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at https://archive.midrange.com/midrange-l.

Please contact support@xxxxxxxxxxxx for any subscription related questions.
