Score:-2

HP StorageWorks D2700 slow read speed

bh flag

So the company wanted a file server in our DMZ. It had to be set up rather quickly; we are looking at a proper storage solution for the longer term, but for now I was asked to see what I could do. We had some old StorageWorks disk arrays lying around that used to host our on-premises Exchange, along with the servers that held the controllers.

So I have two DL360 G7s on which I installed Server 2019. Each has a Smart Array P411 controller and connects with a single SAS cable to a StorageWorks D2700 holding 25 × 500 GB disks. The StorageWorks arrays are configured identically: the same RAID 5 set with the same settings and the same number of hot spare disks, i.e. a 23-disk RAID 5 set with 2 hot spares. The logical drives are the same size (10 TB), and the controller and caching settings are identical.

The setup is semi-redundant, as I use a Robocopy script to mirror the data from the main server to the backup at night. The main file server also has Microsoft shadow copies enabled.
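For reference, a nightly mirror along these lines (the share and log paths are just placeholders, not my actual paths) would look something like:

```bat
:: Nightly mirror from the main file server to the backup (example paths).
:: /MIR mirrors the tree including deletions, /R and /W limit retry count
:: and wait time, /LOG+ appends to a log file for review the next morning.
robocopy \\FILESRV1\Share D:\Share /MIR /R:3 /W:5 /LOG+:C:\Logs\mirror.log
```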

Last week one of our developers complained that the read speed was rather slow, so I installed some benchmark software (PerformanceTest 10.2) and ran some tests. The read speed of the main setup is about 80 MB/s, while the read speed of the backup setup is about 850 MB/s, roughly a factor of 10. The write speed is slightly slower on the main, but not significantly.
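As a cross-check against the GUI benchmark, Microsoft's DiskSpd gives comparable sequential-read numbers from the command line; something like the following (the drive letter and file size are just examples):

```bat
:: 30-second sequential read test: 64 KB blocks, 8 outstanding I/Os,
:: 4 threads, 0% writes, against a 10 GB test file on the suspect volume.
diskspd -c10G -b64K -d30 -o8 -t4 -si -w0 E:\testfile.dat
```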

I've tried changing the I/O port, swapping the SAS cables, disabling the shadow copies, and fiddling with the controller cache ratio and various other settings. Nothing has helped.

At a certain point I decided to swap the disk arrays between the servers, so server 1 had disk array 2 and vice versa, and the problem migrated to server 2. This leads me to conclude that whatever is wrong, the problem is in the D2700.

However, all the disk status LEDs are green, both I/O port LEDs are green, and the power supply LEDs are green. I've also run a diagnostic report on the array using HP's ACU, but I didn't see anything out of the ordinary.
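Besides the ACU GUI report, the Smart Array CLI that ships alongside it (hpacucli on controllers of this vintage) can dump per-drive state and cache/battery status; roughly like this (the slot number is an example and depends on where the P411 sits):

```bat
:: Show the full controller/array/logical-drive configuration and status
hpacucli ctrl all show config detail
:: Show controller details, including cache and battery/capacitor status
hpacucli ctrl slot=1 show detail
```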

At this point all I can come up with is powering the array down and re-seating all the disks, I/O modules and power supplies, as I'm out of ideas. Does anybody have any hints or suggestions for me?

Romeo Ninov avatar
in flag
23 disks in one RAID set (and RAID 5 at that)? This is crazy; HP's recommendation is a MAXIMUM of 14 disks per array. Even RAID 6 won't save you from trouble.
Score:6
br flag

The D2700 and DL360 G7 both went end of life over four years ago! Also, RAID 5 is seriously dangerous, especially in situations like this: a 23-disk R5 array is asking for your data to become corrupt. That's also why your array's performance is bad; even about fifteen years ago, when R5 was commonly used, you'd see poor performance on those Pxxx controllers once arrays grew much beyond 14 disks.

Rebuild the whole lot using R10 or R60; you'll see the performance increase and won't be actively inviting data loss. Oh, and get onto supportable kit soon too.
