With our previous installation project (part 1 & part 2) we installed the complete HP blade server infrastructure, with Generation 9 blade servers and Virtual Connect FlexFabric modules up and running in the rack. They have been on display in the PROOFMARK portal ever since, and quite a few of you have already seen them in action. Now it is time to add some serious storage to the picture!
First, I would like to take a moment and give a short introduction to the storage array concept and the vast advantages of a centralized storage solution over so-called direct-attached storage (DAS), i.e. “normal” hard drives installed directly into the server.
Let’s imagine a situation with 10 servers, each of which needs 1TB of disk capacity. Using the most traditional approach, we would install individual hard disks directly into the servers. For simplicity, let’s use one 1TB hard disk in each of the ten servers. Now, the first and immediate problem arises if one of those hard drives breaks down. In the worst case we will lose all the data on that hard drive if we didn’t happen to have a backup copy, and even if we did, there will be a loooong service break before we get the new hard drive, operating system, applications, services etc. installed.
Then there’s the capacity utilization problem. At the time of acquisition, it is very hard to predict how much disk capacity will eventually be needed. That’s why people tend to buy a bit more disk capacity up-front to make sure they don’t run out, while only a few servers will ever use all that capacity. If, for example, one server is using only 500GB of the available 1TB, you’ll pay for the other 500GB for nothing. On the other hand, if that 1TB is not enough for another server, there’s no way to borrow that leftover 500GB from the first server and you’ll have no choice but to buy more or bigger hard drives, shut down the server, install the hard drives and boot the server up again.
Performance-wise, 10 disks must be better than just 1, right? Correct. Kinda. Only not when they are installed in different servers. If there’s a very disk-intensive task running on one of the servers, once again that one server cannot borrow disk performance from the other servers’ disks.
Now, instead of 10 servers imagine the same situation with hundreds or thousands of servers. Would you volunteer administering that? I wouldn’t.
Using the storage array approach, we need virtually no local disk storage in the servers. So, no hard drives in the servers at all. Instead we would install, say, five of those 1TB disks in a centralized, so-called (disk) storage array that is connected to all 10 servers.
There’s a lot of built-in redundancy in storage arrays so losing one or two disks usually doesn’t matter at all. All you need to do is replace the broken disk at any convenient point while the servers will never skip a beat.
Of the total 5TB we have in the storage array we can initially allocate, say, 200GB for each server. Servers running out of capacity is not a problem either, since it is very easy to allocate more capacity with great granularity from the storage array. Allocating less capacity is just as easy if needed. There’s also a ton of other features that optimize capacity utilization, like thin provisioning, compression and de-duplication, but I’ll leave those for the “advanced course”.
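To make the capacity argument above concrete, here is a back-of-the-envelope sketch using the same 10-server example. The per-server usage numbers are hypothetical, picked only to illustrate the point:

```python
# Back-of-the-envelope comparison: DAS (one fixed 1TB disk per server)
# vs. a shared 5TB storage array pool. Usage figures are made up.

servers_tb_used = [0.5, 0.3, 0.9, 0.2, 0.4, 0.6, 0.1, 0.7, 0.3, 0.5]

# DAS: each server owns a full 1TB disk regardless of what it actually uses.
das_purchased = 1.0 * len(servers_tb_used)         # 10 TB bought in total
das_wasted = das_purchased - sum(servers_tb_used)  # capacity paid for but unused

# Storage array: buy one 5TB pool and carve out capacity with fine granularity.
pool_purchased = 5.0
pool_wasted = pool_purchased - sum(servers_tb_used)

print(f"DAS:  bought {das_purchased:.1f} TB, unused {das_wasted:.1f} TB")
print(f"Pool: bought {pool_purchased:.1f} TB, unused {pool_wasted:.1f} TB")
```

With these example numbers the DAS approach leaves 5.5TB of paid-for capacity idle, while the pooled approach leaves only 0.5TB, and that leftover can be allocated to whichever server needs it next.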
Sophisticated storage arrays can also leverage all disks for all read and write operations by any server so performance is also greatly boosted.
How about administering hundreds or thousands of servers using storage arrays? No problem, show me to my station!
And THAT, ladies and gentlemen, was a short introduction.
Now, let’s get down to business! What we have today is two brand new HP 3PAR StoreServ 7200c storage arrays.
The delivery arrived at our doorstep according to plan: three boxes nicely stacked up, containing the two HP 3PAR StoreServ 7200c’s and some installation media in the small box. Let’s begin with the boxing…
Ahem, ’nuff said? A picture is worth a thousand words.
“Worried” is the word I would use to describe my feelings when I saw that box in an electronics delivery that’s worth a fortune. Thankfully, though, it was just the software installation DVDs in that box and all of them survived the “Ace Ventura delivery” without a scratch.
Boxing and wrapping was once again first class, as always with HP stuff. This is the first sneak peek at one of the HP 3PAR StoreServ 7200c arrays we’ve got.
And here’s the rear view. Carton boxing done extremely well on the rear side as well, protecting the Power Cooling Modules (PCMs) on the left and right. As mentioned in the beginning, storage arrays have a lot of redundancy by default. One example of that is shown here with the two storage controller units (or two nodes, a node pair) in the middle. There are two of them basically just in case one of them blows up. Of course normal everyday operation is smoother with all the nodes fully functional, but 3PAR has no problem working with only one up and running.
So this is what we’ve got with the delivery (from bottom to top, left to right):
– 1 x HP 3PAR StoreServ 7200c storage array.
– 2 x standard rack mount rails. We’ll use these to, well, mount the 3PAR to the rack. The rack mount rails are included as standard with all storage arrays.
– Random accessories in a couple of plastic bags and a couple of power cords.
– And finally, in the back there is the pile of extra tough software installation media DVDs.
…and the whole thing times two, of course.
There it is. One HP 3PAR StoreServ 7200c storage array. 2U in height populated with eight 900GB 10k SAS disks.
As I’m sure we all remember, in 2010 HP acquired a relatively small yet highly respected California-based storage company, 3PAR, for a whopping $2.35 billion after a long and vigorous bidding war with Dell. Back then Dell had to settle for another promising storage box, Compellent, which Dell has managed to shape into a serious storage solution over the years. In all that, Dell saved some $1.4 billion compared to HP, so there’s a little balm for the wounds.
But in my opinion the acquisition by HP was the best thing that ever happened to 3PAR. It has come a long way in 5 years’ time, much further, I dare say, than it would ever have come by itself. Almost all other HP enterprise product names have drastically changed along the way, but HP wanted to keep 3PAR in the name of their storage portfolio’s crown jewel. That tells you something about the strength and respect of the 3PAR brand.
There are currently two 3PAR series:
– 3PAR 7000 series for mid-market and
– 3PAR 10000 series for enterprises
Both series have several different models that differ mainly in performance, scalability and redundancy. And oh yeah, price.
One of the very cool things about the 3PAR storage family is that all the features of the flagship HP 3PAR StoreServ 10800 are found in the entry-level 3PAR 7200 as well. This might not be absolutely unique on the storage market, but it definitely is not common.
Our models are the brand new HP 3PAR StoreServ 7200c. The “c” at the end of the model number means that in addition to the traditional so-called block persona, this new 7200c also has a file persona. In other words: while the 7200c can, naturally, serve normal block storage capacity where you can install your operating system and stuff, it also has File Services for storing files.
All this from a common storage pool so no capacity is wasted. Convenient.
A maximum of 24 disks can be installed in one controller enclosure, but this capacity can be further expanded by installing a maximum of 9 additional drive enclosures. Hence, the maximum number of disk drives in a 7200c configuration is 240 and the maximum raw capacity is roughly 500TB. Usable capacity varies greatly depending on the redundancy levels and other settings. For comparison, the meanest HP 3PAR 10800 has a maximum raw capacity of over 3 PB. That P is for Peta. Which is a lot. A lot.
Our configuration has a mere 8 x 900GB 10K SAS drives. That’s 7.2TB of raw capacity. Not much, but it fulfills its demo purpose brilliantly.
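The drive-count and capacity figures above are simple multiplication, sketched here for clarity (the enclosure limits are the ones quoted in the text):

```python
# Capacity math behind the HP 3PAR StoreServ 7200c figures quoted above.

disks_per_enclosure = 24   # a 7200c controller enclosure holds up to 24 drives
max_extra_enclosures = 9   # up to 9 additional drive enclosures can be attached

# Controller enclosure plus all expansion enclosures:
max_disks = disks_per_enclosure * (1 + max_extra_enclosures)
print(max_disks)           # 240 drives maximum

# Our demo configuration:
our_disks = 8
disk_size_gb = 900         # 900GB 10K SAS drives
raw_gb = our_disks * disk_size_gb
print(raw_gb)              # 7200 GB, i.e. 7.2TB of raw capacity
```

The roughly 500TB maximum raw capacity then follows from filling all 240 slots with the largest supported drives; usable capacity is lower and depends on the chosen redundancy levels.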
Rear View. Here we have four major components: First, the two 764W Gold Series Power Cooling Modules or PCMs on the left and right. These modules are responsible for keeping the controller node enclosure cooled and powered.
In the middle there’s the node pair. In all 3PAR arrays there’s a minimum of two controller nodes, which are the brain of the storage array. They are responsible for all the data access logic, compression, thin provisioning, zero detection calculations and other fancy stuff.
Here’s a close-up picture of a 3PAR 7200c node. On the left we have an RJ-45 port with the label “Mfg” next to it. This port is meant for accessing the 3PAR using the provided serial cable, should you need it. On the top there’s the one PCI expansion slot for additional host ports. Below the PCI expansion slot there are a couple of so-called interconnect ports that are used for externally connecting 3PAR controller nodes together. Since the maximum number of controller nodes in our 3PAR 7200c is two (and those are automatically internally interconnected), these ports will never be used in our configuration. No…I don’t know “why they are there then”.
In the top middle, there’s the lock handle for securing the node in place. Below it, the two device ports (DP-1 and DP-2) for additional drive enclosures.
On the right hand side there are four ports: the left ones (FC SFPs) are used for connecting 3PAR to the SAN fabric or directly to VC FlexFabric modules, which is called Direct Attach mode. For you to brag about. The two rightmost RJ-45 ports are for normal management access (Mgmt) and remote copy (RC-1).
I wanted to give you a shot of the famous 3PAR ASIC but decided not to blow the whole thing apart, so you’ll have to settle for a generic picture of a node’s insides like this. What’s so awesome about the 3PAR ASIC then, you ask? Well, unlike the vast majority of other storage arrays out there, in addition to the one main CPU that handles all the important data transfers, 3PAR has another chip called the “3PAR ASIC” that handles all the secondary operations like thin provisioning, compression, zero detection and de-duplication. This way, no matter how many of those less important operations are ongoing, production performance is never degraded. The 3PAR ASIC is actually one of the biggest edges 3PAR has over the competition.
…and then there’s a bunch of other components in there, too. I guess.
A rear view of the node. Look at that!
And there are the two identical 3PAR 7200c’s. With two arrays we can have some fun with features like Remote Copy and Peer Motion. Both move data between multiple arrays to achieve data protection or load balancing.
With the delivery comes a big bunch of software installation media. This stack of DVDs reminds me of one of the less fascinating sides of 3PAR: the licensing mess. Right after the acquisition, about 4-5 years ago, there were about 30 different licensable features, some of them per array, some per disk. And then there were/are the caps (maximums) on the licenses. Compare that to, for example, the HP StoreVirtual storage array, which has an all-inclusive licensing model: one price, all features available.
OK, in all fairness, it has gotten better (a lot better actually) with HP bundling many of those licenses as suites and adding more features to the mandatory Base OS Suite.
Everybody who’s installed something into a rack knows that an online manual and heavy machinery are all you need. And brute force.
Voilà! Both of the HP 3PAR StoreServ 7200c’s have found their way into the rack, right above the first generation D2D backup devices. Installing 3PARs in a rack really is a walk in the park: just install the included rack rails, mount the 3PAR on them and finally secure the array by tightening the screws in the front.
Rear view of the ready installed 3PARs. All four PCMs are powering & cooling the arrays, management cables (blue) are plugged in and finally the FC cables (orange) connected to the hosts. For those of you who noticed: yes, we did do some re-cabling after this picture was shot and moved one of the FC cables from node 1 to node 0 for a balanced and fault-tolerant configuration.
While I’m not going to go into any details of 3PAR management with this post, I have to mention this one thing. Just a short while back HP released a new 3PAR management interface called StoreServ Management Console or SSMC (top picture). The look & feel matches HP’s generic one-stop-shop data center management tool, HP OneView, and it is pure joy to use compared to the old graphical management interface, the 3PAR Management Console or IMC (bottom left). Besides the stunning visual look, one big advantage is that SSMC is a pure web application, so you can use it in your browser, in contrast to the Windows application that 3PAR IMC is. And of course there’s still the good ol’ 3PAR Command Line Interface (bottom right) for heavy users.
By the way, you can find all of the mentioned management tools (3PAR SSMC, 3PAR IMC, 3PAR CLI and OneView) in the PROOFMARK portal if you care for a test drive.
That’s all folks! Previously we installed the state-of-the-art blade server infrastructure with a lot of fancy features and power to mess with, and now we have the latest storage magic to complete the picture. Next up will be setting them up for each other. I think we’ll go for the clean Direct Attach alternative and ditch the SAN fabric from the middle, saving both costs and configuration time.
Every now and then I hear hardware guys picking on each other about which is more important: servers or storage. Server guys say storage is just a server accessory, whereas storage guys consider servers a front-end for storage. Then the virtualization guy crashes the party, saying that none of it matters as long as both exist, because he’ll render all of their abilities useless with a hypervisor. But the title goes to the cloud guy who drops the bomb: “There’s no need for hardware or software since all my stuff is in the Cloud.” Checkmate. T.K.O.
Well, we already have state-of-the-art blade servers with tons of power, the magical Virtual Connect domain and top-notch utility storage in addition to plenty of different hypervisors (VMware ESX, Microsoft Hyper-V, Citrix XenServer and KVM) running so…I’m thinking…why not put the cloud guy’s words to a test by installing and setting up HP Helion on top of all that. Yeah, that’s what we’ll do next!
Thanks for reading, stay tuned & see you!