Level: Beginner.
Estimated reading time: 10-15 minutes.
It was two days before Christmas Eve in 2014. I was at home, already slowing down for the holidays, when I got a call from a courier. Roughly 15 minutes later I was unboxing a big, heavy Christmas present and… OH YEAH! Christmas had come early that year: Santa (and maybe HP Finland too) had listened to our humble wishes and sent us a bunch of super duper cool stuff like:
…all to be used for demo purposes in the PROOFMARK portal!
We already have a few blade enclosures in our demo center, covering all previous blade generations (from G1 onwards), but now we are talking about the meanest and baddest, state-of-the-art Generation 9s.
Installing and configuring all of that is pretty straightforward. According to the manual, at least; practice might be a different story. So, I decided to document the whole installation process as a two-part blog post series. And, like it or not, I will also share some thoughts about the blade server concept in general, spiced with my personal opinions.
So, without any further ado, let's begin!
First, I must praise the packaging for some small but extremely practical solutions that make boxing and unboxing simple. For example, the whole top of the box can simply be slid upwards without cutting or removing anything. Also, the front and rear parts of the bottom of the box can be tilted downwards for easier unboxing, so there's no need to lift the whole enclosure if you want to take some components out from the bottom of it.
Bravo! …to whoever designs these boxes.
On the other hand, all the styrofoam padding has been extremely… hmm… imaginatively designed, to say the least. I have absolutely no explanation for the numerous edges, corners and pointy ends, but I'm sure they have an important purpose. Maybe it's for aerodynamics (just in case)?
Anyway, I have about one cubic meter of package to unbox and carry to the data center. It's the holiday season already and I'm alone at the office, and since our enclosure weighs about 150 kilos (unboxed), no matter how much I think of my biceps there's no way I can carry the whole enclosure by myself. So, the first thing to do is to get as much of the packaging out of the way as possible and dismantle the enclosure into as small pieces as possible.
Thankfully, all the blade servers and power modules in the front, and all the interconnect modules (Virtual Connects) and fans in the rear, come off easily. Stripped down like that, the enclosure itself weighs less than 90 kilos, and it can be further split in half (about 50 and 40 kilos apiece) to make the carrying even easier if needed. I got help carrying the enclosure to the data center, so no need for that this time. By the way, a fully populated c7000 enclosure can weigh more than 200 kilos unboxed. So you'll probably need both your hands lifting it into a rack.
That’s the rear view of the blade enclosure already installed into the rack. From top down we have…
– 5 empty fan slots
– 2 empty interconnect module bays. Interconnect modules are meant for all kinds of switches and other "data transfer modules". We'll talk about these things later.
– 6 more empty interconnect module (IC) bays, but these ones have the dust covers (blanks) installed. That makes a total of 8 IC module bays in one c7000 enclosure.
– Empty Onboard Administrator tray bay. That's the brains & logic of the blade enclosure. Seems pretty "lobotomized" at the moment, eh? We'll change that shortly.
– 5 more empty fan slots, making it a maximum of 10 fans in a c7000.
– Finally, 6 single-phase power cable connectors to be connected to power outlets.
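Just to recap that rear layout in a more structured form, here's a minimal sketch in Python. The names and structure are purely my own invention for illustration; they don't correspond to any HP tool or API.

```python
# A plain-Python summary of the c7000 rear layout described above.
# Purely illustrative; the names are mine, not HP's.
C7000_REAR = {
    "fan_bays": 10,           # 5 on the top row, 5 on the bottom row
    "interconnect_bays": 8,   # for switches and other "data transfer modules"
    "oa_tray_bays": 1,        # one tray, holding up to 2 OA modules
    "power_inputs": 6,        # single-phase power cable connectors
}

for component, count in C7000_REAR.items():
    print(f"{component}: {count}")
```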
Oh, by the way, installing the enclosure rack rails using the included rack mount kit is a walk in the park. You don't even need any tools, screws or anything. Just extend the two-part rails to the correct length and they snap right into place in any standard rack. Even my mom could do it. Well, not really, but you get the picture.
And on the table from left to right:
– 6 power supplies
– 8 brand new Generation 9 blade servers with just a tad over 1 TB of RAM! We will spend a lot more time talking about these bad boys later!
– 10 fans
– 2 Virtual Connect Modules and an Onboard Administrator tray (with one OA module).
– 6 power cables
– Some random accessories
There we go: all the components on the table and the empty chassis (yes, that's another name for the enclosure) installed in the rack, waiting to be repopulated with all the stolen fans, power supplies, servers, interconnect modules and Onboard Administrator modules. I've rolled up my sleeves already, so let's go!
Let's start with the fans. These 10 Active Cool Fans (as HP calls them) cool almost the whole enclosure centrally: all the components, including the servers, interconnect modules, Onboard Administrator modules, internal circuit boards and so on. The only modules in the enclosure that have their own fans are the power supplies. This is the whole beauty of the blade server concept in general: we have a chassis which is much like a data center in that it has four walls, a roof and a floor. Then we install some fans in the chassis to keep it cool (like the big cooling units in a data center) and finally we add power supplies to provide power to the whole enclosure. After that we can start carrying all the geeky stuff in!
I remember one marketing slogan that HP used back in the day describing the blade concept: “HP BladeSystem – Data Center in a box” (or something like that). I like that a lot. Because that’s what it basically is!
The design of the Active Cool Fans is said to be inspired by jet engines, and they are covered by some 20 patents. All I know is that they are some pretty damn powerful air movers! Anyone who has tried stressing a fully populated c7000 to the limit knows what I'm talking about.
Rear view. So, jet engine inspired, huh? Yep, and they even say that if you look very carefully, you can see a faint Rolls Royce logo printed inside those blowers. Urban legend? Beats me. I’ve never seen them but that doesn’t prove anything.
(My apologies for the slightly blurry photo here.)
You can actually get your chassis with fewer than 10 fans to save some shillings, but if you do, you need to remember a few population rules. The minimum number of fans is 4, or the Onboard Administrator won't start. AND with 4 fans you can only use 2 out of the 16 blade slots. So, you've paid for the mighty 16-slot blade chassis but decided to use only 2 blades? OK… why? I really can't think of any good reason to go with 4 fans. Nevertheless, if you do, you must populate the fans in bays 4, 5, 9 and 10, i.e. the rightmost bays. That's because you'd start populating the servers in the front from left to right.
Here you can see the rest of the recommended best-practice fan configurations: 6, 8 and 10 fans. With 6 fans you can have one half (to be specific, the left half) of the blades running, and with 8 or 10 fans all 16 blades can run simultaneously (the way it's meant to be).
Using 10 fans also gives you one extra edge in the form of redundancy: you can lose 2 fans and still have the whole enclosure up&running.
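To make those population rules a bit more concrete, here's a minimal rule-checker sketch in Python. It encodes only what's stated above; the exact bay numbers for the 6- and 8-fan layouts aren't listed here, so verify those against HP's setup and installation guide. This is an illustration, not an official HP tool.

```python
# c7000 fan population rules as described above. Bay numbers for the
# 6- and 8-fan layouts are not given in the text, hence None below;
# check HP's setup and installation guide for the exact bays.
FAN_RULES = {
    4:  {"bays": {4, 5, 9, 10}, "max_blades": 2},   # rightmost bays only
    6:  {"bays": None, "max_blades": 8},            # left half of the blades
    8:  {"bays": None, "max_blades": 16},
    10: {"bays": None, "max_blades": 16},           # survives losing 2 fans
}

def check_fan_config(fan_count: int, blade_count: int) -> str:
    """Check whether a fan count can support the desired blade count."""
    if fan_count < 4:
        return "Invalid: with fewer than 4 fans the Onboard Administrator won't start."
    rule = FAN_RULES.get(fan_count)
    if rule is None:
        return f"{fan_count} fans is not a recommended best-practice configuration."
    if blade_count > rule["max_blades"]:
        return (f"Invalid: {fan_count} fans support at most "
                f"{rule['max_blades']} blades, not {blade_count}.")
    return f"OK: {fan_count} fans support up to {rule['max_blades']} blades."

print(check_fan_config(4, 8))    # Invalid: at most 2 blades with 4 fans
print(check_fan_config(10, 16))  # OK
```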
Our chassis came with all 10 fans, so installing them in the correct bays is pretty straightforward.
OK, the fans are in. Next up, the brains of the chassis: the Onboard Administrator (or "OA" among friends) modules.
As mentioned before, the Onboard Administrator (OA) is the management module of the whole chassis. You can use the OA to set the IP addresses of all the components in the chassis, define power modes, boot-up sequences, e-mail notification settings and a ton of other things. You can access the OA either through the GUI (web browser), the built-in LCD display (called Insight Display) or the command line interface.
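Since the OA has a proper command line, you can also script against it. Here's a minimal sketch that pulls the enclosure status over SSH with Python and paramiko; the IP address and credentials are made-up placeholders, and you should verify the exact command names against the OA CLI user guide for your firmware version.

```python
# Minimal sketch: query enclosure status from the Onboard Administrator
# CLI over SSH. The host and credentials below are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="Administrator", password="changeme")

# "SHOW ENCLOSURE STATUS" is an OA CLI command; see the OA CLI user
# guide for the full command set on your firmware version.
stdin, stdout, stderr = client.exec_command("SHOW ENCLOSURE STATUS")
print(stdout.read().decode())

client.close()
```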
The OA hardware entity consists of a couple of different components: the OA tray (in the back), the OA module itself (front left) and a dust cover (in case you only have one OA module). In most production configurations you'd always have two OA modules for redundancy (side note: I wish I had a redundant pair of brains on certain mornings), but since our chassis is purely for educational and demonstration purposes, we can manage with one OA module and even tolerate losing it.
Actually, the chassis can run without the OA modules completely. It can't boot without them, but if all the OA modules fail while the enclosure is up & running, all the fancy optimization logic is gone, removed, head shot, and the enclosure falls into survival mode: it makes all the fans blow at warp speed, doesn't enforce any power limitations and, most importantly, makes all the LEDs go into David Guetta mode. It's fun to watch. Then, when you reinstall the OA modules, everything immediately goes back to (boring) normal.
That's a close-up of the OA tray. In the middle there are a couple of standard RJ-45 ports. They are called Enclosure Interlinks and they are used to… well, link enclosures together. This way, when you connect to one of the OA modules, you can manage all the linked enclosures. Handy! The maximum number of enclosures you can link together is, unfortunately, only 4.
The OA module itself. The ports from left to right are:
– iLO port. Used to connect to the OA itself plus all the blade servers' Integrated Lights-Out management chips. So, no 16 separate network cables (as the situation would be with 16 rack-mounted servers) but only one.
– USB port for updating the enclosure firmware, uploading/downloading configuration and mounting ISO images as optical drives to the blade servers.
– Serial port. Nuff said? =D Well, not much used anymore. Mostly due to the fact that I'd have to go to some used computer store to first buy a computer that has a serial port and then to another used computer store to buy a serial cable.
– VGA port for KVM (Keyboard, Video, Mouse) capabilities, since the blades themselves don't have those ports. Well, actually they kinda do through a special adapter, but that's cheating. Much like my MacBook Air's Thunderbolt port. "Sure, you have all the ports in the world available", said the Apple Genius. "Just 49,95€ per port", the Genius continued.
The OA tray is located just beneath the interconnect modules and takes up the whole width of the enclosure. You first need to install the tray, and only after it is securely in place can you install the OA modules. The same goes the other way around: you cannot remove the OA tray without first removing both of the OA modules.
See those purple handles? You first push the module from the BODY all the way deep into the enclosure and THEN use that handle just to lock the module in place. NOT to push the module in. Approximately 330 service requests saved there. You can thank me later, HP.