Level: Beginner.
Estimated reading time: 10 minutes.
This is the second and final part of our HP BladeSystem installation project. In the first part we unboxed the blade enclosure, installed it into the rack and populated it with fans, Onboard Administrator modules, Virtual Connects and power supplies. The only tiny thing left is the blade servers themselves.
So, here we go again!
First, a little background story to get us in the mood. Traditional servers are pretty much like any other desktop computer: they have a motherboard, a processor, some memory and hard drive capacity, a power supply, fans, a lot of different kinds of fancy ports and so on. They can be installed directly into a rack, or they can be of tower form factor standing on a floor or a desk. The difference from normal desktop computers is that servers concentrate more on staying powered on, processor speed and memory capacity, and less on cheapness and Counter-Strike performance.
A quick question: how much memory do you have on your desktop/laptop computer? 4 gigs? 8 gigs? Even 16 gigs? Impressive. Well, a modern standard server can handle 3,072 gigs or even more. That’s one difference.
In many ways, blades are like traditional servers, but there are many differences too. Of course blades too want to be as fault-tolerant as possible and provide as much computing capacity as possible. This is no news. But the big difference to traditional servers is in the overall architecture design. Blades don’t have power supplies, they don’t have fans and they basically have no ports to plug any cables directly into. Pretty crappy, eh? Quite the contrary. All that “supporting IT infrastructure” is located at the enclosure level. Remember “Data center in a box”? All the power supplies and fans are installed into the enclosure. Physical Ethernet or FC cables are plugged into the interconnect module ports, and the IC modules themselves are installed, again, into the enclosure. Then the blades are installed into the enclosure, automatically connecting to the power supplies and IC modules while enjoying the overall cooling of the Active Cool Fans.
And it gets better. If you are using Virtual Connect modules (with boot-from-SAN) and one server blows up, all you need to do is replace the server with a fresh one and all the services will start blinking again in minutes. Meanwhile, Virtual Connect would most probably have failed over to some other server resource for the duration of the maintenance, so your tweets reached their destination without any delay.
All this is only possible with blades. That’s how cool they are.
This is an HP ProLiant BL460c Gen9 blade server. Regardless of the generation, the BL460c has always been the best-selling blade server in the HP BladeSystem portfolio, and it still holds the crown as the best-selling blade server in the world. The dimensions are roughly 18 cm x 6 cm x 52 cm (H x W x D) and, depending on the configuration, the weight is from 5 to 6.5 kilos.
Now, let’s get this baby wide open!
The first thing an experienced field engineer would grab after opening the main lid is the T10 Torx screwdriver located on the underside of the lid (as shown in the above pictures) or on top of the plastic memory cover. That screwdriver is used to remove components like the processors and mezzanine cards. But wait, WHAT?! There isn’t one in the new Gen9’s!
Well, the good news is that the T10 screwdriver used in the Gen9’s is the same as in the older blades, so I went and got one from an older Generation 5 server (pictured above) and used that for disassembling my Gen9. Maybe HP’s thinking is that throughout the years they have spread enough of those T10’s around the globe, and they decided to stop. Or maybe the HP engineers are just too clever for me and have hidden it somewhere very safe, out of my sight. If so, please somebody enlighten me. I miss my T10.
There it is now, in all its beauty. Visible components from the front (bottom of the picture) to the rear:
– Hard disk bays in the middle. We don’t have any hard disks since we are going to use boot-from-SAN from HP 3PAR storage arrays. For that reason you don’t see the Smart Array Controller (HP’s RAID controller) there either.
– Internal USB port on the bottom right. Used for example for OEM VMware installation.
– Processor #2 with 2 x 4 DIMM sockets, four on each side. Remember! You need to have both processors installed in modern servers if you want to utilize all the memory capacity and both mezzanine cards.
– Processor #1 with another 8 DIMM sockets, totaling 16 DIMM sockets. Using 32 GB DIMMs, the total amount of memory in a BL460c Gen9 is 512 GB (see the quick sanity check right after this list).
– 2 x mezzanine card slots. We have one 2-port 16 Gbit FC adapter installed there, plus one empty mezzanine slot.
– 2-port FlexLOM adapter on the top right of the picture. These are the default built-in adapters that need to match the interconnect modules installed in IC bays 1 & 2.
– Signal midplane connector.
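If you want to double-check the memory math above, here’s a quick back-of-the-napkin sketch in Python. The numbers are the ones from this article (two processors, eight DIMM sockets each, 32 GB DIMMs), not an official spec sheet:

```python
# Quick sanity check of the memory math above. The figures come from this
# article (two processors, eight DIMM sockets each, 32 GB DIMMs), not from
# an official spec sheet.
processors = 2
dimm_sockets_per_processor = 8
dimm_size_gb = 32

total_sockets = processors * dimm_sockets_per_processor   # 16 sockets
total_memory_gb = total_sockets * dimm_size_gb             # 512 GB

print(f"{total_sockets} DIMM sockets x {dimm_size_gb} GB = {total_memory_gb} GB")

# And remember: with only one processor installed, only its own sockets are
# usable, so the maximum drops to half.
print(f"Single-processor maximum: {dimm_sockets_per_processor * dimm_size_gb} GB")
```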
Internal view of the blade server with some of the components disassembled. The DIMMs are a bit more visible now since I removed the plastic covers on top of them. You can also see the removed mezzanine adapter above the blade, as well as the FlexLOM adapter at the bottom.
Then…there is this…one…thing on the right of the picture. The closest official name I have for it is “contraption”. As far as I know, it supports the two mezzanine cards and that’s it. It is kind of a twisted piece of some lightweight plastic-like material with no other real purpose. A true example of what an engineer’s mind is capable of.
HP iLO (Integrated Lights-Out) is the most important management component of an HP ProLiant server: a small full-blown computer installed on the motherboard of the host server. It has its own processor, memory, network port (with a dedicated IP address), graphics adapter and even a hardware firewall. You can remotely connect to the host server using iLO, mount ISO images from your (remote) laptop to the server as a DVD drive, power the server on and off, and much more. Even if you can’t (for some reason) access the host server itself, you most probably still have access to iLO, and you can, for example, boot the host server through iLO.
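To give you a taste of the remote-control part, here’s a minimal sketch of talking to iLO programmatically. It assumes an iLO 4 with firmware recent enough to expose the Redfish REST API, and the address and credentials are of course placeholders for your own environment:

```python
# Minimal sketch: query the power state and power a blade on through iLO's
# Redfish REST API (available on iLO 4 with reasonably recent firmware).
# The hostname and credentials below are placeholders.
import requests

ILO_HOST = "https://ilo-blade01.example.local"   # hypothetical iLO address
AUTH = ("Administrator", "your-ilo-password")     # placeholder credentials

# Read the current power state of the host server
resp = requests.get(f"{ILO_HOST}/redfish/v1/Systems/1/", auth=AUTH, verify=False)
resp.raise_for_status()
print("Power state:", resp.json().get("PowerState"))

# Power the host server on (works even if the OS itself is unreachable)
resp = requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "On"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
```

The same things can of course be done from the iLO web GUI; the API just becomes handy once you have sixteen blades to look after.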
Back in the Compaq days there used to be, not iLO, but RILOE (Remote Insight Lights-Out Edition), a PCI card that even had a separate power connector, so the host server could literally be out of power and there was still a chance that RILOE was accessible. Well, we haven’t had that guy around for about a decade, since HP discontinued RILOE after the Compaq acquisition and gradually moved to the iLO architecture. The current generation is iLO 4.
“8 times 32 gigs of RAM, equals to 256 gigs of fun.”
— Markus Leinonen (2015)
There’s the rear of the BL460c Gen9 server. In the middle there’s the 100-pin light grey midplane connector that connects the blade server’s iLO and the other adapters’ ports through the signal midplane to the physical ports at the rear of the enclosure.
Behind the midplane connector there are the two mezzanine slots (dark grey) and the iLO.
On the left-hand side there’s the FlexLOM card that houses the two default network ports. The FlexLOM type can be selected, and it can even be left out of the configuration completely.
Now, onto the installation itself. In the picture above you can see some of the empty blade bays in the enclosure. The “normal” form factor of the blades is the so-called half-height: they consume one blade bay each, and the maximum number of half-heights in an enclosure is 16. Taller full-height blades consume two blade bays, so you can only fit a maximum of eight full-heights into a c7000 enclosure.
By the way, you can also mix and match form factors, but, once again, there are some rules to be followed.
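Just to make the bay math concrete, here’s a tiny sketch of the basic capacity rule (half-height takes one bay, full-height takes two, and a c7000 has 16 half-height bays). It only checks the bay count, not HP’s detailed placement rules for mixed configurations:

```python
# Rough bay-count check for a c7000: 16 half-height bays in total, and a
# full-height blade consumes two of them. This only checks raw capacity,
# not HP's detailed placement rules for mixed form factors.
C7000_HALF_HEIGHT_BAYS = 16

def fits_in_c7000(half_height_blades: int, full_height_blades: int) -> bool:
    bays_needed = half_height_blades + 2 * full_height_blades
    return bays_needed <= C7000_HALF_HEIGHT_BAYS

print(fits_in_c7000(16, 0))   # True  - a full enclosure of BL460c's
print(fits_in_c7000(0, 8))    # True  - eight full-heights max
print(fits_in_c7000(10, 4))   # False - 10 + 8 = 18 bays needed
```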
To install full-height blades you need to remove the special divider shelf so that the taller blade fits in the enclosure. In the picture above, the divider shelf has been removed to make room for full-height blades to be installed in bays 3 and 4.
Since all our blades are half-heights (BL460c), I put the shelf back in place. This was just a demo. Just for you.
This internal view of a blade bay nicely reveals the blade server’s new home. Still remember the 100-pin signal midplane connector on the blade? Those two light grey square connectors in the middle of the picture are what it connects to. And yes, that green board is the signal midplane: a passive circuit board that handles all the signaling coming in, going out and moving within the enclosure. So, a pretty important piece of the puzzle, I would say. In other words, don’t touch it!
Since this whole configuration came to us factory installed, all the blades have these nice bay numbering labels, so installing them back into the correct slots is an easy task. If you start with an empty enclosure and a pile of blades, you’d populate the enclosure from left to right, and applying the provided numbering labels is left for you to do as well.
So, that’s almost one blade in. The installation procedure is the same as with the OA and IC modules: body first, all the way in. The blade servers are usually a bit tight to put in place, so a bit more force (that’s a bit more force, not a lot more violence) may be needed. When it’s fully connected, secure the blade in place by locking the latch at the bottom.
Ahh…all the blades and other components installed, just waiting for the power to be applied.
One final touch before powering it all up: plugging in the power cords. For that we first must install the power cord holders. Although it might not seem like much, this is actually a pretty important step that you should not forget or neglect. I’ve learned that the hard way a couple of times when installing something else into a rack: I accidentally touched a power cord and unplugged it. Of course, no device in the data center should be dependent on only one power cord, so thankfully I didn’t shut anything down, but still, that doesn’t lessen the importance of making sure the power cords stay plugged in.
And here we go! Power cords plugged in and power has been applied to the enclosure. All green, all good.
One thing worth mentioning is that a blade enclosure is not really meant to be shut down at the end of the office day, so there’s no need for a main power button. Once the power cords are plugged in, the enclosure instantly powers on.
I would recommend turning the PDU (Power Distribution Unit) circuit breakers off for the cabling and, when all the cables are connected, turning the circuit breakers back on one by one.
All blade servers and power supplies up and running and happy green.
And finally, there’s the Insight Display, that little LCD display at the bottom front of the enclosure. You can do quite a few tricks using the Insight Display and the push buttons on its right-hand side, but all you really need to do is give the OA a management IP address after you’ve got it powered up, and then start managing the enclosure from the web GUI, which is much more enjoyable.
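And once the OA has its management IP address, you don’t even have to do everything in the web GUI: the OA can also be reached over SSH. Here’s a minimal sketch using the paramiko library; the address, credentials and the exact CLI command are assumptions you should check against your own environment and OA documentation:

```python
# Minimal sketch: ask the Onboard Administrator for its server list over SSH.
# Requires the third-party "paramiko" library. The address, credentials and
# the CLI command below are assumptions; check them against your OA firmware
# documentation (some firmware versions may need an interactive shell instead
# of a one-shot command).
import paramiko

OA_ADDRESS = "10.0.0.10"          # hypothetical OA management IP
OA_USER = "Administrator"
OA_PASSWORD = "your-oa-password"  # placeholder

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(OA_ADDRESS, username=OA_USER, password=OA_PASSWORD)

# List the blades the enclosure currently sees
_, stdout, _ = client.exec_command("SHOW SERVER LIST")
print(stdout.read().decode())

client.close()
```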
That’s about it. Easy as pie. From now on it’s all about configuring the Onboard Administrator, setting up all the IP addresses, installing operating systems and you know, the usual stuff.
Thanks for reading the HP BladeSystem installation series. Until next time!