At this stage I figured it could just be a little bug in the logging level or something, so I pulled new firmware down and installed it, hoping for a quick fix and a free patch in the process.
Of course, part of the ILOM firmware patch is also a BIOS patch, so the server needs a reboot.
Luckily this particular server is Well Looked After, so the reboot should be no big deal.
Except no - the Infiniband interfaces, which are running IP over Infiniband, are showing as up and have link, but can't send or receive any network traffic.
So I play around with the TCP stack a little: validate that configurations haven't changed, bring the bonded interface up and down, restart the network, remove one interface from the bond and try the other, all with no luck. TCP won't go.
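To give a rough idea of that poking around, it looks something like the below - the bond and interface names here are assumptions for illustration, and the ping target is a placeholder:

<code>
# Check the IPoIB bond and its slaves still look right
cat /proc/net/bonding/bondib0
cat /etc/sysconfig/network-scripts/ifcfg-bondib0

# Bounce the bond, then the whole network stack
ifdown bondib0 && ifup bondib0
service network restart

# Drop one slave out of the bond and test the other on its own
ifenslave -d bondib0 ib1
ping -c 3 192.168.10.2   # placeholder peer IPoIB address
</code>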
Technically, firmware is a last resort, especially for something that was working beforehand, but the support notes for the ILOM firmware update have planted it in my mind that any HBAs and HCAs on a system getting a BIOS update might need to be patched at the same time. While I'm still in a patchy mood, I verify the firmware levels against the vendor website to make sure we're on the latest for the IB HCA.
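Checking the HCA firmware level from the OS side is quick; ibv_devinfo (or ibstat) reports it, along these lines - the output below is illustrative, not from the actual box:

<code>
# ibv_devinfo | grep -E "hca_id|fw_ver"
hca_id: mlx4_0
        fw_ver: 2.11.2012
</code>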
Then I started with the Infiniband diagnostic tools.
ibdiagnet shows we're pretty much seeing all devices across the IB network.
iblinkinfo shows plenty of active links, except the two on this particular problem server.
ibswitches confirms switch connectivity.
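For anyone following along, these are the plain invocations (ibdiagnet from ibutils, the others from infiniband-diags); no special flags needed for a first look:

<code>
# Sweep the fabric and report any errors it finds
ibdiagnet

# Show every link on the fabric, its state and what it connects to
iblinkinfo

# List the switches discovered on the fabric
ibswitches
</code>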
ibstat is showing LinkUp on both ib0 and ib1, but I'm worried the interfaces are stuck in the "Initializing" state and never reach "Active".
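Illustrative ibstat output for a port stuck like this (CA name made up): the physical link is up, but the port never leaves Initializing and the base lid stays at 0 because no Subnet Manager has programmed it:

<code>
# ibstat
CA 'mlx4_0'
        ...
        Port 1:
                State: Initializing
                Physical state: LinkUp
                Rate: 40
                Base lid: 0
                SM lid: 0
                ...
</code>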
Logging into the web management interface on the Infiniband switch's ILOM shows nothing wrong, but confirms the missing LID. All the working interfaces have a LID assigned; I confirm this on another server.
I run ibdiagnet one more time and see an error that catches my eye:

//Missing master SM in the discover fabric//

Now I don't know everything about Infiniband, but I do know from my Exadata patching days that each Infiniband switch runs an SM (Subnet Manager) process, which manages the devices on each Infiniband subnet (you can segment Infiniband switches and devices much like you would with VLANs). I recalled that the instructions for patching the Infiniband switches say to ensure there is always an SM running on one of the redundant switches, and to make sure one of them is the master, with a higher priority than the others.
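There are a couple of ways to check on the Subnet Managers. From any server with infiniband-diags, sminfo asks the fabric which SM (if any) this port can see; on the Sun/Oracle IB switches themselves, getmaster and setsmpriority report the master role and priority - the commands below are as I understand them from the switch documentation:

<code>
# From a server on the fabric: which Subnet Manager does this port see?
sminfo

# On each switch's management controller:
getmaster            # which switch currently holds the master SM role
setsmpriority list   # the SM priority configured on this switch
</code>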
| + | |||
| + | Long story short, logging into each Infiniband switch and ensuring there was one with a higher SM priority, and then disabling it and re-enabling SM and instantly a lid was assigned for each failing Infiniband interface and IPoIB kicked in and connectivity started working between the servers. Clearly the SM had been hung up and was back in action assigning addresses. It took a few more moments but the Infiniband cards then moved into an " | ||
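Afterwards the same checks read the way you'd hope (the ping target is a placeholder again):

<code>
# Port state should now read Active rather than Initializing,
# with a non-zero Base lid assigned by the SM
ibstat

# And IPoIB passes traffic between the servers again
ping -c 3 192.168.10.2
</code>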
| + | |||
| + | But back to the ILOM filesystem filling up - the ILOM firmware update didn't seem to have reduced the amount of logging - but I was able to identify the source by paying more attention to the processes on the machine. I've been aware for a long time that kipmi0 runs hot on the CPU on this machine, although with very low priority, but now the repeated ILOM logins suggested that an IPMI-interfacing tool was hanging around and perhaps the IPMI tool was logging in regularly to check for any alerts from the ILOM. On a hunch I shutdown a process I thought it could be, and I was right. //kipmi0// spawns from checks made by the Oracle | ||
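If you want to confirm the same hunch, the Hardware Management Pack daemon runs as an ordinary service on this era of Oracle Linux, so briefly stopping it and watching the ILOM event log is enough - roughly:

<code>
# kipmi0 kernel thread chewing CPU while the daemon polls the ILOM
top -b -n 1 | grep kipmi0

# Stop the Oracle Hardware Management Pack daemon and see whether the
# repeated ILOM logins (and the log growth) stop with it
service hwmgmtd stop

# Re-enable it once done testing
service hwmgmtd start
</code>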
| + | |||
| + | Urgh, what a rabbit hole. I surely could have done that better, but I'm glad I inadvertently discovered the Subnet Manager process on the Infiniband network was failing. That could have led to a whole host of other nightmares when we get round to rebooting the Exadata servers. | ||
| + | |||
| + | |||
| + | < | ||
| + | On the management controller, type: | ||
| + | |||
| + | # setsmpriority priority | ||
| + | |||
| + | where priority is 0 (lowest) to 13 (highest). For example: | ||
| + | |||
| + | # setsmpriority 3 | ||
| + | ------------------------------------------------- | ||
| + | OpenSM 3.2.6_20090717 | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | | ||
| + | Command Line Arguments: | ||
| + | | ||
| + | | ||
| + | Log File: / | ||
| + | ------------------------------------------------- | ||
| + | # | ||
| + | |||
| + | Restart the Subnet Manager: | ||
| + | |||
| + | # disablesm | ||
| + | Stopping IB Subnet Manager.. | ||
| + | # enablesm | ||
| + | Starting IB Subnet Manager. | ||
| + | # | ||
| + | </ | ||