Hackable Intel and Lenovo Hardware That Went Undetected For 5 Years Won't Ever Be Fixed (arstechnica.com) 62
An anonymous reader quotes a report from Ars Technica: Hardware sold for years by the likes of Intel and Lenovo contains a remotely exploitable vulnerability that will never be fixed. The cause: a supply chain snafu involving an open source software package and hardware from multiple manufacturers that directly or indirectly incorporated it into their products. Researchers from security firm Binarly have confirmed that the lapse has resulted in Intel, Lenovo, and Supermicro shipping server hardware that contains a vulnerability that can be exploited to reveal security-critical information. The researchers, however, went on to warn that any hardware that incorporates certain generations of baseboard management controllers made by Duluth, Georgia-based AMI or Taiwan-based ATEN is also affected.
BMCs are tiny computers soldered into the motherboard of servers that allow cloud centers, and sometimes their customers, to streamline the remote management of vast fleets of servers. They enable administrators to remotely reinstall OSes, install and uninstall apps, and control just about every other aspect of the system -- even when it's turned off. BMCs provide what's known in the industry as "lights-out" system management. AMI and ATEN are two of several makers of BMCs. For years, BMCs from multiple manufacturers have incorporated vulnerable versions of open source software known as lighttpd. Lighttpd is a fast, lightweight web server that's compatible with various hardware and software platforms. It's used in all kinds of wares, including in embedded devices like BMCs, to allow remote administrators to control servers remotely with HTTP requests. [...] "All these years, [the lighttpd vulnerability] was present inside the firmware and nobody cared to update one of the third-party components used to build this firmware image," Binarly researchers wrote Thursday. "This is another perfect example of inconsistencies in the firmware supply chain. A very outdated third-party component present in the latest version of firmware, creating additional risk for end users. Are there more systems that use the vulnerable version of lighttpd across the industry?"
The vulnerability makes it possible for hackers to identify memory addresses responsible for handling key functions. Operating systems take pains to randomize and conceal these locations so they can't be used in software exploits. By chaining an exploit for the lighttpd vulnerability with a separate vulnerability, hackers could defeat this standard protection, which is known as address space layout randomization. The chaining of two or more exploits has become a common feature of hacking attacks these days as software makers continue to add anti-exploitation protections to their code. Tracking the supply chain for the multiple BMCs used across multiple lines of server hardware is difficult. So far, Binarly has identified AMI's MegaRAC BMC as one of the vulnerable BMCs. The security firm has confirmed that the AMI BMC is contained in the Intel Server System M70KLP hardware. Information about BMCs from ATEN or hardware from Lenovo and Supermicro isn't available at the moment. The vulnerability is present in any hardware that uses lighttpd versions 1.4.35, 1.4.45, and 1.4.51. "A potential attacker can exploit this vulnerability in order to read memory of Lighttpd Web Server process," Binarly researchers wrote in an advisory. "This may lead to sensitive data exfiltration, such as memory addresses, which can be used to bypass security mechanisms such as ASLR." Advisories are available here, here, and here.
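To make the ASLR point above concrete: once one code pointer leaks, everything else is simple subtraction. A minimal sketch, with entirely made-up addresses and offsets:

# Hypothetical: an out-of-bounds read leaked a code pointer from the
# lighttpd process, and a firmware dump gave that symbol's offset
# within the image (both values below are invented for illustration).
leaked_addr=0x7f3c0001a2b0
symbol_offset=0x1a2b0
base=$(( leaked_addr - symbol_offset ))
printf 'randomized image base: %#x\n' "$base"
# Every other function or ROP gadget is now base + known_offset, which
# is exactly what a chained memory-corruption exploit needs.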
MBA stupidity ... (Score:2)
... strikes again!
Re: MBA stupidity ... (Score:5, Insightful)
MegaRAC is popular with vendors because they can quickly turn out some semblance of a BMC cheaply and on a short timeline.
Product managers love pouncing on that, despite repeated incidents where MegaRAC has made a mess.
Sure, AMI deserves technical shame, but a lot of MBAs enable them.
Re: (Score:2)
The MBAs didn't write the flawed code but they were the ones that chose to use this particular BMC implementation and chose not to green-light an update for it on the affected hardware that fixes the flaws.
Re: (Score:2)
MegaRAC.. (Score:5, Interesting)
MegaRAC is just such garbage, but it's the default choice for low-effort systems. Some places standardized on it to the point of demanding that vendors with better BMC stacks offer a MegaRAC variant, and it baffles me.
Dell, HPE, and nowadays Lenovo (they bought IBM's BMC stack and eventually just went to that) are credible now. Some of the white boxes have gone OpenBMC, which is at least nicer than MegaRAC, though I have no idea about typical implementation security.
Re: (Score:2)
Re: (Score:2)
This story is specifically about failing to update lighttpd in BMCs, which are server-specific.
Yes, we have even more gnarly Intel ME in the desktop space, but fortunately it's at least disabled for remote access for 99.9% of devices. Yes, UEFI and ME updates owing to Intel and AMD security bugs abound, though to be fair the UEFI bugs were just "normal behavior" in the PC BIOS era, so while UEFI could be better, the vulnerabilities mostly "degrade to legacy BIOS level of security" rather than "worse than
BMCs shouldn't be on the Internet (Score:5, Informative)
They've always been a source of security issues; proper BMC setup should be putting it on a dedicated VLAN because of that.
The real problem though is SuperMicro's penchant for being "helpful" by automatically moving the BMC NIC from the dedicated port to sharing the first on-board NIC that has link. I've seen systems with no plans to use the BMC get compromised that way, because SuperMicro also used a default user/password for far too long (as far as I know, they may still do that today).
Re:BMCs shouldn't be on the Internet (Score:5, Informative)
Re: (Score:2)
Is the password truly random and a decent length, or can it be calculated from the interface MAC or something like that?
Years ago an ISP in the UK had that problem. Every router had a default WiFi password that was just the WiFi MAC address trivially transformed.
Re: (Score:1)
I can't attest to the algorithm they're using, but it looks pretty random and covers the full alphabet. It's printed on stickers on the motherboard along with other identifying information like serial number and MAC addresses.
The issue is this is only a minor roadblock. Default passwords are always exactly 10 characters long and all UPPERCASE. Most seriously, it turns out the IPMI protocol makes it possible to extract password hashes from the BMC if you can guess a username. So extracting a hash and turning
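Back-of-envelope math for why that is only a minor roadblock (the offline guess rate below is an assumption, not a measurement):

# Keyspace of a 10-character, A-Z-only default password:
echo $(( 26 ** 10 ))    # 141167095653376, roughly 1.4e14 candidates
# Cracking an extracted IPMI hash offline at an assumed 1e9 guesses per
# second would take ~1.4e5 seconds -- under two days on one machine.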
Re: (Score:2)
1) VLANs separate layer-2 traffic. They provide no inherent protection against anything.
2) Shared BMC/NICs are not listening to the same traffic. They have separate MACs and IP prefixes attached to them.
Default creds is a problem, for sure- but the real problem is having your BMC accessible via the public internet without a fucking firewall. And that is ridiculously common.
Whenever a new exploit comes out, we scan our se
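A minimal sketch of the border filter being described, using nftables; the interface name and management prefix are placeholders:

# Drop anything arriving on the upstream interface that is destined
# for the management network.
nft add table inet border
nft add chain inet border fwd '{ type filter hook forward priority 0 ; policy accept ; }'
nft add rule inet border fwd iifname "wan0" ip daddr 10.20.0.0/24 drop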
Re: (Score:2)
Not if you configure things right. If the switch is correctly configured to tag packets incoming from its uplink ports with the public VLAN tag (whatever VLAN number you designate for that) and you put the BMCs on a DIFFERENT VLAN, they are not visible to the public.
THEN you put the BMCs on a non-routable subnet.
Then you use a well firewalled jump box with access restricted to authorized techs to allow remote access to the BMCs.
There IS a risk there that if you allow the BMCs to share the host's port rathe
Re: (Score:2)
And if they're not attached to an unrouted network, a VLAN doesn't limit access by any virtue of itself anyway.
VLAN separation is not a security feature, it's a management tool, and arguably a mis-tool.
Re: (Score:2)
VLAN can be as secure as separate switches if done right. If the BMC's packets are tagged VLAN2 and the uplink port is VLAN10, the BMC will never see any packets from the internet even if someone manages to get an upstream router to route the 10 net. Packets from the BMC tagged VLAN2 are not going out the uplink even if the destination MAC address is the gateway. ARP won't happen anyway so the BMC can't even discover the gateway's MAC address.
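For the shape of that setup, a minimal sketch using Linux bridge VLAN filtering as a stand-in for a managed switch; interface names are placeholders, VLAN numbers match the comment:

ip link add br0 type bridge vlan_filtering 1
ip link set br0 up
ip link set eth0 master br0                       # uplink port
ip link set eth1 master br0                       # BMC-facing port
bridge vlan del dev eth0 vid 1                    # drop the default VLAN
bridge vlan add dev eth0 vid 10 pvid untagged     # uplink on VLAN 10
bridge vlan del dev eth1 vid 1
bridge vlan add dev eth1 vid 2 pvid untagged      # BMC on VLAN 2
# No port carries both VIDs, so frames from the uplink can never reach
# the BMC port at layer 2, whatever an upstream router decides to route.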
Re: (Score:2)
If you have a broadcast domain security problem when it comes to public reachability, no VLAN is going to un-save the fail you're responsible for.
The BMC cannot see packets that are not addressed to its MAC address.
The upstream router will not send packets to that address unless there is a route for it.
The upstream router will not receive packets for that address
Re: (Score:2)
Just repeating yourself won't make it true.
Let's say I hack the upstream router to actually route the 10 net. With properly configured VLANs, the BMC STILL won't see the packets. Belt and suspenders.
Re: (Score:2)
"Hack the upstream router"?
Get the fuck out of here, you fucking poser.
You going to hack "every fucking upstream router" to get "the 10 net" to your machine, and then hack the fucking internet to get it back?
You crawled up the wrong fucking tree, my dude.
I run a national network.
Re: (Score:2)
And I do pentests. I have also seen actual cases where I was getting martians from the mis-configured upstream router in a colo. The martians probably came from another customer in the datacenter. That's how hacks spread from domain to domain, though in that case I don't think anything malicious was going on.
In another incident, I found a router that had telnet open reachable through a dial-up. It wouldn't take long to guess a password given that it wasn't logging failed attempts.
Note that even if you only m
Re: (Score:2)
That's not the upstream router being misconfigured, that's some device on some network somewhere sending you packets that you can't possibly hope to respond to.
And that's fine. A packet that you can't respond to is a packet that can't A) brute force you, B) establish a connection with you.
Unauthenticated datagram protocols are a problem, of course, but that's why literally nothing uses them anymore except for single-hop switch multicast protocols.
In another incident, I found a router that had telnet open reachable through a dial-up. It wouldn't take long to guess a password given that it wasn't logging failed attempts.
Yes, it wou
Re: (Score:2)
If martians could come from another customer's network to mine, I have no reason to believe it couldn't go the other way.
The colo manager I contacted about it thought it was anything but normal. The 10 net should have been null routed, of course.
You may be surprised to learn that the little bitty microcontroller most BMCs are based on have significantly less computational power than a 32 core Epyc CPU does...
Our networks aren't the ones that get pwned. It's our customers. You, in this instance, would be one of our customers.
And that is why I would VLAN my uplink off from my management network. I don't trust your router's c
Re: (Score:2)
If martians could come from another customer's network to mine, I have no reason to believe it couldn't go the other way.
Eh?
This is simple to demonstrate.
Computer A on some network has an address of 192.168.1.2.
They send a packet to your publicly accessible machine at 8.8.8.9.
Since the destination is a valid, publicly routable address, the entire internet will happily keep that packet moving toward you.
That comes into your network as a martian. You cannot reply to it, because your network cannot get to an RFC1918 address, and source validation is not enforced on the general internet.
You try to reply to 192.168.1.2, and yo
Re: (Score:2)
Cisco has a bad habit of doing proxy arp.
Re: (Score:2)
All Proxy ARP does is make it so that the Cisco will respond to an ARP request for an address that it has in its RIB.
For Proxy ARP to be a problem here, you'd have to have your machine on a private address, and that private address would also have to be configured (and distributed via an IGP) on the ISP network, and vice-versa.
That scenario could lead to two private networks being bridged within an ISP inappropriately.
Of course, since tha
Re: (Score:2)
Not my router, so I need to defend against the screwed up config. I choose to do that through a combination of network setup WRT routing and using VLANs to keep traffic that shouldn't be there away from my maintenance net. If there should never be traffic from the uplink port to the maintenance net, just block it, just in case. Defense in depth.
As a side note, that's also why I avoid sharing the host port for the BMC once a box is in production. It has been handy figuring out what's wrong when helping hands turn o
Re: (Score:2)
The scenario of:
Bad actor has an RFC1918 address configured by the ISP on their interface, ISP is distributing this prefix via their IGP, and you are using an RFC1918 address on the segment connected to the ISP router, and it is responding to silly ARP requests- is not realistic. It's technically possible, but I'm quite confident it's never really happened.
The worst that has probably happened, is an ISP was using RFC1918 addresses on *their* gear, a
Re: (Score:2)
By virtue of being plugged into the same cable, the BMC is still connected to the internet, yes?
No. A cable merely sends frames. That is not internet access.
You could take precautions to prevent its IP range from routing out, but how many overlook that?
You have it entirely backwards.
Nothing is routed by default.
You have to take anti-precautions to make it insecure in the first place.
Re: (Score:2)
1) Correct, and many people VLAN tag their 'shared port' thinking it isolates it. Except anyone that can manipulate networking to participate on the BMC subnet could also tag their way in.
2) This is correct, *however*, the complaint largely applies to 'shared enabled by default', and oblivious users that plug in their solution without even knowing there's a BMC get exposed because that independent MAC just DHCPs and gets an address. Of course DHCP pool for *publicly routed IP addresses* is an incredibly bad idea, but it has happened, and further even in an 'internal network' you get screwed because of some other infiltration.
Re: (Score:2)
1) Correct, and many people VLAN tag their 'shared port' thinking it isolates it. Except anyone that can manipulate networking to participate on the BMC subnet could also tag their way in.
More specifically, the problem is not the layer-2 separation- the problem is the layer-3 availability.
Putting your RAC/BMC on a separate VLAN, but also with public access does fucking nothing.
2) This is correct, *however*, the complaint largely applies to 'shared enabled by default', and oblivious users that plug in their solution without even knowing there's a BMC get exposed because that independent MAC just DHCPs and gets an address. Of course DHCP pool for *publicly routed IP addresses* is an incredibly bad idea, but it has happened, and further even in an 'internal network' you get screwed because of some other infiltration.
Not only an incredibly stupid idea, but also not something that exists in datacenters.
I know. I operate 5 of them.
Of course, one could implement such a thing within their rack, and I can't stop them from being stupid.
I will say none of my BMCs even have a router set. CLI access is plenty for 99.9% of usage, and on the odd case that I need the web browser, ssh dynamic forward to get to the web interfaces. I think BMCs should just not have routers set; there's not enough good reason for them to be able to reach out, and you can use ssh to get you close enough to get in. For any alerting solution, you should have an entry point on the same subnet as the BMC if that matters. Vendors offer "call home" but is it really worth the router to get a support ticket open automatically when you'll have to manually deal with the vendor to handle the servicing *anyway*?
Na, fuck call home. It's called hire a competent network guy rather than trying to get your system s
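A minimal sketch of the two tunnel styles being debated in this subthread; hostnames and addresses are made up:

# SOCKS proxy for the odd case where the BMC's web UI is needed;
# afterwards, point the browser's SOCKS5 setting at localhost:1080.
ssh -D 1080 admin@jumphost.example.net

# Or a plain local forward, one BMC at a time, then browse to
# https://localhost:8443/
ssh -L 8443:10.20.0.15:443 admin@jumphost.example.net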
Re: (Score:2)
Not only an incredibly stupid idea, but also not something that exists in datacenters.
I've seen a *lot* of datacenters and *some* of them implementing precisely that brain dead of a concept. One time they didn't realize because they just had radvd running with their public ipv6 addresses, so they wouldn't notice a large use of them. One that had a class A all to themselves used public addresses for *everything*. Your five datacenters unfortunately do not represent all datacenters. Also, some 'tower' or 'edge' devices get deployed in random places by random people, so it's not strictly a 'rackmount' problem.
Re: (Score:2)
I've seen a *lot* of datacenters and *some* of them implementing precisely that brain dead of a concept. One time they didn't realize because they just had radvd running with their public ipv6 addresses, so they wouldn't notice a large use of them. One that had a class A all to themselves used public addresses for *everything*. Your five datacenters unfortunately do not represent all datacenters. Also, some 'tower' or 'edge' devices get deployed in random places by random people, so it's not strictly a 'rackmount' problem
I operate 5, and have PoPs in another 4.
I have monitoring nodes in another 10.
None of the large national companies do it.
We certainly don't do it, and we're the largest in our geographical region.
Not sure what you've seen, but it's pretty weird.
I'm quite certain that my experience is more representative than what yours is, but maybe I'm being cocky.
Can you name a DC provider that does this? It'd be a pretty interesting topic next time I'm at a conference.
Use of public addresses is not a problem. Rea
Re: (Score:2)
I'm quite certain that my experience is more representative than what yours is, but maybe I'm being cocky.
I'm in a position to deal with usually a half dozen companies at a given time, each operating on average 3 or 4 datacenters each, at scales up to about 30,000 servers a site down to some that have like a closet with not even a full rack of equipment. As I said, it's *rare*, but it does happen *and* has been a source of compromised lists when they kicked off a big internet scan a little over ten years ago. I don't know why you would insist that *no one* accidentally DHCPs a shared NIC BMC onto a network it shouldn't belong to, and that some even go so far as those to be routable. It's not about how many *you* operate, it's about the number of different operators. Whether you manage 10 or 100,000, of *course* your policies are going to be consistent. And it may sound 'cheap', but no way I can name and shame companies specifically.
Re: (Score:2)
I'm in a position to deal with usually a half dozen companies at a given time, each operating on average 3 or 4 datacenters each, at scales up to about 30,000 servers a site down to some that have like a closet with not even a full rack of equipment. As I said, it's *rare*, but it does happen *and* has been a source of compromised lists when they kicked off a big internet scan a little over ten years ago. I don't know why you would insist that *no one* accidentally DHCPs a shared NIC BMC onto a network it shouldn't belong to, and that some even go so far as those to be routable. It's not about how many *you* operate, it's about the number of different operators. Whether you manage 10 or 100,000, of *course* your policies are going to be consistent. And it may sound 'cheap', but no way I can name and shame companies specifically.
It doesn't sound cheap, it smells of bullshit.
You're making a claim of a practice that I have not seen in 20 years of working nationally in this business. I am at the top echelon of this business. If you're really in it, you have met me in person, likely at a conference.
The issue is not what the NIC BMC operator does, it's that no datacenter in the universe puts a fucking DHCP server on their routed interfaces facing their customers. It's fucking absurd.
Part of the issue were the servers showing up in scans made by completely unrelated, unprivileged people. They were hitting up address ranges and coming up with large numbers of IPMI reacting devices at the time.
Ya, this is because assholes put their IPMI devices
Re: (Score:2)
Re: (Score:2)
I am at the top echelon of this business.
Ok, good for you? Good that you know every person that ever is responsible for setting up infrastructure that might possibly have BMC devices in it and can unilaterally declare how they all do and don't do things. Someone who is top echelon but cannot handle ssh -D 4343 instead of ssh -L 4343:bmc:443 for some reason.
It's because people are operating equipment they're not qualified to operate.
And guess what? They still count. They still have their crap on the internet. First you say no one does it, *then* declare "yeah they do it, but they are stupid". I never argued that it was smart, I argued that it was a risk, and a risk for why 'shared NIC as vendor default' is a risky idea for the oblivious.
Re: (Score:2)
Ok, good for you? Good that you know every person that ever is responsible for setting infrastructure that might possibly have BMC devices in it and can unilaterally declare how they all do and don't do things. Someone who is top echelon but can not handle ssh -D 4343 instead of ssh -L 4343:bmc:443 for some reason.
You keep missing the point.
I definitely do *not* know everyone who runs a BMC device.
The fact that stupid people operate BMCs is not in question. It's empirically provable with a simple scan I can run right now across my networks.
None of those got their addresses via DHCP. They put those addresses on those devices, because they *wanted* them publicly accessible, because they knew no better.
And guess what? They still count. They still have their crap on the internet. First you say no one does it, *then* declare "yeah they do it, but they are stupid", I never argued that it was smart, I argued that it was a risk, and a risk for why 'shared NIC as vendor default" is a risky idea for the oblivious.
I never said no one does that. Read better.
I said no DC operator hands out DHCP on their router customer facing interfaces.
Re: (Score:2)
Someone who is top echelon but can not handle ssh -D 4343 instead of ssh -L 4343:bmc:443 for some reason.
You know how to ssh, but your only management tool is a web browser? Fascinating ;)
"ssh -D 4343" sets up a local SOCKS5 proxy.
Unless whatever you're using to traverse that tunnel supports SOCKS5, you can't use that tunnel.
That tunnel does make certain things like web browsing across it a much nicer experience.
But what if you need to telnet to the device? snmp query? ssh?
You can't do any of those things without something to proxy attempts aimed at loca
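One common workaround for tools that don't speak SOCKS5 themselves, assuming ncat is available on the client (hostnames are placeholders):

ssh -D 1080 admin@jumphost.example.net   # the SOCKS tunnel
# Wrap a further ssh session in the SOCKS5 tunnel via ProxyCommand:
ssh -o ProxyCommand='ncat --proxy 127.0.0.1:1080 --proxy-type socks5 %h %p' root@bmc-host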
Re: (Score:2)
I call bullshit on the iDRAC as well. I still have access to all versions of the iDRAC from version 6 on and they work just fine with a SOCKS proxy. We retired the R300 with iDRAC5 in 2022, but that's a 2007 machine. We only hung onto it for so long because it was cheap licensing for TSM to backup our GPFS file system. SAS card didn't work with RHEL8 so had to switch to a R330 with Pentium G4600 and that was pandemic delayed. Should have happened in 2020 but meh it was working and limited data centre acces
Re: (Score:2)
https://localhost:8443/cgi-bin... [localhost] - a DRAC5, right now, through a simple ssh -L tunnel. Happy as a clam.
Of course it uses a version of SSL that almost nothing supports anymore, so there is that.
Re: (Score:2)
I said no DC operator hands out DHCP on their router customer facing interfaces.
Note that list includes small businesses, universities, institutes. I'm going to *guess* you are saying all DCs are Colos or Cloud?
Older gen iDRAC that I saw was sensitive about port relocation, but not about proxy, or a name based virtualhost redirection. If you access as 'idrac:9443', at least the idrac I was dealing with threw a fit.
Re: (Score:2)
Note that list includes small businesses, universites, institutes. I'm going to *guess* you are saying all DCs are Colos or Cloud?
Colos, my friend.
I did not mean to include any group of people with a little room in their offices that has servers in it (hell, every one of our satellite offices has that).
Older gen iDRAC that I saw was sensitive about port relocation, but not about proxy, or a name based virtualhost redirection. If you access as 'idrac:9443', at least the idrac I was dealing with threw a fit.
I just tested on a DRAC5, no problem.
Don't know what you ran into, but if you're using something older than a DRAC5 (IS there something older than a DRAC5?!) then I'll personally donate to their upgrade fund.
Re: (Score:2)
But what if you need to telnet to the device? snmp query? ssh?
Why the hell would you do that through port forwarding instead of just doing it from the SSH target that would provide the proxy? The proxy is only a workaround for the occasionally stupid time a web browser is indicated.
If you are trying to say you have multiple ssh hops and that makes '-D' harder, wouldn't you use ProxyJump instead of manual port forwards (-J)?
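For reference, the multi-hop form alluded to here, with placeholder hosts:

# One-shot jump through two bastions, no manual forwards:
ssh -J jump1.example.net,jump2.example.net admin@bmc-host
# Or persistently, in ~/.ssh/config:
#   Host bmc-host
#       ProxyJump jump1.example.net,jump2.example.net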
Re: (Score:2)
Colos, my friend
So that's my whole damn point. The world isn't exclusively Colos. And those vendors sell into those non-colo datacenters. I will happily admit I've never met a Colo doing that crap, but that's hardly the entire world of purchasers of those devices.
Re: (Score:2)
Should all enterprise hardware focus on making sure some idiot can't do something stupid with it?
That sounds a little too much like the kind of thinking that has made computing itself trash.
Re: (Score:2)
Why the hell would you do that through port forwarding instead of just doing it from the SSH target that would provide the proxy? The proxy is only a workaround for the occasionally stupid time a web browser is indicated.
what? lol.
This is a silly question.
Say you've got a script that needs to configure an OLT with 1000 ONTs on it.
Say this script has a big runtime requirement. Do you then transfer that whole fucking thing over to a server, or do you merely proxy the connection?
If you are trying to say you have multiple ssh hops and that makes '-D' harder, wouldn't you use ProxyJump instead of manual port forwards (-J)?
It doesn't make -D harder, it's just pointing out that a SOCKS proxy has *1* application- SOCKS clients, which is very close to limited to 1 class of applications- web browsers.
SSH tunnels have far more utility than just web tunnels.
And y
Re: (Score:2)
Surprise, servers need to be configured correctly to avoid tears.
SM makes the BMC easily reachable on dedicated and shared port by default so you can order one in, have a remote person slap it in the rack and hook up cables and then you can set it up remotely.
I'm not a fan of using the shared port on a publicly reachable LAN since a compromised machine could be turned into an un-authorized jump box to the management VLAN, but there are times when it makes sense on a private network.
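The "configured correctly" step can be scripted in-band with standard ipmitool commands; the addresses below are placeholders, and forcing dedicated-versus-shared port mode is vendor-specific, so it is not shown:

ipmitool lan print 1                      # inspect current channel-1 settings
ipmitool lan set 1 ipsrc static           # stop DHCP from grabbing an address
ipmitool lan set 1 ipaddr 10.20.0.15      # management-net address
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 0.0.0.0   # no gateway: the BMC can't reach out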
Possible solution? (Score:2)
Send out a repair team (Score:2)
Do not buy Chinese laptops (Score:2, Interesting)
Bogus issue - use isolated net for BMC (Score:3)
That advice to isolate SP / BMC network interfaces, and certainly not put them on the public network, still applies today.
I mean, who would allow their SP / BMC to be remotely hackable?
Some computer forums have people asking how to access their SP / BMCs from the internet. Really? But to be fair, some of those asking how to make that happen want to use a VPN, which probably helps with the security.
Of course, the bug in BMC firmware today is probably not the only one. Just wait, another will be found. Thus, back to isolated network for SP / BMCs...
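A sketch of the VPN variant mentioned above, using WireGuard; every address and key is a placeholder:

# /etc/wireguard/wg0.conf on the management gateway:
#   [Interface]
#   Address    = 10.20.0.1/24          # BMC subnet, never routed publicly
#   ListenPort = 51820
#   PrivateKey = <gateway-private-key>
#
#   [Peer]                             # one block per authorized tech
#   PublicKey  = <tech-laptop-public-key>
#   AllowedIPs = 10.20.0.200/32
wg-quick up wg0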
And that is why... (Score:2)
You'd really want an open source solution for the remote management hardware.
The closed IPMI standard was replaced by Redfish, which was then implemented for many modern boards in OpenBMC
https://github.com/openbmc/ope... [github.com]
Unfortunately my older servers were not supported, but they are now mostly offline.
And for bonus points:
https://pikvm.org/ [pikvm.org]
PiKVM can enable you to have (almost) full out-of-band management with a secure system, and support the Redfish or other APIs to power cycle the systems (or you can install a r
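As a taste of the Redfish side, a minimal power-cycle request; the host and credentials are placeholders, and the system resource is named "system" on OpenBMC but varies by vendor:

curl -k -u admin:password \
    -H 'Content-Type: application/json' \
    -X POST 'https://bmc-host/redfish/v1/Systems/system/Actions/ComputerSystem.Reset' \
    -d '{"ResetType": "ForceRestart"}'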
Re: (Score:2)
This is a bit mixed up...
IPMI and Redfish are effectively similar in governance. With IPMI you had an ad-hoc group of contributors, though Intel owned publishing it. As a random person, your voice was not welcome. With Redfish, only dues paying members of the DMTF have a voice, but even if you pay to be heard, you are unlikely to be listened to. For example, at one point someone said to make VNC based remote video a standard. But some of the vendors charge extra for that, and at least one company must no
Obvious (Score:2)
This has been a pretty obvious vulnerability for many years. ... 'secured' by either a default password or a simple 8-character password based on the company name.
Embedded controllers have always had FAR more dangerous power
We all know it has never been updated.
We all know even the chips themselves have little management systems running in them.
These researchers just went to the trouble of proving it.
This is almost IG Nobel worthy.
Fake news (Score:2)
Re: (Score:2)
Yes, Supermicro (along with other MegaRAC-based BMC vendors) has had security nightmares.
On the implication of "little extra chips", that Bloomberg article was utter bullshit, that was never substantiated by anyone and at least one of the people they consulted came forward and said the authors were dumb and didn't understand what the hell he was saying (they basically asked him how small a surface mount component could be, he linked to something like a resistor, and then they used that as "photo of the spy
Re: (Score:2)
NO INTERNET!!!!!!!!! (Score:2)
Anyone who connects their BMC to the Internet needs to be defenestrated, then fired. Preferably from a cannon.
We used it to some degree at my last job. It was routed to internal addresses ONLY, and inaccessible from the outside.