Google and Intel Warn of High-Severity Bluetooth Security Bug In Linux (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: Google and Intel are warning of a high-severity Bluetooth flaw in all but the most recent version of the Linux kernel. While a Google researcher said the bug allows seamless code execution by attackers within Bluetooth range, Intel is characterizing the flaw as providing an escalation of privileges or the disclosure of information. The flaw resides in BlueZ, the software stack that by default implements all Bluetooth core protocols and layers for Linux. Besides Linux laptops, it's used in many consumer and industrial Internet-of-Things devices. BlueZ works with Linux kernel versions 2.4.6 and later. So far, little is known about BleedingTooth, the name given to the vulnerability by Google engineer Andy Nguyen, who said that a blog post will be published "soon." A Twitter thread and a YouTube video provide the most detail, and give the impression that the bug offers a reliable way for nearby attackers to execute malicious code of their choice on vulnerable Linux devices that use BlueZ for Bluetooth.
Intel, meanwhile, has issued a bare-bones advisory that categorizes the flaw as a privilege-escalation or information-disclosure vulnerability. The advisory assigned a severity score of 8.3 out of a possible 10 to CVE-2020-12351, one of three distinct bugs that comprise BleedingTooth. "Potential security vulnerabilities in BlueZ may allow escalation of privilege or information disclosure," the advisory states. "BlueZ is releasing Linux kernel fixes to address these potential vulnerabilities." Intel, which is a primary contributor to the BlueZ open source project, said that the most effective way to patch the vulnerabilities is to update to Linux kernel version 5.9, which was published on Sunday. Those who can't upgrade to version 5.9 can install a series of kernel patches the advisory links to. Maintainers of BlueZ didn't immediately respond to emails asking for additional details about the vulnerability. Ars Technica points out that since BleedingTooth requires proximity to a vulnerable device, there's not much reason for most people to worry about it. "It also requires highly specialized knowledge and works on only a tiny fraction of the world's Bluetooth devices," it adds.
Finally, a use for systemd (Score:4, Funny)
sudo systemctl stop bluetooth
Re: (Score:3)
Re: (Score:3)
sudo systemctl disable bluetooth
The stop command will only last until the next reboot.
Re:Finally, a use for systemd (Score:5, Funny)
It's always a good idea to test a temporary action before jumping into a permanent one. ... especially if you're typing on a bluetooth keyboard.
Re: (Score:2)
The stop command will only last until the next reboot.
$ uptime
08:58:34 up 622 days, 23:01, 1 user, load average: 0.00, 0.00, 0.00
Re: (Score:3)
Typed that, and now my keyboard and mouse no longer work. Do you have any solution? Because I heard that Linux machines don't get turned off and on like Windows machines do... ;-)
Guess that is why (Score:3)
I normally use wired everything. But I am starting to play with Bluetooth on a Pi, so this may be an issue.
Bluetooth is a Mess on Every Platform (Score:1)
Re: (Score:2)
I'd rather just plug in a cable to transfer files, really. If I have to do it over a wireless network, it'd be 802.11something.
Re: (Score:2)
> And now for the rest of the mess?
The news from a couple weeks ago is that Bluetooth's pairing mechanism is irreparably broken when Bluetooth LE is available.
Some researchers did the work to run it through a formal verifier, and bad news came out the other end.
Whoopsie, but also this is a recurring and very dangerous problem. Bluetooth and WiFi are designed in secret by an industry consortium and they're always broken (WiFi 6 is insecure out of the gate; wait for the update). The IETF and RFC process is
shite (Score:2, Troll)
Re: (Score:2)
Re: (Score:2)
Spoken like someone who hasn't used a modern bluetooth audio device. Yes, if you use the lowest-common-denominator SBC codec you get bitcrushed garbage, just like in the early days of low-bitrate MP3. Did you try to tell the world that MP3 was useless and should be set on fire then, and someone subsequently told you to turn on VBR transcoding and all of a sudden it wasn't artifacted and terrible?
This is the same thing. Try using AAC or AptX encoding over the bluetooth link which you can find in most stuf
Risk assessment requires two variables (Score:4, Informative)
I'm always slightly annoyed by those articles about "severe bugs" found in this or that software stack. To do a proper risk analysis [advisera.com], you need to know at least the impact and the probability of occurrence. Sensationalist articles only ever report severity and nothing else.
In this case, the risk is low.
Re:Risk assessment requires two variables (Score:5, Informative)
As you may know, the CVSS score includes the likelihood of exploit. Somebody with knowledge of the vulnerability scored it on CVSS and it got an 8.3 (out of 10). That's pretty high.
A CVSS 10 would be last month's Active Directory vulnerability that allows just anyone to become domain admin and completely destroy your entire company. So 8.3 is pretty high, if that rating is accurate.
"If" is a big word here because very little information has been released at this point. I'll be curious to see what comes out in the next few days.
Re: (Score:3, Insightful)
Re: (Score:1)
> Lastly, the risk is high, because tacked on extensions are usually hurried to market with inadequate testing.
It's exacerbated by object oriented programming, which programmers are discouraged or even explicitly discouraged from reviewing the other layers of the stack for violations of their published API or for the mismatches between the specification and the actual implementation. Extremely narrow focus on very limited tasks can be productive, but the person or people who understand interactions ofte
Re: (Score:3, Informative)
It's exacerbated by object oriented programming, which programmers are discouraged or even explicitly discouraged from reviewing the other layers of the stack for violations of their published API or for the mismatches between the specification and the actual implementation.
What? In what respect does OOP differ from any other programming paradigm when it comes to using other layers / libraries? Regardless of whether I am using C, assembler, C++, C# or F#, I expect that functions / APIs / libraries / OS calls fulfil their promise. If I had to verify the whole stack including glibc and the kernel each time I wrote code, I wouldn't get much actual coding done. The BT code in question is written in C (not object oriented) and the bug is due to uninitialized stack variables and making a callback wit
Re:Risk assessment requires two variables (Score:4, Insightful)
> In what respect does OOP differ from any other programming paradigm when it comes to using other layers / libraries
It's the insistence that there is no point to looking at other layers, and the active punishment of programmers or students who dare to look at the other layers. It encourages unit testing that tests only the one desired new feature, and doesn't test the interaction with the rest of the stack.
Re: (Score:2)
> In what respect does OOP differ from any other programming paradigm when it comes to using other layers / libraries
It's the insistence that there is no point to looking at other layers, and the active punishment of programmers or students who dare to look at the other layers. It encourages unit testing that tests only the one desired new feature, and doesn't test the interaction with the rest of the stack.
This is not OOP's problem. This is C++/Java's problem. C++ doesn't really qualify as an OOPL.
Re: (Score:2)
Re: (Score:2)
That sounds more like a management/cultural issue. There isn't anything in OOP that either encourages or discourages looking past the API of the layer you are interacting with.
You can just as easily just assume the functions you are calling do what they say they do correctly with procedural or functional programming styles.
Re: (Score:2)
> You can just as easily just assume the functions you are calling do what they say
That is _precisely_ the problem. I've been running into this approach ever since "object oriented" programming became popular: developers and programmers are taught, sometimes even forcefully, not to look past the nearest level of abstraction.
Re: (Score:2)
But that's not an OOP problem. It happens for functional and procedural programming as well.
Re:Risk assessment requires two variables (Score:5, Informative)
Re: (Score:2)
And every C compiler has an uninitialized-variable warning, usually along the lines of "variable might be used uninitialized here". Granted, that's all the
Re: (Score:2)
We don't have enough detail to make our own risk assessment so can only take Google and Intel's word for it. From what they have released it sounds severe. A remote attacker within Bluetooth range (hundreds of meters with a decent antenna, more with the right equipment) has remote code execution and privilege escalation capability.
But it's all open source. (Score:1)
couldn't they just... uh... fix it?
I get why these security advisories exist for other OS's, since they can't just be downloaded and patched.
But this is Linux's reason for existing.
Re: (Score:2)
Contact Tracing (Score:2)
It's a good thing we haven't recently been asking people to continuously enable Bluetooth on their phones to enable contact tracing...
Weeks before is better than seven years after (Score:5, Insightful)
ESR didn't say "bugs don't exist," if that's what you thought. He didn't say "there are never any bugs in open source software."
Around that time, Internet Explorer had a known issue that Microsoft had listed on MSDN for two years, with no fix. Two years after publication of CATB, four years after the issue was known, Microsoft released a partial fix - because they couldn't figure out how to actually fix it. It was another three years before the responsible team at Microsoft finally fixed the bug. Seven years from finding the bug to a proper fix.
The "many eyes" quote you referred to is:
--
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
Or, less formally, "Given enough eyeballs, all bugs are shallow." I dub this: "Linus's Law."
My original formulation was that every problem "will be transparent to somebody." Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. "Somebody finds the problem," he says, "and ***somebody else understands it.***"
--
When Shellshock came out, just on one mailing list alone there were about 150 of us looking at it and trying to find the best solution. People were proposing different patches and adjustments to the functionality. We were digging deep into the problem. A few hours after Shellshock came out, Florian Weimer said on the list that the issue could not be "fixed", the feature couldn't be patched to make it safe. He said the feature needed to just be disabled, removed, because it could not be made safe. Over the next two days several people submitted patches to make it safe. For every suggested patch, someone found a way around it. None of the patches made it secure.
About 2 1/2 days in, it became apparent to everyone that every patch was bound to fail; we started to see why you simply couldn't have that function and be secure. We started to see what Florian had seen immediately. We had been digging deep, trying to understand the implications of every possible change. For Florian it was shallow; the fix was obvious to someone, and that time the someone was Florian. "Given enough eyeballs, all bugs are shallow; the fix will be obvious to someone." Not "all bugs don't exist."
Contrast the 2 1/2 days to come to a thorough understanding and proper fix for the bug in the open source vs the seven years the issue languished in IE, with a broken half-fix for three years.
For the current issue, it was fixed last month and revealed yesterday. "Problem will be characterized quickly" - I'd call several weeks ahead of time quickly.
Re: (Score:2)
Re: (Score:2)
Again, nobody said there aren't bugs. That idea exists only in your mind. You have thought up a false idea all on your own, and realized it was false.
The point ESR made is that it doesn't take seven years to figure out how to fix it. It also didn't go three years with a wrong "fix" that didn't actually fix anything. That's because, with hundreds of people looking at it, for one person (Florian, that time) the fundamental problem and the needed fix were intuitively obvious, while the rest of us were trying to fi