Bitcoin, Security, Software, The Almighty Buck, Hardware

John McAfee's 'Unhackable' Bitfi Wallet Got Hacked -- Again (techcrunch.com) 108

Earlier this month, computer programmer John McAfee released "the world's first un-hackable storage for cryptocurrency & digital assets" -- a $120 device, called the Bitfi wallet, that McAfee claimed contained no software or storage. McAfee was so sure of its security that the device launched with a bug bounty inviting researchers to try to hack the wallet in return for a $250,000 award. Lo and behold, a researcher by the name of Andrew Tierney managed to hack the wallet, but Bitfi declined to pay out, arguing that the hack was outside the scope of the bounty. TechCrunch is now reporting that Tierney has managed to hack the Bitfi wallet again. An anonymous reader shares the report: Security researchers have now developed a second attack, which they say can obtain all the stored funds from an unmodified Bitfi wallet. The Android-powered $120 wallet relies on a user-generated secret phrase and a "salt" value -- like a phone number -- to cryptographically scramble the secret phrase. The idea is that the two unique values ensure that your funds remain secure. But the researchers say that the secret phrase and salt can be extracted, allowing private keys to be generated and the funds stolen. Using this "cold boot attack," it's possible to steal funds even when a Bitfi wallet is switched off. Within an hour of the researchers posting the video, Bitfi said in a tweeted statement that it has "hired an experienced security manager, who is confirming vulnerabilities that have been identified by researchers."
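
The report doesn't spell out Bitfi's exact derivation scheme, but the general pattern -- a secret phrase and a salt fed through a deterministic key-derivation function -- is why extracting both values is equivalent to stealing the key. A minimal Python sketch with purely illustrative parameters, not Bitfi's actual algorithm:

# Minimal sketch, not Bitfi's actual algorithm: derive a wallet private key
# deterministically from a secret phrase and a salt with a standard KDF.
# Anyone who recovers both inputs from device memory can regenerate the key.
import hashlib

def derive_private_key(phrase: str, salt: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt.encode(), 100_000, dklen=32)

private_key = derive_private_key("my secret phrase", "555-0100")
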
Comments Filter:
  • by OzPeter ( 195038 ) on Friday August 31, 2018 @09:05AM (#57230608)

    What more can you say?

  • What's the point of advertising bounties if you don't honor them?

    • What's the point of advertising bounties if you don't honor them?

      Keeping the money. Which makes much more sense if we're talking about your money....

      • by Duds ( 100634 )

        Especially as you'll need that money to pay off all the users who sue you after they're hacked.

    • by Desler ( 1608317 )

      McAfee probably spent the money on bath salts and couldn't pay out.

  • by bickerdyke ( 670000 ) on Friday August 31, 2018 @09:12AM (#57230628)

    No software and no storage?

    How is it supposed to store and encrypt anything?

    Is that the same McAfee who got stuck on some bad drugs a while ago and was in the news for some statements of similar sanity?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      yep, it's the same McAfee that was wanted for questioning by Belize law enforcement about the murder of his neighbor a few years ago. He fled the country claiming the Belize government was setting him up. He's also been accused of committing rape while in Belize, and just last year he fired several shots at the walls and floor of his residence while his wife was there because he thought the Belize government agents were there to kill him.

      The guy is a dangerous drugged out loon.

      • Andrew Tierney should be thankful John didn't just shoot him . . . (yet) If he wants to meet someplace dark and private to pay out the bounty, I recommend Andrew not go alone. McAfee is a scary guy even in the light.
    • by bluefoxlucid ( 723572 ) on Friday August 31, 2018 @09:33AM (#57230734) Homepage Journal

      Yes. It's the kind of confused ideal you get out of a lunatic with a large ego.

      I've been working on electronic voting machines and high-integrity elections. Do you know what that takes? You publish the image ahead of time; you image the machines at poll open while people observe, and then let them copy the read-only media to verify no tampering; you generate vote count statistics on the machines before copying the votes off to send them up to the board, ensuring we can all verify that the ballots observed are the ballots reported.

      That narrows the window of attack to the time between poll-open and poll-close. As long as you have public observers during that time, nobody can tamper with the machines. You have non-repudiation of the software, the machine's initial state, and the ballots as cast.
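
      The verification step itself is cheap; a minimal sketch, assuming the published commitment is simply a SHA-256 digest of the machine image (the file name and digest below are placeholders):

      import hashlib

      def file_sha256(path: str) -> str:
          # Hash the copied machine image so observers can compare it to the
          # digest published before election day.
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      PUBLISHED_DIGEST = "digest published ahead of the election"  # placeholder
      if file_sha256("evm_image.img") != PUBLISHED_DIGEST:
          raise SystemExit("machine image does not match the published image")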

      If you have no public observers, then bought-off election staff can enter the machines when nobody's around and modify the vote counts or the software loaded.

      I'm building on a model of using EVMs to encode ballot sheets onto smart cards, then take the smart card to an electronic ballot box which displays the ballots and allows you to cast them.

      The touch interface exposes approximately zero attack surface. You're putting boxes in order (ranked votes) or checking boxes. Besides that, separating the ballot box ties the entire attack surface to clicking "Accept" or "Reject" and to reading the data on the smart card.

      The ballot box itself has to deal with the smart card.

      That's tricky. On one hand, I can definitely validate input data and protect from smart card attacks: there will be no hacking by using a tampered smart card. On the other, someone could load a smart card with forged data and just stuff votes--which the vote count display overhead is supposed to prevent (one person goes in, count increases several times), but then what? Election judge comes in and voids the prior X ballots cast? We now have a method to edit votes during the election?

      We could have each EVM create a Curve25519 key pair and put the certificate onto a smart card, which we then copy into the ballot box. Once we confirm all machines are set up, that's it: no more keys added, no more EVMs can send votes. The small strip of data on the smart card has a cryptographic signature.

      Now: is your Ed25519 signature verification library vulnerable to attack when it's handed a malformed or maliciously crafted signature?

      Fortunately, we can audit these code paths heavily. They're small. They can perform strong validation. It's possible to guarantee you can't hack the ballot box by tampering with a smart card because only the EVMs have the encryption keys generated that morning (on the EVMs themselves) to sign the smart cards.
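
      A minimal sketch of that sign-then-verify flow, using the third-party Python cryptography package for Ed25519; the key distribution via card and the names here are illustrative assumptions, not the finished design:

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # On the EVM, during poll-open imaging: generate the day's key pair.
      evm_key = Ed25519PrivateKey.generate()
      evm_public = evm_key.public_key()  # carried to the ballot box on a card

      # On the EVM, when encoding a ballot sheet onto a smart card:
      ballot_bytes = b"encoded ballot sheet"
      signature = evm_key.sign(ballot_bytes)

      # On the ballot box, before accepting the card's contents:
      try:
          evm_public.verify(signature, ballot_bytes)
      except InvalidSignature:
          raise SystemExit("reject card: not signed by an enrolled EVM")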

      So long as you don't have a wireless chip (bluetooth/wifi), don't plug it into any kind of network, and don't let anyone physically tamper with the machine during voting, it's unhackable.

      You have to remove the attacker to make it unhackable. If someone can attack it, you have no way to guarantee they can't successfully attack it.

      Why do you think I worry about the signature verification code path? That's the single uncontrolled attack vector. The defense there is to "make sure that 30 lines of code is correct". Cringe.

      People with incredible hubris declare they have made an unhackable network server or keychain device; then they get swarmed by gremlins prying into every seam they can find.

      • Re: (Score:2, Insightful)

        by Immerman ( 2627577 )

        What if your smart card is corrupted in such a manner as to exploit a flaw in your data-reading routines to corrupt the software itself? That's a notoriously vulnerable attack surface right there - we're *still* finding new ways to compromise data loading routines for common formats that are decades old, though you could hopefully simplify the format to . Heck, the smart-card formatting itself could be corrupted, attacking through the OS instead of the voting software. (Though I find it unconscionable that any voting machine would incorporate the huge attack surface of an OS in the first place.)

        • by bluefoxlucid ( 723572 ) on Friday August 31, 2018 @11:46AM (#57231590) Homepage Journal

          What if your smart card is corrupted in such a manner as to exploit a flaw in your data-reading routines to corrupt the software itself?

          Not feasible. Either the driver would fail to read or the software would receive data which fails to validate. Computers aren't attacked like physical things: you don't rust through a pipe to break in; you feed the machine something that triggers bad logic in a wholly-functional subroutine.

          we're *still* finding new ways to compromise data loading routines for common formats that are decades old, though you could hopefully simplify the format to

          Rigid validation. Many data formats are incredibly complex; a simple, predictable format would be used for ballot data.

          the smart-card formatting itself could be corrupted, attacking through the OS instead of the voting software.

          Smart cards have a sort of protocol where you get raw data. It's not a 2GB SD card with a file system; it holds a few kB at best. That passes directly through without being processed by the OS; and if there are packet length specifiers and the like, you can follow the code path in the driver and ensure you have predicted the length of each data packet, allocated a buffer of that size, and copied only that many bytes.
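
          A minimal sketch of that kind of length-checked read, assuming a simple length-prefixed record layout (illustrative only, not any real smart card protocol):

          import struct

          MAX_RECORD = 512  # largest record the ballot format ever allows

          def read_record(buf: bytes, offset: int) -> tuple[bytes, int]:
              # Validate the length field before copying, so a hostile value can
              # never make us read past the buffer or allocate something huge.
              if offset + 2 > len(buf):
                  raise ValueError("truncated length field")
              (length,) = struct.unpack_from(">H", buf, offset)
              if length > MAX_RECORD or offset + 2 + length > len(buf):
                  raise ValueError("record length out of bounds")
              return buf[offset + 2 : offset + 2 + length], offset + 2 + length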

          Though I find it unconscionable that any voting machine would incorporate the huge attack surface of an OS in the first place

          You'd have to. An OS handles inputs and outputs, along with process scheduling. If you didn't use an OS, you'd have to write a lot of extremely-complex stuff from scratch to control the system. You're then introducing the same kind of attack surface, but with less vetting, thus more risk.

          Just because an acceptable solution isn't readily apparent is no reason to avoid exposing the fact that your system has been compromised

          Whenever you take an action to intervene--if you step in and un-stuff ballots when someone sneaks extra smart cards in--you have a chance to pull some sleight-of-hand and tamper with votes. Prevention preserves integrity.

          here's a possibility: the oversight officials have to push a "commit" button periodically to commit the last N temporarily recorded votes to the permanent record - or a cancel button to invalidate them.

          That creates bottlenecks, and people who leave won't necessarily know their votes were confirmed. You're opening the door to official intervention, which opens the door to tampering.

          why are you trying to do electronic voting at all? What _exactly_ is the point?

          Reduces the number of attack points and allows us to retain election integrity.

          Paper ballots are unhackable,

          Paper ballots are routinely altered, thrown out, lost, found, manufactured, and otherwise tampered with. Thousands of votes go uncounted, or appear from somewhere along the way, all the time. Election judges get to decide if a ballot is valid based on whether it has a smudge, scratch, stray mark, or anything else.

          It's *far* easier to secure a physical ballot box, and doesn't take that long to count - especially since the volunteer pool for vote counters is directly proportional to the number of voters

          A pool from which you can find a few volunteers, maneuver them into the counting, and have them manipulate the error rate. It's done all the time.

          Just like that we've solved almost all the existing problems with paper ballots

          Not at all. We have undervote and overvote problems, spoiled ballots, manipulation of the ballots, and of course the complexities introduced by ranked systems which resist tactical manipulation (and can be hard to follow during the count, thus allowing for further manipulation of the vote).

          you've not once trusted a computer without verifying its wo

            I'm a programmer, I've got a pretty good idea how software works - and if you say it's not feasible for carefully crafted data to invoke unintended behavior in the data-reading routines and take control of the software, then you have no business even attempting to build such a product. But yes - a sufficiently simple file format with sufficiently robust validation should make that difficult - just make sure your validation tool is at least as robust as your data parsing routine.

            >An OS handles inputs

            • if you say it's not feasible for carefully crafted data to invoke unintended behavior in the data-reading routines and take control of the software, then you have no business even attempting to build such a product.

              Let's see.

              if (dataFile.Length < GuidSize*3) throw new InvalidDataException();
              Guid pollingLocation = new Guid(dataFile.PopBytes(GuidSize));
              if (pollingLocation != this.PollingLocation) throw new LocationAuthorizationException();
              Guid voter = new Guid(dataFile.PopBytes(GuidSize));
              if (this.Election.Voters.Contains(voter)) throw new AlreadyVotedException();
              // Get empty BallotSheet containing correct races. Throws an exception if invalid ballot sheet.
              BallotSheet ballotSheet = this.Election.CreateBallotS

              • Yes - while I'm not going to do an in-depth analysis of your sample code, a simple enough format, and a rigorous enough auditing of the code, should make it possible to approach 100% confidence that it's not a potential attack vector.

                It has to handle software interrupts, display systems, storage routines, and the like. You also have to have a way to write this complex, graphical software in a manner which is human-maintainable and as little prone to software flaws as possible--and we all know software always has flaws.

                Why? You're thinking like a Windows programmer who's grown accustomed to writing code for a general-purpose OS. What do you need interrupts for that couldn't be handled with polling? Your system/software storage can be EEPROM, or similarly configured flash, easily accessed via mapping into memory space (many embedded systems do it that way).

                • Why? You're thinking like a Windows programmer who's grown accustomed to writing code for a general-purpose OS. What do you need interrupts for that couldn't be handled with polling?

                  Well, the touch-screen device, keyboard, or what have you raises interrupts when there is data. You can't poll everything, you know.

                  Your system/software storage can be EEPROM, or similarly configured flash, easily accessed via mapping into memory space (many embedded systems do it that way).

                  Oh, yes, and we can swap the EEPROM chip out before and after the election so nobody knows we used a tampered system.

                  And what is this complex, graphical software you're talking about? You're recording ballots, not simulating physics or playing video games. And graphical? You're planning to include photos of the candidates?

                  You're drawing buttons and inputs on-screen. It's actually not a trivial process.

                  And, yes, you have photos of the candidates, along with clips of their voice. It's necessary for impaired voters: we allow the illiterate, the deaf, and the blind to vote.

                  No, it's insurance against ballot stuffing. Without that, if someone manages to post even one extra vote (slips in an extra SD card or whatever), how do you invalidate those votes?

                  That'

                  • So don't use a device that raises interrupts - do away with the USB bus and access the hardware directly, lots of devices do that. Or limit yourself to an extremely bare-bones interrupt handler - think DOS as the operating system, not Windows. If your total OS is more than a few tens of KB you've added a massive amount of unnecessary vulnerabilities.

                    Seems like you agree there's a ballot-stuffing risk, so how do you address it? I gave you one example that would be very effective and minimally cumbersome, which you don't like. So what's your solution?

                    • Seems like you agree there's a ballot-stuffing risk, so how do you address it? I gave you one example that would be very effective and minimally cumbersome, which you don't like. So what's your solution?

                      The only way to control ballot stuffing is public observation. Paper ballots allow collusion to evade this.

                      In an electronic system, you can use a handling chain of HMACs and digital signatures. That means the election judge has to put a card into the electronic ballot box during initial configuration after imaging, and carry the card to each voting machine to do the same, then back to the EBB. A card for each of the election staff also must be inserted into each EVM at imaging, that one being the type
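
                      A minimal sketch of such a chain (the key handling and record format are illustrative assumptions, not the design above): each accepted ballot record is MACed together with the previous tag, so a record stuffed in out of band breaks every tag after it.

                      import hashlib
                      import hmac

                      def chain_tag(key: bytes, prev_tag: bytes, record: bytes) -> bytes:
                          # Each tag commits to the record and to the whole history before it.
                          return hmac.new(key, prev_tag + record, hashlib.sha256).digest()

                      key = b"shared secret installed from the judge's card at setup"
                      tag = b"\x00" * 32  # genesis value fixed at configuration
                      for record in [b"ballot-1", b"ballot-2", b"ballot-3"]:
                          tag = chain_tag(key, tag, record)  # the EBB stores (record, tag) pairs
                      # Auditors recompute the chain; a stuffed or dropped record changes every later tag.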

      • If you write electronic voting machine software, you're a traitor, no ifs, ands, or buts.
        • by davidwr ( 791652 )

          If you write electronic voting machine software, you're a traitor, no ifs, ands, or buts.

          I see where you are coming from but I disagree.

          * There are elections that are not secret ballot, such as votes in Congress which are almost always done either electronically or by voice vote.
          * There are elections that are almost entirely done by proxy, such as stockholder elections.
          * There are electronic voting systems that are nothing more than a computer-assisted counter of a paper ballot.
          * In theory (and hopefully in practice) there are electronic voting systems that are nothing more than a print-on-demand ballot-printer for a paper ballot or a computer-assisted vote-marker of a paper ballot.

          • Computers can be hacked; all computers are insecure given enough time (or secret knowledge, or collusion on the part of vendors). Using them for elections is irredeemable in the eyes of any sane person.
            • by davidwr ( 791652 )

              Computers can be hacked; all computers are insecure given enough time (or secret knowledge, or collusion on the part of vendors). Using them for elections is irredeemable in the eyes of any sane person.

              So can people-based systems. Think bribery, blackmail, having a partisan spy in the process, etc. By having both computers and humans doing the vote-counting in parallel - using computers to count paper ballots to provide "instant results" with human auditing in the days that follow - it's harder to hack the vote count. If the computer is accurate but the people are cheating, the discrepancy will be noticed. If the computer is hacked but the people are honest and accurate, the discrepancy will be noticed.

              • People are more satisfying to punish when caught. If there's a hack on a computer not only may you never find the culprit, but the manufacturer, developers, etc have enough leniency to not go immediately to prison for life, which is wrong.
              • If the computer is accurate but the people are cheating, the discrepancy will be noticed. If the computer is hacked but the people are honest and accurate, the discrepancy will be noticed.

                How do you tell who is cheating? Did you introduce fake paper ballots, or did the computer drop some ballots? Did you manage to "lose" some ballots, or did the computer? Did the computer record votes but not print ballots for them so that people would discard those votes as computer error/tampering when they were real votes?

                You might notice the discrepancy, but how will you correct the errors?

                Paper ballot verification is security theater. Language is weird: an electronic paper trail is stronger tha

            • or secret knowledge, or collusion on the part of vendors

              I've removed the possibility of collusion. That's the problem I'm actually attacking.

              Computers can be hacked, all computers are insecure

              Hacking requires attack surface. That means you need to accept complex, user-controlled input. Wireless and wired networks immediately create an uncontrolled source of attack surface, and so are impossible to secure.

              The required surface exposure is actually fully-auditable in reasonable human time. The biggest part is access: a voter has to be identified to a voting machine to vote. You can use an EVM to cast vote

          • There are elections that are not secret ballot

            EVMs must not expose who cast what ballot.

            In theory (and hopefully in practice) there are electronic voting systems that are nothing more than a print-on-demand ballot-printer for a paper ballot or a computer-assisted vote-marker of a paper ballot.

            Paper ballots create a route for spoofing. Print-on-demand paper ballots imply easy ballot forgery.

            because the actual ballot is seen and cast by the voter and is inherently audit-able, it is no more or less "hack-able" than a pure paper-ballot system would be

            Paper ballots are lost, found, altered, and simply ignored all the time. They're ripe for electoral fraud; the trick is they're generally ripe for fraud by the election staff, not the voters. Paper ballots are made with all kinds of security against forgery so the voter can't sneak in counterfeit ballots to stuff ballot boxes.

            You can make an electronic voting s

      • I like your thoroughness in design but I'll accept a little bit of insecurity in my voting machines if I know I can audit them. See https://en.wikipedia.org/wiki/... [wikipedia.org] This means I can check that my vote was cast and counted correctly. Assuming some people check their personal vote then the probability of multiple invalid votes being cast or votes being altered becomes vanishingly small.
        • Well I have $10,000 and can buy votes for around $5 each. You can prove you voted for the candidate for whom I paid you to vote, correct? Show me your vote.

          E2E systems don't provide election security. It's possible to verify a voter's individual vote as valid while stuffing votes for people who didn't vote. Further, the system has to be able to track and identify each voter's vote--and allow proof of vote--to function, which allows vote-buying, coercion, and the like. Scratch-and-vote and three-vote

    • by Anonymous Coward

      It uses a special cryptographic bath salt to generate totally insane random numbers

    • by Immerman ( 2627577 ) on Friday August 31, 2018 @09:37AM (#57230764)

      How about ""hired an experienced security manager, who is confirming vulnerabilities..."

      If you're trying to make the world's first unhackable device, how exactly is such a person not already a primary member of your team?

      • by iserlohn ( 49556 ) on Friday August 31, 2018 @09:48AM (#57230824) Homepage

        Not only that, they seem to be missing the basics of PR. How hard is it to phrase it as - "We have hired an external security expert to independently verify the reported vulnerabilities" ?

        • by dissy ( 172727 )

          Not only that, they seem to be missing the basics of PR. How hard is it to phrase it as - "We have hired an external security expert to independently verify the reported vulnerabilities" ?

          Apparently about as hard as saying they will pay the bug bounty as promised.

          Not that I would suggest anyone carry out this (probably illegal) action, but it would be pretty hilarious if this story ended instead with:
          "TechCrunch is now reporting that the researcher has managed to hack the Bitfi wallet again, this time extracting the exact amount of the bug bounty from McAfee's own funds"

  • But that's what you get from most of the "bounty programs" these days. They have no honor.
    • by gweihir ( 88907 )

      Honor is exactly the problem. Not the only place where that can be observed, though. For a bug-bounty program to be successful, you need to be generous with the payouts. You need to recognize, applaud and reward the people that hand things in, even if they are not quite in scope. It is still far cheaper than paying real security experts to do an in-depth evaluation. But if you give the impression that it is questionable whether you will pay at all, people will just go elsewhere and the whole thing becomes ma

  • by Anonymous Coward

    How can you do a project like this without an experienced security manager on the team? This statement, to me, is a huge red flag about how they develop products.

    • Simple, you are John "insane in the membrane" McAfee. This guy is so full of shit I can smell it from here, and also the scent of blow and jizz wafting off of a few trans hookers
  • by jellomizer ( 103300 ) on Friday August 31, 2018 @09:17AM (#57230656)

    If it is designed so that a computer (a man-made machine) can read and decrypt the data to be displayed and used, then there is a way to hack it. The best we can get is making it secure enough that mass-producing the hack is impossible or simply too expensive, and that performing the hack is a time-consuming process.

  • ...but not having one on board didn't stop them from calling their device unhackable.
    • by OzPeter ( 195038 )

      ...but not having one on board didn't stop them from calling their device unhackable.

      You do know who was making the claims, don't you? He doesn't exactly have a stellar relationship with the truth.

  • How did the old truism go again? As soon as the hacker has access to the hardware, you've LOST.

    • Except this isn't true. There is hardware that self destructs if you try to physically tamper with it. It is mostly accurate, but not in every case*. * No pun intended
      • Except this isn't true. There is hardware that self destructs if you try to physically tamper with it. It is mostly accurate, but not in every case*. * No pun intended

        Except that there is no such thing as tamper proof short of the device being a one-off version, which doesn't bode well for backups, etc. For example, if you have a "tamper proof" box but it is mass produced (which everything is today) then someone, with enough time, effort, and money, can find a way around the "tamper proof" mechanism. Then all they need is physical access...

        • OK. Great. Now go back and read the OP. The claim wasn't that a select few very determined people *might* succeed. It was that as soon as *any* hacker gains access you have *immediately* lost. It helps if you read the posts and think a bit before replying.
          • OK. Great. Now go back and read the OP. The claim wasn't that a select few very determined people *might* succeed. It was that as soon as *any* hacker gains access you have *immediately* lost. It helps if you read the posts and think a bit before replying.

            I see that you misunderstood my post. My post wasn't at all about the reward. I agree that the researcher should have received the reward.

            My post was only about refuting the assertion that a self-destruct device would thwart a determined attacker with physical access to a device.

  • All I'm saying is, Tierney needs to make sure that McAfee doesn't move in next door. We all know how that turns out.
  • Who in the world took him seriously?
  • If you walk around with a physical crypto-wallet, somebody is going to forcibly take it from you and worry about getting to the contents later. It doesn't really matter whether it is "hackable" or not, because once somebody steals the wallet, you don't have the crypto-currency anymore. Even if it were "unhackable" (probably a laughable statement), it's like walking around with a locked briefcase full of cash. Everybody can see you have it; if you get robbed, you're out the money, even if the perpetrator ne
  • I don't own a bitcoin wallet, so that says it all regarding my competence, but what about buying - for about the same price - one of those open-source hardware, open-source software keys that the German company Nitrokey builds? They were originally for storing cryptographic signatures, but now they carry gigabytes of encrypted storage on various internal volumes, one of them hidden with plausible deniability.
    H.

  • by Anonymous Coward

    More like ShitFi Wallet, amiright?!?

    If I were the security researcher in this story, I would just publish every hack of anything McAfee as a zero-day, and tell McAfee that that will stop when they pay the promised bug bounty... on BOTH bugs, (or all of them,) with interest.

    The interest I would charge would be 100% per day. Each. Meaning, pay now, because tomorrow will cost you double. Oh, and I would apply continuously compounding interest.

    Also, as an aside, I am never using anything in any way connected to
