Theft-as-a-Service: Blocking the Cybercrime Market

Nerval's Lobster writes "The same layers of virtualization that have made networked business computing so much more convenient and useful have also given bad guys much easier access to both physical and virtual servers within previously-secure datacenters. A group of engineering researchers from MIT has demonstrated one approach to making secure servers harder to attack, using a physical system that prevents attackers from reading a server's memory-access patterns to figure out where and how data are stored. Ascend, which the group demonstrated at the International Symposium on Computer Architecture in Tel Aviv in June (PDF), is designed to obscure both memory-access patterns and the length of time specific computations take, to keep attackers from learning enough to compromise the server. The approach goes beyond simply encrypting everything on the server: it tries to shut off one of the most direct ways attackers can probe the server — whether the server is an air-gapped high-security machine sitting in an alarmed and guarded room at the NSA or a departmental server whose security settings are a little too loose. Previous attempts to obscure memory-access patterns were built as applications running on the server. Ascend is the first proposed hardware-only approach, and the first to reach an acceptable level of performance, according to Srini Devadas, Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, the MIT researcher who oversaw the team developing the hardware."
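For readers wondering what "obscuring memory-access patterns" means in practice: the simplest (and slowest) way to hide which address a program touches is to touch every address on every access, so the observable trace is identical no matter what is actually being read. The sketch below is a toy Python illustration of that idea only — it is not Ascend's actual construction (the linked paper describes an ORAM-style scheme implemented in hardware), and the class name and interface are invented for illustration.

```python
class LinearScanORAM:
    """Toy 'oblivious' store: every read and write touches every slot,
    so an observer of the memory-access trace learns nothing about which
    index was requested. Illustrative only; real designs such as Path
    ORAM achieve polylogarithmic rather than linear overhead."""

    def __init__(self, values):
        self.slots = list(values)

    def read(self, index):
        result = None
        for i, v in enumerate(self.slots):  # scan every slot: the access
            if i == index:                  # pattern is independent of index
                result = v
        return result

    def write(self, index, value):
        for i in range(len(self.slots)):    # touch every slot on writes too
            if i == index:
                self.slots[i] = value
            else:
                self.slots[i] = self.slots[i]  # dummy write keeps the trace uniform

store = LinearScanORAM([10, 20, 30, 40])
print(store.read(2))  # → 30, but the trace looks the same for any index
```

The linear scan makes the cost of hiding access patterns obvious, which is why the constant-factor slowdown of a hardware scheme (the 13.5x figure discussed in the comments) is notable rather than absurd.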
  • by Shoten ( 260439 ) on Thursday July 04, 2013 @11:36AM (#44188317)

    The overwhelming majority of breaches are not exotic. It's been shown that 85% of recent breaches would have been stopped by four fundamental security processes: patching, antimalware (both signature-based and application whitelisting), and restriction of user access rights. Exotic hardware-based solutions to protect data in RAM do not help you when the application server itself has been compromised and the attacker has the same rights to the Oracle DB that your SAP instance has. I think it's great that people are working on defenses against these kinds of attacks, but the fact of the matter is that, given the way most organizations manage security, this is like getting vaccinated against anthrax when you're a guy who rides a motorcycle drunk without a helmet every day. It's dealing with the wrong risk.

    • Kind of like this Collapsible bike helmet [] made by a guy who rides a brakeless fixie in slip-on shoes. But what you say is really right. Most breaches come from just that: people getting remote desktop or SSH access to the servers. Weak passwords, services accessible directly from the internet, and other easily solvable problems mean that this kind of stuff just shouldn't happen. But it still does on quite a regular basis.
    • So, what, would you rather MIT's electrical engineers study the art of persuading lazy sysadmins to keep things updated? It's like going to a BBQ and complaining about the lack of veggie options.

      • by Shoten ( 260439 )

        So, what, would you rather MIT's electrical engineers study the art of persuading lazy sysadmins to keep things updated? It's like going to a BBQ and complaining about the lack of veggie options.

        I didn't go to the barbecue...the barbecue came to me. I'd rather MIT's engineers study ways to simplify the processes that are failing, rather than invent entirely new ones that don't solve the current problem as it exists. Yes, I do believe that science should actually serve a needed function, in the end. I don't think that having a Ph.D. makes someone a unique and special snowflake whose every effort is something we should cherish like a gift from God. They're framing this as practical science, meant to be applicable...but there's no problem to apply it to.

        • by icebike ( 68054 )

          They're framing this as practical science, meant to be applicable...but there's no problem to apply it to.

          To be fair, they are framing it as a solution to the problem of running sensitive data and programs on untrusted (cloud) computers beyond one's immediate control.

          That might make sense except they had to assume into existence some special hardware.

          They had to postulate the existence of a custom processor. And they had to accept a 13.5 times greater execution time imposed by such a processor.

          Where would this be used? I'm totally at a loss to speculate what kind of workload would justify it.

          • by Shoten ( 260439 )

            Several very good points...and in fact, those points are only the tip of the iceberg of why this solution makes no sense in that model. There are multiple security problems with multi-level security in cloud computing above and beyond what this would solve. And yes, I get the idea that maybe if all the problems are solved, cloud computing could be usable that way...except that nobody's *ever* solved the MLS challenge in a way that was usable. Trusted Solaris is as close as it got, and it was too hard to use.

    • by mianne ( 965568 )

      You are correct, and the reason why is key. You can keep everything up to date and lock down systems as tight as you want, but as long as any user has legitimate access to the system, there are weak links in the chain. If a user has access to the internet or a phone, they're susceptible to social engineering attacks. Email or web access in particular exposes the company to spear-phishing attacks. Access to I/O ports or removable media devices creates a potential attack vector.

      • by icebike ( 68054 )

        Social engineering and spear-phishing attacks are easier to protect against than hand-waving into existence a custom processor architecture, installed in untrusted computers running in an untrusted cloud, so that you could send encrypted data and programs to run there.

        In fact, by simply limiting what each legitimate user can do to JUST those things needed to do their job, you can cut off the vast majority of social engineering and spear-phishing opportunities. After that, education of the user is all that is required.

    • by kjs3 ( 601225 )
      True. The vast majority of postmortems my team has done show the compromised server was "low hanging fruit", usually because of poor patching. I wait patiently for the day that IT Ops people learn that the bullshit "why patch the box, it's stable?" attitude is going to screw them in the end. I may not live that long, however.
  • You mean that somehow some people outside your organization get inside the data center where you have your servers, and start to do live forensics to extract data that can't be gotten from the network — instead of, e.g., just unplugging and taking the servers, or backups, or any valuable hardware small enough to carry off. That kind of access and motivation mainly belongs to data center personnel (following orders from government agencies or not), or NSA/CIA/etc.-related people.

  • This is a solution to a very special problem: one program with cryptographic code running in a VM, and a hostile program running on the same host. There are some crypto algorithms which can be broken if you can feed them chosen inputs and watch how long they run or what cache misses they cause. This is very tough to do in the real world.

    It also comes up for crypto modules which do DRM for content owners. There, an attacker can watch the signals and interfere with the operation of the crypto unit to slowly extract the keys.

    • Thank you for making that clear. The Slashdot summary failed to explain what problem they were trying to fix.
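The timing leak described in this thread is easiest to see in a comparison routine that bails out at the first mismatch: an attacker who can measure how long the check takes learns how many leading bytes of a guess were correct, and can recover a secret byte by byte. A minimal Python illustration follows; the function names are invented for this sketch, while hmac.compare_digest is the standard library's constant-time comparison. (The attacks discussed above target hardware caches rather than Python code, but the leak mechanism is the same.)

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: runtime depends on how many leading
    bytes of the guess are correct, which leaks information to an
    attacker who can measure execution time."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False  # exits earlier for worse guesses
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Data-independent comparison via the stdlib helper; runtime
    does not depend on where (or whether) the inputs differ."""
    return hmac.compare_digest(secret, guess)

# Both agree on the answer; only the timing behavior differs.
assert leaky_equals(b"s3cret", b"s3cret")
assert not leaky_equals(b"s3cret", b"s3cre!")
assert constant_time_equals(b"s3cret", b"s3cret")
assert not constant_time_equals(b"s3cret", b"s3cre!")
```

The fix is the same in spirit as Ascend's: make the observable behavior (time, memory accesses) independent of the secret data.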
