
Transforming the Web Into a Transparent 'HTTPA' Database

Posted by timothy
from the security-still-needed-note dept.
An anonymous reader writes: MIT researchers believe the solution to misuse and leakage of private data is more transparency and auditability, not adding new layers of security. Traditional approaches make it hard, if not impossible, to share data for useful purposes, such as in healthcare. Enter HTTPA: HTTP with Accountability.
From the article: "With HTTPA, each item of private data would be assigned its own uniform resource identifier (URI), a component of the Semantic Web that, researchers say, would convert the Web from a collection of searchable text files into a giant database. Every time the server transmitted a piece of sensitive data, it would also send a description of the restrictions on the data’s use. And it would also log the transaction, using the URI, in a network of encrypted servers."
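The mechanism quoted above can be sketched in a few lines. This is a hypothetical illustration, not code from the MIT paper: the URI scheme, field names, and the local list standing in for the "network of encrypted servers" are all invented for the example.

```python
# Sketch of an HTTPA-style response: data travels with its usage
# restrictions, and every transmission is logged against the item's URI.
import json
import uuid

def serve_sensitive_item(record, restrictions, audit_log):
    """Return a data item plus its restrictions, logging the transfer."""
    # Each item of private data gets its own URI (scheme is made up here).
    data_uri = f"httpa://example.org/records/{uuid.uuid4()}"
    response = {
        "data": record,
        "data_uri": data_uri,
        # The restrictions accompany the data on every transmission.
        "usage_restrictions": restrictions,
    }
    # In HTTPA this log entry would go to a network of encrypted audit
    # servers; a plain list stands in for that here.
    audit_log.append({"uri": data_uri, "restrictions": restrictions})
    return json.dumps(response)

log = []
reply = serve_sensitive_item(
    {"patient": "anon-123", "diagnosis": "..."},
    ["healthcare-use-only", "no-redistribution"],
    log,
)
```

Note that nothing in this exchange *enforces* the restrictions on the receiving side, which is exactly the point the commenters below pick at.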
  • by jhantin (252660) on Saturday June 14, 2014 @07:50PM (#47238427)

    All I see here is a bunch of stuff that all depends on trusted third parties... and in security circles, "trusted" means "can screw you over if they act against your interests". In this case it relies on trusted identity providers, labeled 'Verification Agent' in the paper.

    It all breaks down if a verification agent is compromised, and the breach of even a single identity can have severe consequences that the accountability system cannot trace once information is in the hands of bad actors.

    The authors effectively admit that the entire mechanism relies on the honor system; it explicitly does not strictly enforce access control, since in the medical context denying access may stand between life and death.

    Finally, the deliberate gathering of all this information-flow metadata would add another layer to the panopticon the net is turning into.

  • by NotInHere (3654617) on Saturday June 14, 2014 @08:00PM (#47238473)

    The original paper [mit.edu] has examples where such a DRM-based system has legitimate uses. One was patient data. If you want to eliminate special client software there, you can use this system and run everything in the browser. The system abstracts and standardizes access control, which is hopefully already present, and helps close holes in the implementation. For an intranet the model makes perfect sense; deployment onto the wild wide web, however, is of course extremely harmful.

    The media didn't alter the story; the paper already contains discussion of WWW deployment. They just picked out the bullshit non-intranet part.

  • by tlambert (566799) on Saturday June 14, 2014 @09:02PM (#47238633)

    Is it a bad summary or a stupid idea?

    Yes.

    As explained, the system does not seem to cover the case where someone gets the data and then leaks it.

    It's advisory access controls with voluntary indications of use, with transaction metadata logging.

    (1) The rights you could be granted are based on the object, not the actor and the object
    (2) You obtain the exported rights list
    (3) You voluntarily provide a purpose in line with the rights which are granted
    (4) Your voluntary compliance with the rights list is logged as metadata, because collection of metadata isn't controversial at all
    (5) You retrieve the data
    (6) You use it however the hell you want, because you're a bad actor
    (7) If you are a good actor, you enforce use restrictions in the client

    and...

    (8) You try to sell the idea as somehow secure, even though it's less secure than NFSv3, since NFSv3 at least requires the client to forge their ID

    So "Yes" - a bad summary, and a stupid idea.
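    The advisory flow in the list above can be reduced to a toy model. Everything here is illustrative: the point it demonstrates is that the server can log a stated purpose but cannot stop a client from ignoring the rights list once it holds the bytes.

```python
# Toy model of steps (3)-(7): the server logs the client's stated
# purpose as metadata and hands over the data regardless.

def fetch(server_log, rights_list, stated_purpose, data):
    # Steps (3)-(5): client volunteers a purpose; server logs it and
    # returns the data unconditionally.
    server_log.append({"purpose": stated_purpose, "rights": list(rights_list)})
    return data

log = []
rights = ["research-use-only"]
secret = "patient record"

# Step (7): a good actor voluntarily enforces the restrictions client-side.
good_copy = fetch(log, rights, "research", secret)

# Step (6): a bad actor states the same compliant-looking purpose, then
# does whatever it wants with the data; nothing in the protocol stops it.
leaked = fetch(log, rights, "research", secret)
```

    The audit trail ends up containing two identical, compliant-looking entries, so it cannot even distinguish the good actor from the bad one.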

  • by fuzzyfuzzyfungus (1223518) on Saturday June 14, 2014 @09:08PM (#47238643) Journal
    Even older than that. The "send the data along with a description of restrictions on its use" approach is classic "Trusted Computing" (plus a dash of 'semantic web' nonsense, which appears to be a new addition).

    Mark Stefik, and his group at Xerox PARC, were talking about 'Digital Property Rights Language' back in 1994 or so, and by 1998, if not earlier, it had metastasized into a giant chunk of XML [coverpages.org]. That later mutated into "XrML", the 'Extensible Rights Management Language', which eventually burrowed into MPEG-21/ISO/IEC 21000 as the standard's 'rights expression language'.

    Some terrible plans just never entirely die.
  • by raymorris (2726007) on Saturday June 14, 2014 @11:56PM (#47239065)

    I think you've missed the point. Quoting the beginning of the article:

    > HTTPA, designed to fight the "inadvertent misuse" of data by people authorized to access it.

    I've had this conversation more than once:

    Bob - Why did you tell people about ___? That was supposed to be a secret.
    Sally - Oh, I'm sorry, I didn't realize that was supposed to be kept confidential.

    There's also the thought: "Oops, what I just said was supposed to be kept confidential. I messed up."

    Those are the situations the protocol is supposed to address: the INADVERTENT release of confidential data. It's the digital equivalent of stamping a paper "confidential, for abc use only". Any time the system accesses the data, it is also reminded of the confidentiality rules attached to that data. This is so people can, through processes and software, avoid mistakes. For example, a client could be set up so that an attempt to copy confidential data to the clipboard instead copies the reminder "this is confidential information", so someone pasting it into an email without thinking gets reminded.
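    The clipboard example could look something like this. A hypothetical sketch only, with invented names, not part of any real HTTPA implementation; a real client would hook the OS clipboard rather than a dict.

```python
# Client-side guard: when the source data carries a "confidential"
# restriction, substitute a reminder for the actual payload.

def copy_to_clipboard(clipboard, text, restrictions):
    if "confidential" in restrictions:
        # Replace the payload with the reminder instead of the raw data.
        clipboard["contents"] = "this is confidential information"
        return False  # signal that the copy was intercepted
    clipboard["contents"] = text
    return True

clip = {}
blocked = copy_to_clipboard(clip, "patient record text", ["confidential"])
```

    Crucially, this protection lives entirely in a cooperating client; it helps the absent-minded good actor, and does nothing against a bad one.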
