Security

New Two-Headed Hard Drive Intended To Secure Web Sites

dlur writes: "This article states that Scarabs (In Japanese), a Japanese company, is developing a hard drive with two heads, one read-only and the other read/write. With this come two cables: the read-only side goes to the external web server, and the r/w cable goes to an internal, protected server. While this should make it quite a bit tougher for script kiddies to place their mark on a page, I doubt it will stop any real hackers from getting to a site's DB, as that would still need to be r/w."
  • by tps12 ( 105590 ) on Monday July 22, 2002 @01:51PM (#3932052) Homepage Journal
    First a 60-foot squid, now a mutant two-headed hard drive. What next, the announcement of the Bearded Lady Linux distro?
  • Snake Oil (Score:2, Insightful)

    Two hard drive heads, one OS, one root/administrator account. If your box is r00ted, it doesn't matter how many hard drives or hard drive heads you have: you have still been 0wn3d.
    • Seems like it would work better if one of the cables goes to your web server box and the other goes to your dev box.
  • Zaphod's been using this kind of drive for years to store his porn collection.
  • More Speed? (Score:4, Interesting)

    by 1010011010 ( 53039 ) on Monday July 22, 2002 @01:53PM (#3932075) Homepage

    This sounds like a nice drive to use in TiVo-type units as well, so that the read head can return data as the r/w head updates the media, rather than flopping the only head back and forth.
    • Re:More Speed? (Score:5, Insightful)

      by Krellan ( 107440 ) <`krellan' `at' `krellan.com'> on Monday July 22, 2002 @02:43PM (#3932470) Homepage Journal
      I thought of this as well, back when I interviewed at ReplayTV (I didn't get in, but that's neither here nor there).

      Why not make a hard drive with two arms? They would be located 180 degrees apart from each other, so they would never bump into each other.

      Each arm would be able to access the entire range of the hard drive.

      One would be read-write and the other would be read-only, or both of them could be read-write if there would be no significant increase in cost.

      This would be great for TiVo and ReplayTV units, which need to read large continuous amounts of data while writing large continuous amounts of different data! And it would be much quieter than the current one-arm drives, which have to thrash, making the units more appealing in a residential environment (one of the main complaints about the units is that the drives are too loud).

      Considering the large quantities of drives that TiVo or ReplayTV use, is a special order out of the question? I'm sure this has been thought of before, and with a large enough order, anything is possible within reason. Western Digital made a custom drive for a large order, and found it to be such a good idea that it was officially added to their product line! (It's the larger 8MB cache in a "special edition" of their 100GB drive.)

      Unfortunately this kind of drive would not work well with IDE. IDE is designed to wait for one command to complete before executing another command. So this means that the gain of being able to execute read-write commands in parallel would be neutered by this protocol. A solution is to use a SCSI drive that supports Tagged Command Queuing (TCQ)! This drive, if the controller and OS software support it, can stack up multiple commands that can be resolved in any order, as fast as the drive allows. This means that multiple outstanding commands could be sent to the drive, and the drive firmware would be free to execute them in the optimal order.

      This would be a great advantage, as it could allow a slower drive to be used (less power consumption, less heat, less chance of failure). The slowness of the drive would be offset by the two arm design, making the drive effectively twice as fast. It might be even faster than that, as seek time would be reduced to almost nothing when reading or writing simultaneously from two different places!

      The only disadvantages would be the increased cost of a SCSI drive (including controller) versus an IDE drive, and the one-time cost of adding TCQ support to whatever OS is being used.

      I wonder if a two-arm drive is being planned for use in ReplayTV or TiVo units? It seems like too good of an idea to pass up....
      • Why not use two hard drives, and a bit of cleverness in the software to write the incoming data stream to one while the user is viewing a stream on the other? This seems cheaper than custom hard drives, and preserves the ability to keep upgrading capacity by moving to new commodity hard drives as bigger ones continue to get cheaper.

      • Re:More Speed? (Score:2, Informative)

        by nerdbert ( 71656 )
        Nice in theory, but it won't fly. Two arms means replicating some of the most expensive parts of the drive all over again. Double electronics (servo, preamp, channel) because you wouldn't get reasonable SNR trying to share them, more flex cables, more of those nasty suspension systems, arm motors, etc. You could share the backend of the microprocessor (although it'd require a serious upgrade of the processors we use now, since none of them have the oomph to do a read and servo calcs at the same time), buffer RAM (although you'd need an increase in that, too, to handle two streams), motor driver, platter, motor, and the controller, but other than that you need to replicate many very expensive parts again. I'd guess you'd increase the cost by 50% or more. The idea has floated around the industry before and prototypes have been built, but in the end the performance boost for the cost wasn't there and no such drive has made it into production.

        WD made the bigger buffer because it was cheap. Adding RAM isn't hard, and at today's RAM prices it's cheap. Doubling the front end is a nasty, expensive business.
  • Still exploitable? (Score:4, Insightful)

    by Erasmus Darwin ( 183180 ) on Monday July 22, 2002 @01:54PM (#3932087)
    It seems a malicious user could still serve defaced pages off a RAM disk on the compromised machine. Yes, a reboot will fix the problem, but that's only slightly more convenient than restoring a compromised system from backups. Furthermore, I suspect that the read-only hard drive would encourage admins to become lazier about applying server patches, since the system would be perceived as "secure".
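
    For the curious, here's a sketch of the sort of RAM-disk defacement I mean; the paths are illustrative, and it assumes an attacker who already has root on a Linux box:

        # shadow the real (read-only) document root with a RAM-backed filesystem
        mount -t tmpfs none /var/www/html
        # anything written here lives only in memory and survives until reboot
        echo 'defaced' > /var/www/html/index.html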
    • A reboot is "only slightly more convenient" than a system restore? Exactly what is your reboot procedure? Does it involve chants, incantations, burning of incense, animal sacrifice, etc? Kind of like saying repainting my car is only slightly more convenient than simply going through a carwash. :)
      • "A reboot is "only slightly more convenient" than a system restore?"

        Assuming that you've good a nice, current filesystem backup that you can send over the network to reimage the machine, sure. I think the same people who would jump through the hoops of setting up this dual-access harddrive are the same people who would have an existing, easy solution on hand, anyway.

    • Well, then you get 2 memory controllers.....

      Ahh, never mind.
  • I could stick a write-protect switch on my drive; at least then I could synchronise the readers with the modifiers.

    Having a read and a read-write cable doesn't really solve anything. You really have to take the web-site down while you are updating, otherwise you need a very interesting combo of web site and file system.

    • Actually you don't need to take the site offline when updating.

      The r/w cable is connected to the other head in the drive; the r/w head is controlled by the secure server, one that is not accessible by anyone outside the internal network (though I'd prefer access to be at the physical server only).

      That way there is no need to take down the site to do updates, just access the correct location from the internal network.

  • by The_Shadows ( 255371 ) <thelureofshadows@nOSpam.hotmail.com> on Monday July 22, 2002 @01:56PM (#3932103) Homepage
    Too easy... Must resist!
    Nah, forget it.

    "I mean, two heads are better than one."
  • by Cutriss ( 262920 ) on Monday July 22, 2002 @01:56PM (#3932105) Homepage
    As Timothy points out, this only prevents script kiddies from being able to modify existing content using a backdoor or whatnot. However, it won't do anything about denial of service attacks, since the server software and its modules/plugins are all in RAM, and will still be receiving inputs. Buffer overflows and whatnot are still possible. However, defacements will at least go away, and those are the second-most high-profile types of attacks, as they're visible to the general public. Database attacks would be the worst, though, since, as Timothy again points out, they must be writeable.
    • However, it won't do anything about denial of service attacks, since the server software and its modules/plugins are all in RAM, and will still be receiving inputs.

      That's a nice point; however, I don't think this should have any impact on your decision whether to use this product/strategy or not.

      DoS attacks are a problem that is near impossible to solve no matter what hardware you may have (even your tens of thousands of dollars worth of Cisco routers). This product isn't targeted at DoS attacks.

      Buffer overflows and whatnot are still possible.

      BUT BUT BUT! They are FAR less effective. One of the problems with overflows is that they give you access to the machine. The danger is when attackers can log in to the machine and install all their hacking tools, packet sniffers, and whatnot. That's where the real damage is done.

      Now, if the ENTIRE hard disk on the web server is read-only, and the machine used to make changes to the partition is on a completely separate network (perhaps not connected to the Internet at all), this could be a VERY effective way of limiting the damage done (especially if you are careful about what applications are installed on your server to begin with).

      Database attacks would be the worst, though, since, as Timothy again points out, they must be writeable.

      Finally, this is not necessarily true either. If you run a website that provides the user with realtime information (such as stock quotes or mortgage rates), most of the data is coming from some source internal to your company. You can easily make that database read-only for the web server, and separate any minimal user-info database into its own read/write database, thus further limiting the damage they can do. In fact, if you aren't doing this already, you're probably doing something wrong. Here at work we have two separate copies of our database (replicated in realtime). One is linked directly to our internal accounting system and updated frequently. The other is 100% read-only, and ALL reports are run from that.
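
      As a sketch of what I mean (MySQL-style syntax; the account, host, and database names are invented):

        -- the web-facing account can only SELECT from the reporting copy
        GRANT SELECT ON reports.* TO 'webapp'@'webhost';
        -- the internal accounting system keeps the only read/write account
        GRANT SELECT, INSERT, UPDATE, DELETE ON accounts.* TO 'acct'@'internal-box';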

      I'm not nitpicking you, or anybody in particular. This is a GREAT option. It's not perfect, yes, but if you really think about it, you can use this thing in many very, very powerful ways (and, as mentioned above, you can do some similar things by tweaking IDE cables and using CD-ROMs). The same can be said for Linux router distributions running off of read-only 3½" floppies or CD-ROMs! :)

      Bryan

  • Huh? (Score:5, Insightful)

    by Tom7 ( 102298 ) on Monday July 22, 2002 @01:57PM (#3932112) Homepage Journal
    You don't need to write to the disk to make a compromised server serve up bogus content.

    Furthermore, we can already do this same thing by mounting a network file system (say) in read-only mode. Other than being funky, what's the point?
    • Re:Huh? (Score:3, Insightful)

      by Mark Bainter ( 2222 )
      The point is that it would be impossible to override this in software. I mean, following your logic you can also just make the files 444. But if someone gains access to that webserver at a high enough level they can change all that. With this scenario they can't.
      • I think his point is valid. Even if a cracker roots a server, he can't change stuff if a separate system dictates permissions. Given proper firewalling rules and/or login configuration, the second system is just as untouchable as the host system in this multi-hard-drive configuration. The key difference would be performance (the two systems don't care what the other is doing), but with the typical bandwidth bottleneck, hard drive I/O is the least of your concerns, particularly with static content.
        • IF a separate system dictates permissions. But all he stated was that it wasn't any different from mounting a remote filesystem read-only. If the remote fs is dictating that, and your scenario is in effect, then yes, the only difference will be in performance. (Direct I/O to the drive is a fair bit faster than a hop through a router/firewall to another server.) However, if all he did is "mount it readonly", then someone gaining root could remount it r/w if the other system were not locked down. Perhaps I am guilty of surface reading and not looking for what he was really saying, but even w/out the security gains you do gain a fair bit on performance.

        • The difference between a network file system being mounted with server-dictated permissions and the two-headed hard drive is that you don't need the write server to be on the network. Or at least not on an external network.

          The first thing I thought while reading this was, "Damn that's a good idea.. I can't believe I've never thought of anything close to that."

          There are a lot of details, but I think as a basic block to build on for security this is a very good thing.
    • Re:Huh? (Score:3, Insightful)

      This device would make it physically impossible to write. When you mount read-only, there is still at some level a possibility that someone might bypass the read-only lock.

      I actually think this device has limited, but good applications. Anyone serving up static content would be a bit safer with this technology. Of course, logging traffic and such would be a bitch, but the server would only need a reboot if someone broke in.
      • At my previous employer we had a machine running nothing but syslogd (it was an old 386 machine, no less!). All our other servers broadcast their log entries to the syslog server. It ran NOTHING else, and in fact, if you wanted to connect to it you had to physically walk over to the machine and log in at the console.
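
        For anyone who hasn't set one up, the classic syslogd config for this is basically two lines (the hostname is invented, and old syslogd wants a tab between the two fields):

          # /etc/syslog.conf on every other server: ship all log entries to the loghost
          *.*                         @loghost
          # the loghost itself runs syslogd -r (accept remote messages) and nothing else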

        Without a syslog exploit, that machine is near impossible to break into and a great way to protect your log files. Personally I think any company doing any serious linux server work should be doing something similar! ;)

        Bryan
    • I had the same question, but I'd assume the answer would be lower latency on your reads. But assuming you had a high-bandwidth network and there was no way to write to a drive shared as read-only, then all this gives you is a proprietary piece of hardware to replace if it goes down. Also, now you have the concern of what to do if one of the paired servers goes down.
    • Yes indeed, this is a complicated, sure-to-cause-more-problems-than-it-solves solution to a non-problem. Export filesystems read-only to your static Web servers and read-write to your back-end thinkers (DB servers, content management systems, etc).

      If you're really smart, you're doing all of this on a netapp filer [netapp.com] so that the access speed is as good as or better than local-attached storage (and yes, that's true even though it sounds wacked... it's because of their NVRAM-based journaling filesystem, for which their NFS server code is hand-tuned).
  • by MattRog ( 527508 ) on Monday July 22, 2002 @01:58PM (#3932123)
    As the article poster touched on, this won't do anything if you're concerned with RDBMS integrity (and have a site which requires write access to your RDBMS).

    For static content, it sounds like a cool idea; even if they get root, all they can do is look and not touch. Of course, if that compromised boxen is attached to an internal network with your RDBMS, then they can get to hax0ring the heck out of your DB; they just have to use whatever tools you have installed on the web server.
  • Every e-commerce Web site I can think of requires writing data to the server based on user-entered data using the Web site itself. If I want the site to store my credit card number, or even an account profile with my shipping address, the Web server needs to be able to write to a hard drive somewhere.

    Now, the sites that are the greatest/most significant targets for hackers are the ones that store personal data on the site's users, credit card data being the most valuable. So this hard drive would be useless for the servers that need it most.

    Besides, even if the above weren't the case -- for instance, a banking site that (for some reason) only allowed you to read your account data, not make any transactions online -- does read-only really prevent hacking? All it means is that the hackers can't make changes to the server data; it doesn't mean that they can't steal passwords to access that data. So this might be good for the companies that use it, but it also gives a false sense of security by providing no additional protection to me, the user.
  • NFS? (Score:3, Interesting)

    by Micah ( 278 ) on Monday July 22, 2002 @02:02PM (#3932154) Homepage Journal
    Well, an external web server could be set up to mount everything NFS read-only. Seems like that would be a bit simpler... but since 99% of sites are dynamic, it seems to be an impossibility anyway...
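
    The web server end of that is a one-liner; the hostname and paths are made up:

        # mount the content read-only over NFS
        mount -o ro nfsserver:/export/www /var/www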
  • by t0qer ( 230538 ) on Monday July 22, 2002 @02:03PM (#3932156) Homepage Journal
    Remember, you can do the SAME thing with the hard drive you currently own and a CD drive. Here are some simple instructions...

    A) create your website
    B) burn it to CD
    C) modify httpd.conf: set the document root to /mnt/cdrom (see the snippet below)

    Voila! And I didn't need to hire a team of Japanese researchers to figure it out, either.
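
    Step C boils down to a couple of lines of Apache config; the paths are illustrative:

        # httpd.conf: serve the site straight off the mounted CD
        DocumentRoot /mnt/cdrom
        <Directory /mnt/cdrom>
            Options Indexes
            AllowOverride None
        </Directory>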

    • by doughnuthole ( 451165 ) on Monday July 22, 2002 @02:17PM (#3932274)
      Or you could put a switch on IDE pin 23, the write line. Flipping the switch to disconnect the line would prevent any data from being written, while still having the higher speeds and lower seek times of a hard drive.

      It would be simple to just flip the switch, modify your files and then switch it back when you are done so no changes can be made later.

      Even better, put it on an electronic keyswitch mounted on the front of the box, and you have an effective security system for things like demo stations and kiosks.
    • Not exactly a good solution for a high-volume site, though. Can you imagine /. being served from a CD-ROM?
    • You can do the same with dumb Sun boxes that boot off the net, mount a read-only partition with apache/content, and connect to a database. Good thing is, if you update the content directly, your stack of boxes all have updated content. Make sure you have a nice storage array that can push data to the boxes.

      Simple, Secure, and very easy to maintain.
      • Well, yeah, but the article was about doing this at the *hardware* level. Anybody can turn the write bit off on a partition, or mount a remote volume as read only. Yes, it's overkill to do it at the hardware level and I'd imagine this is only for the most sensitive of applications.
    • Anyone else reminded of the silly scene where Arnie has to instruct his friends how to flip his neural net from R/O to R/W mode?

  • by Anonymous Coward
    I've often wondered why slower-RPM drives don't do dual read/write heads for faster access times and transfer speeds. I'd rather buy a dual-headed 7200 RPM drive with a single Serial ATA connection than some 15000 RPM drive. The slower dual-headed drive should be able to keep up with the faster-RPM drive, yet be quieter (the platter motor dominates the noise; two head-positioning motors would be a bit louder, but not much), utilize a higher on-disk bit density, and, with a good control system, give me better overall speed with a random-access usage pattern.
  • ...a read-only head that is connected via one cable to a Web server for people to browse content on the disk file and a read/write head that is connected by another cable to a PC for administrators who renew the data.

    The admins most likely have a network connection on their machine, and if so, that could be hacked.

    Why not a hack that resides in RAM?

    It doesn't seem that this would stop a determined attacker; they'd just do an end run around the tech. It does seem that this would be an excellent way to speed up hard drives in general... audio and video... ohhhh.

  • I can already do this setup for my web server:
    The NFS server exports the directories with web pages to the web server read-only and does not allow logins from the web server (and the firewall does its best to block even attempts at such). So even if the web server is fully compromised, the web pages cannot be changed.
    Of course, if the web server has writeable disks of its own, the cracker could make it serve a page from there instead of the real page; but the two-headed disks will have the same problem. You can only solve it by not giving the web server any writeable disks: boot it from CD-ROM or from the network.
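
    The exports file for that setup is a single line per directory; the hostnames are made up:

        # /etc/exports on the NFS server: web server reads, internal edit box writes
        /var/www    webserver(ro,root_squash)    editbox(rw,root_squash)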
  • by Mad Quacker ( 3327 ) on Monday July 22, 2002 @02:06PM (#3932189) Homepage
    Of course, none of the R/W computers will be in any way attached to the Internet... even in the best possible setup, a machine that has access to both networks can be compromised, etc. If it's not, updating will be a major pain, so much so that they might as well flip the read-only jumper on the drives between updates rather than use this system.

    Aside from the obvious, there are much better uses for more than one head in a drive. Multiple simultaneous seeks, faster seeks, and twice the raw read rate. The market for this should be huge. Hard drive transfer rate is the bottleneck for most tasks, including boot time. All the while with less heat, power, and noise than the 7200+ RPM drives.
  • by wowbagger ( 69688 ) on Monday July 22, 2002 @02:06PM (#3932191) Homepage Journal
    This would completely screw up any modern OS (or Windows).

    The OS assumes that it, and it alone, modifies the disk, and that the disk won't change state without the OS making that change. This is one of the reasons you don't want to allow raw disk access from a VMWare or DOSemu session to a mounted file system - the emulated OS will access the disk, and the host OS's file system won't know about it. Boom! Instant corrupted file system.

    In the case of this double-ended drive, the web server will assume that, since it has read the disk once, it needn't read that sector again. Then the write-side computer modifies the disk, and the web server won't pick it up.

    I'd rather see a disk with dual heads, and the logic to allow the system to read different sectors at the same time, all kept coherent by the drive's controller, as a way to increase throughput.

    But to use this as a protection on a web server is just plain dumb.
    • by Anonymous Coward
      What a party pooper. Here we are having a perfectly good time talking about something we know absolutely nothing about, and along comes an educated person who decides to spoil the party with a little knowledge. Damn you and all of your technology. ;-)
  • I've had a similar idea when it comes to making a log server. If it is only physically possible to write to the log server, then there would be no way someone could erase their tracks.
    • Re:Log server (Score:3, Insightful)

      I've had a similar idea when it comes to making a log server. If it is only physically possible to write to the log server, then there would be no way someone could erase their tracks.

      Why do you think a lot of log servers print to a line printer? :-)

      Hell, I think the upper levels of the old Orange Book *required* a hardcopy of log entries, in real time.

  • by rocjoe71 ( 545053 ) on Monday July 22, 2002 @02:07PM (#3932199) Homepage
    Some of the biggest e-commerce blunders have been allowing hackers to read credit card numbers, etc.

    Sure, this new drive can protect existing data from destruction, but we need protection from the wrong people reading the information that's already in a website.

  • This has been done before on a slightly different scale.

    When you have a storage array that supports multi-initiator SCSI, you can hook one connection of the array to the external-facing machine in read-only mode and the other to the internal-facing machine in read/write mode.
  • Unless you want to go to the trouble of making an OS that is 100% read-only, you'll need to have something writeable on that web server. It'd be cheaper to serve your website off CD-ROM (for the sake of this argument), but who's to keep a script kiddie from mounting your website on a ramdisk or another writable area?

    Besides, you can always make a hot-swap hard drive read-only with a jumper block.
    • Unless you want to go to the trouble of making an OS that is 100% read-only

      Actually a capital suggestion. Make a bootable CD-ROM that has your OS, drivers, and webserver. Make a ram drive for temp space, if required. Then have it mount a read-only partition, and you're aces.

      This is quite common, actually. Something gets buggered up, you reboot. New patch? Update your image, burn a new CD, and you're gone.
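
      Roughly, the fstab baked into such a CD image might look like this; the device names and mount points are illustrative:

        # /etc/fstab on the bootable CD: root stays read-only, temp space lives in RAM
        /dev/cdrom    /         iso9660   ro         0 0
        none          /tmp      tmpfs     defaults   0 0
        none          /var/log  tmpfs     defaults   0 0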

  • I assume it'd be presented as two different devices. OK, so you mount one r/o and the other r/w, but the r/o mount wouldn't be expecting (or appreciating) changes to what's on the disk made by another system.

    It's the same deal with a SAN (Storage Area Network). I could easily zone two physical servers into the same LUN on the SAN and make one mount r/o and the other r/w, but unless the OS has some sort of understanding that this kind of thing is going to happen (like a clustering system), I would expect some problems on the r/o mounted system.

    p.s. I'm no expert, I'm just wondering logistically how this is all going to work. It doesn't make sense to me...

    p.p.s. I know there is no real security in mounting a disk r/o because someone could just remount r/w, unlike the physical solution this product provides. But in either case, I would think the issues with two boxes mounting the same file system without clustering would be a problem. If it isn't, I'd love to do something similar with my SAN just for performance and load balancing purposes...

  • How about 2 r/w heads, to increase performance?
  • Ahem (Score:5, Funny)

    by The_Shadows ( 255371 ) <thelureofshadows@nOSpam.hotmail.com> on Monday July 22, 2002 @02:13PM (#3932240) Homepage
    So sayeth the article:

    Hackers will be unable to attack Web sites protected by a new security system unless they can change the laws of physics, according to Naoto Takano

    I'm working on it already. So far I've managed to get the theory of relativity down to E/2 = MC^(1.9)

    And standard Earth Gravity now has a value of 8.8m/s/s.

    Up.

    And don't try to fill up a garbage bag anytime soon. I've been playing with volume. They're now "Garbage Bags of Holding."
  • This is SO a gimmick. It is no replacement for a properly configured server that's 99.98% locked down. You're going to need a second machine to feed files onto the box anyway, so why not just grant the webserving box read-only access on the file server ? Ideally this server would be totally isolated from the internet, and wouldn't accept write requests coming from the web box. So the only way to update anything is to be sitting on a workstation on the inside, and then to have a valid login on the fileserver.

    This is so frickin' simple, the only reason this Scarabs company is even in business is because there are too many idiots running semi-important servers out there. Having your network admin'd by a clueless fuck is not something that will be solved by a piece of buzzy hardware.
  • I built a system like this with 2 (and later 3) heads, years ago. I actually wrote an article on it for a magazine :)

    It used 20 MB MFM single-platter 5¼" drives (the tolerances were the most forgiving; I probably had the only 486 with MFM hard drives in it :)

    It was WAY cool though (I had it under glass to watch it).

    We took pictures and the rejected article, sealed 'em in an envelope with a signed, notarized affidavit, and had the post office postmark the flap.

    I was going to patent it (this is circa 1992), but I was told by many contemporaries it was the dumbest idea they had heard.
    Now if I can find it, watch out
  • This product seems to me to be useless for any site that provides access to database information through dynamic pages.

    Seeing as most web sites that would be in the market for a product like this have "advanced" sites, I would argue this product has a very small potential customer base.

    -Pete
  • by maiden_taiwan ( 516943 ) on Monday July 22, 2002 @02:19PM (#3932286)
    Add one more head, and you've got the perfect drive for a Kerberos server.
  • *yawn* (Score:3, Interesting)

    by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Monday July 22, 2002 @02:21PM (#3932296) Homepage Journal
    Yah, so, err, credit card numbers and other personal information are still at risk. Really, defacing is a relatively small threat vs. information theft. If I want to get something read, I'd likely get more readers from a post on /. with my +1 bonus than from some defacement on a website that will likely get fixed and put back to normal in a matter of a few hours.

    *yawn*

    Seems to me that doing this the other way around for a database (write-only head, separate read head) would be the smart way to go: store customer data, but only trusted computers can get any of it off! (Though displaying customer info might be a bit of a challenge. Heh, oh well; store name and address on the regular drive, store valuable information on the special drive.)
  • by Reverend Beaker ( 590517 ) on Monday July 22, 2002 @02:21PM (#3932305)
    So if an ettin has a challenge rating of seven, what would a two-headed hard drive be?
    • So if an ettin has a challenge rating of seven, what would a two-headed hard drive be?
      God help me -- I, too, thought this comment was funny. And I am currently a Moderator with 4 points! But I REFUSE to mod this comment up. It's for all of our own good, man! Get out of the house! Talk to girls! Go, go!!
  • This is a classic idea from the early days of computing. The SAGE air defense system used it to synchronize the main and backup real-time computers. Control Data used to make a drive with two combs, one read/write and one read-only, but it was discontinued in the 1970s.

    I once proposed it for a secure application for DoD, back when disk drives were the size of washing machines. The basic idea is to enforce one-way traffic flow. The file system on the read side has to understand how the data on disk can change, so some file system work is needed. Back then, we were more concerned about leaks from secure systems than attacks coming in, so the outside world would have had the read/write end, while the secure world had the read-only end. It turned out that unidirectional fibre-optic links were more effective.

  • Old News (Score:3, Funny)

    by guttentag ( 313541 ) on Monday July 22, 2002 @02:23PM (#3932313) Journal
    This has been done before. God created the first secure content site on three... er, make that two... Two read-only stone tablets in the year a-few-thousand-BC. But even they were vulnerable. Legend has it Moses was so frustrated in his attempts to replace the tablets' content with "mozez ownz yoo" that he destroyed them.
  • by noahbagels ( 177540 ) on Monday July 22, 2002 @02:25PM (#3932325)
    Great.

    Now, we have to explain one more thing to VCs and MBAs. All they know is there is this thing called a website that exists on a thing called a webserver.

    Hasn't anyone on /. ever taken a security class?
    Has anyone on /. ever worked on security projects and/or audits?

    Let me break it down for the rest of you:
    This adds exactly zero extra security for a well-run website. Most well-run sites already have separately firewall'd HTTP web servers and database machines. Some well-run sites have the application server on yet a third firewall'd network (or VLAN, etc).

    Any place worth 5 cents will not have valued data sitting on an httpd server!

    This is really Ooooga-Boooga in a nutshell for VCs and MBAs trying to make a buck on security-scared VCs and MBAs running other companies.

    I don't buy it.
    Secure your site properly. As one other poster mentioned, for the less-funded (read: cheap/poor/startup/blah) company/service you can simply mount a CD-R with your site's static content on it. Even JSPs can live on a CD-R (as long as they're precompiled into servlets, or there's a scratch disk for the JSP container to compile them).

  • Not sure if they do anymore, but IIRC hard drives already have, or have had, write-protect tabs available. Write protect works just fine for floppy disks.

    Multiple heads seem like a massive extra expense compared to changing the firmware, and don't really provide a whole lot of extra security.

  • Dual-ported disk drives are nothing new; they've been around in many forms since the 70's (SMD), 80's (DEC SDI), and up through today (SCSI has supported multiple initiators since it graduated from SASI in the mid-80's.)

    Of course, most of the older drives also had prominent lights and pushbuttons on the front that let you write-protect the drive, in some cases on a per-port basis.

    What has often been missing is OS support for dual-ported drives; the lack of support is most conspicuous today. As a result, most modern OSes trying to use a dual-ported drive will have to "take turns" having the disk mounted if there's any possibility the other machine is going to do a write. If the OS doesn't even support the simple concepts of mount and dismount, then you probably cannot use it at all!

  • Prior Art? (Score:3, Funny)

    by El_Smack ( 267329 ) on Monday July 22, 2002 @02:31PM (#3932381)
    According to this [janictradition.org] "The God Janus, husband of Jana, is known as the custodian of the universe, the God who watches over doors and gateways, and the two-headed God of ... "

    Please send any patent inquiries to
    Caesar, Emperor of Rome
    123 Pantheon Drive

  • by Zinho ( 17895 ) on Monday July 22, 2002 @02:32PM (#3932388) Journal
    From the article:
    "The original idea of a hard disk having two heads emerged around 1985..."

    Funny that the technology hasn't been implemented after all this time... Or has it?

    From the StorageReview.com reference section:
    "Such hard disks have been built. Conner Peripherals, which was an innovator in the hard disk field in the late 1980s and early 1990s (they later went bankrupt and their product line and technology were purchased by Seagate) had a drive model called the Chinook that had two complete head-actuator assemblies: two sets of heads, sliders and arms and two actuators. They also duplicated the control circuitry to allow them to run independently. For its time, this drive was a great performer. But the drive never gained wide acceptance, and the design was dropped. Nobody to my knowledge has tried to repeat the experiment in the last several years.

    There are several reasons why it is not practical to make a drive with more than one actuator. Some are technical; for starters, it is very difficult to engineer. Having multiple arms moving around on a platter makes the design complex, especially in small form factors. There are more issues related to thermal expansion and contraction. The heat generated inside the hard drive is increased. The logic required to coordinate and optimize the seeks going on with the two sets of heads requires a great deal of work. And with hard disk designs and materials changing so quickly, this work would have to be re-done fairly often.

    However, the biggest reasons why multiple actuators designs aren't practical are related to marketing. The added expense in writing specialty electronics and duplicating most of the internal control components in the drive would make it very expensive, and most people just don't care enough about performance to pay the difference. Hard disks are complex technology that can only be manufactured economically if they are mass-produced, and the market for those who would appreciate the extra actuators isn't large enough to amortize the development costs inherent in these fancy designs. It makes more sense instead to standardize on mass-produced drives with a single actuator stack, and build RAID arrays from these for those who need the added performance. Compare a single 36 GB drive to an array of four 9 GB drives: in effect, the array is a 36 GB drive with four sets of everything. It would in most cases yield performance and reliability superior to a single 36 GB drive with four actuators, and can be made from standard components without special engineering."

    So, from the looks of things, it would be easier and cheaper to use single-head drives in easy-to-put-together configurations than to put two heads in the same drive. Admittedly, the StorageReview.com reference's author didn't mention setting up a read-only/read-write scheme, but the logic still works. I'd guess that it would still be easier to make a RAID container that provides read-only access on one channel and read/write on another.

    Again, from the article:
    "Scarabs is also working on a different version of the technology--instead of putting two heads on a hard disk, the company is connecting two SCSI interface circuits to a conventional hard disk with one head, one set to send read-only electronic signals and the other to send read/write signals."

    This company already knows that their gimmick drive won't sell. No one will buy an over-priced drive with higher probability of failure over a (comparatively) cheap SCSI trick that requires no extra moving parts.
  • Other than expense, why not just use some sort of shared storage appliance? The admin can be allowed to mount the appliance r/w, while the web server is given read-only access. I think EMC [emc.com] has products that do this.
  • A headline to draw in the geek girls?

    Tsk tsk... Timothy!

  • I remember someone telling me, from back in the magnetic drum days, that the fastest drums had one head per track, so you only ever had rotational latency delays (on average, half the rotation time) and no physical seek (move-the-head) delays. I often wondered if multiple heads on a modern disk drive would improve performance...

    I know on a modern disk the tracks are too tightly packed to do a head per track, but I was wondering: if you had (say) 2 heads on a single arm separated by a third of the width of the disk, then any track could be read with a much smaller movement (compared to full-disk seeks) by seeking with the closest head, and when queuing up reads for an "elevator algorithm" of seeks you could also get performance gains by grabbing out-of-order data with the "trailing" head.

    I realise the price goes up with complexity, and the heavier head might take longer to settle, but I was wondering if this wouldn't give better performance for scattered reads for those who need it (eg servers) and don't mind paying....

    Now I'm a software geek, not a hardware bod, so does anyone know why this isn't done? (I can guess lots of reasons myself, thanks.) Is it effectively just RAID striping on a single disk?

    And how about more heads (5 across, 10 across...), or 2 sets of heads on opposite sides of the disk to cut rotational latency in half (if kept in step), or... again, let the disk controller decide to move the closer head... I know that I can pick items out of a heap much quicker with two hands than one, due to economy of movement....

    --
    T
  • I've seen a couple of projects on freshmeat that do this. Basically, a daemon sits around and watches files and if they change, they do something about it. This could be anything from logging to sounding an alarm to replacing the content.

    I could have a repository sitting offline storing all of my content (or even everything... OS, databases, scripts, tools, etc etc) and have it "log in" to the servers from the inside and check everything for changes periodically. In a lot of cases, tests could be done from the outside as well (web content specifically). That machine, though physically connected, would simply shut off its interfaces and block everything unless it was doing its work.

    I think a recent website hack occurred at USA Today... such a scheme could have caught the hack within minutes and even have replaced the forged content with whatever was supposed to be there.
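
    A crude version of such a watcher is just a checksum loop; the paths here are invented:

        # taken once, from the offline repository's known-good copy
        find /var/www -type f | xargs md5sum > /secure/baseline.md5
        # run periodically: prints any file whose checksum no longer matches
        md5sum -c /secure/baseline.md5 | grep -v ': OK$'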


  • I've been hearing a lot of people say "clip pin 23 on your IDE cable" to prevent writing.

    Would it be difficult for a company to come up with a "plug-between" adapter that sits between the hard drive and the IDE cable? Maybe it would have a jumper on it that you could remove, or better yet, you could plug an extension cable with a switch into the jumper location so you wouldn't have to open the case every time a change is made. If there were enough demand, these could be manufactured more cheaply than IDE cables.

    I think it could be a much cheaper solution for the folks that don't need top of the line. Then again, mounting the filesystem "read only" would be even easier.
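
    For the record, that read-only mount really is a one-liner; the device name is illustrative:

        # mount the content partition read-only
        mount -o ro /dev/hdb1 /var/www
        # or permanently, via /etc/fstab:
        #   /dev/hdb1   /var/www   ext2   ro   0 2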
  • Get two of them: one to serve "content", the other to record transactions. The content server has the read-only head on; the transaction server has the write-only head on. Hot-swap them for updates and transfer of information.

    Not as convenient as the way it is currently done, but for a little ma-and-pa shop, it might be perfect.
  • Wouldn't this make the web server unable to cache reads, since the information could change?

    Can't you get the same effect by making the web server, with read-only permission, the only externally accessible program?

    Or just mount an r/o network drive; over dedicated gigabit Ethernet it shouldn't be that big of an issue.
