New Two-Headed Hard Drive Intended To Secure Web Sites
dlur writes: "This article states that Scarabs (In Japanese), a Japanese company, is developing a hard drive with two heads, one read-only and the other read/write. With this come two cables: the read-only side goes to the external web server, and the r/w cable goes to an internal, protected server. While this should make it quite a bit tougher for script kiddies to place their mark on a page, I doubt it will stop any real hackers from getting to a site's DB, as that would still need to be r/w."
slashdot freak show (Score:4, Funny)
Re:slashdot freak show (Score:4, Funny)
Nope, no bearded lady as yet. You'll just have to make do with Lesbian Linux [linuks.mine.nu].
Ali
Re:slashdot freak show (Score:3, Insightful)
Snake Oil (Score:2, Insightful)
Re:Snake Oil (Score:2)
Re:Snake Oil (Score:4, Insightful)
Instead of saying that the sun can burn you, he told someone sitting in a dark closet that they are going to get burnt if they stay there. Still maybe not flamebait, but if you are going to type in l33t to look cool, at least read the article.
I read the article, did you? (Score:2)
I read the article, and it described a system where, if you have a website that serves only static content, you can use this snake-oil technology to prevent people from defacing it. Why is the technology snake oil:
Re:I read the article, did you? (Score:2)
Pfttt, old news (Score:2, Funny)
More Speed? (Score:4, Interesting)
This sounds like a nice drive to use in TiVo-type units as well, so that the read head can return data as the r/w head updates the media, rather than flopping the only head back and forth.
Re:More Speed? (Score:5, Insightful)
Why not make a hard drive with two arms? They would be located 180 degrees apart from each other, so they would never bump into each other.
Each arm would be able to access the entire range of the hard drive.
One would be read-write and the other would be read-only, or both of them could be read-write if there would be no significant increase in cost.
This would be great for TiVo and ReplayTV units, which need to read large continuous amounts of data while writing large continuous amounts of different data! And it would be much quieter than the current one-arm drives, which have to thrash, making the units more appealing in a residential environment (one of the main complaints about the units is that the drives are too loud).
Considering the large quantities of drives that TiVo or ReplayTV use, is a special order out of the question? I'm sure this has been thought of before, and with a large enough order, anything within reason is possible. Western Digital made a custom drive for a large order and found it to be such a good idea that it was officially added to their product line! (It's the larger 8MB cache in a "special edition" of their 100GB drive.)
Unfortunately this kind of drive would not work well with IDE. IDE is designed to wait for one command to complete before executing another command. So this means that the gain of being able to execute read-write commands in parallel would be neutered by this protocol. A solution is to use a SCSI drive that supports Tagged Command Queuing (TCQ)! This drive, if the controller and OS software support it, can stack up multiple commands that can be resolved in any order, as fast as the drive allows. This means that multiple outstanding commands could be sent to the drive, and the drive firmware would be free to execute them in the optimal order.
This would be a great advantage, as it could allow a slower drive to be used (less power consumption, less heat, less chance of failure). The slowness of the drive would be offset by the two arm design, making the drive effectively twice as fast. It might be even faster than that, as seek time would be reduced to almost nothing when reading or writing simultaneously from two different places!
The only disadvantage would be increased cost of having to use a SCSI drive (including controller) versus an IDE drive, and a one-time cost of having to add support for TCQ to whatever OS that is being used.
I wonder if a two-arm drive is being planned for use in ReplayTV or TiVo units? It seems like too good of an idea to pass up....
Why not make a hard drive with two arms? (Score:2)
Re:More Speed? (Score:2, Informative)
WD made the bigger buffer because it was cheap. Adding RAM isn't hard, and with RAM prices it's cheap. Doubling the front end is a nasty, expensive business.
Re:More Speed? (Score:2)
Why not just have heads on the left and right sides of the platter? It might make for a slightly larger drive, but you can absolutely have several arms/servos at different points around the disk. Based on the size of the arms, I'd say you could have up to six with quite a bit of breathing room.
It's not hard to imagine what a nightmare the controller design would be though...
Re:More Speed? (Score:2, Informative)
It shouldn't be too difficult to add a second arm that wouldn't interfere with the primary R/W head. Of course, it does double the chances of a head crash... This appears to be the way it's being done, according to the web site [scarabs.com].
Still exploitable? (Score:4, Insightful)
Re:Still exploitable? (Score:2, Funny)
Re:Still exploitable? (Score:2)
Assuming that you've got a nice, current filesystem backup that you can send over the network to reimage the machine, sure. I think the same people who would jump through the hoops of setting up this dual-access hard drive are the same people who would have an existing, easy solution on hand anyway.
Re:Still exploitable? (Score:2)
Ahh, never mind.
Write protect switch.... (Score:2)
Having a read and a read-write cable doesn't really solve anything. You really have to take the web site down while you are updating; otherwise you need a very interesting combo of web site and file system.
Re:Write protect switch.... (Score:2)
The r/w cable is connected to the other head in the drive, and the r/w head is controlled by the secure server, one that is not accessible by anyone outside the internal network (though I'd prefer it to be the physical server).
That way there is no need to take down the site to do updates, just access the correct location from the internal network.
But.... But.....! (Score:5, Funny)
Nah, forget it.
"I mean, two heads are better than one."
Protection from defacement only, and then iffy. (Score:5, Insightful)
Re:Protection from defacement only, and then iffy. (Score:3, Interesting)
That's a nice point; however, I don't think it should have any impact on your decision whether to use this product/strategy or not.
DoS attacks are a problem that is nearly impossible to solve no matter what hardware you may have (even your tens of thousands of dollars worth of Cisco routers). This product isn't targeted at DoS attacks.
Buffer overflows and whatnot are still possible.
BUT BUT BUT! They are FAR less effective. One of the problems with overflows is that they give you access to the machine. The danger is when attackers can log in to the machine and install all their hacking tools, packet sniffers, and whatnot. That's where the real damage is done.
Now, if the ENTIRE hard disk on the web server is read only, and the machine used to make changes to the partition is on a completely separate network (perhaps not even connected to the internet at all), this could be a VERY effective way of limiting the damage done (especially if you are careful about what applications are installed on your server to begin with).
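Of course, the point of the Scarabs drive is to enforce the read-only part in hardware, but as a rough software sketch of the same idea (device names and mount points are hypothetical), the web server's fstab would look something like this, with a couple of RAM-backed scratch areas so the daemons can still run:

    # /etc/fstab on the web server: root mounted read-only, tmpfs for scratch space
    /dev/hda1   /          ext3    ro,defaults   1 1
    none        /tmp       tmpfs   defaults      0 0
    none        /var/run   tmpfs   defaults      0 0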
Database attacks would be the worst, though, since, as Timothy again points out, they must be writeable.
Finally, this is not necessarily true either. If you run a website that provides the user with realtime information (such as stock quotes or mortgage rates), most of the data is coming from some source internal to your company. You can easily make that database read-only for the web server, and separate any minimal user-info database into its own read/write database, thus further limiting the damage they can do. In fact, if you aren't doing this already you're probably doing something wrong. Here at work we have two separate copies of our database (replicated in realtime). One is linked directly to our internal accounting system and updated frequently. The other is 100% read only and ALL reports are run from that.
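Purely as an illustration of the "read-only copy" idea, and assuming a MySQL-style setup (which the poster doesn't actually specify), the web-facing account on the reporting copy would get nothing beyond SELECT:

    # hypothetical: the web server's DB account can only read the reporting copy
    mysql -e "GRANT SELECT ON reports.* TO 'webuser'@'webhost.example.com' IDENTIFIED BY 'changeme';"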
I'm not nitpicking you, or anybody in particular. This is a GREAT option. It's not perfect, yes, but if you really think about it, you can use this thing in many very, very powerful ways (and, as mentioned above, you can do some similar things by tweaking IDE cables and using CD-ROMs). The same thing can be said for Linux router distributions running off of read-only 3½" floppies or CD-ROMs! :)
Bryan
Huh? (Score:5, Insightful)
Furthermore, we can already do this same thing by mounting a network file system (say) in read-only mode. Other than being funky, what's the point?
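For example (hostname and paths hypothetical):

    # the web server mounts its document tree from an internal box, read-only
    mount -o ro,nosuid,nodev contenthost:/export/www /var/www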
Re:Huh? (Score:3, Insightful)
Re:Huh? (Score:2)
Re:Huh? (Score:2)
Re:Huh? (Score:2)
The first thing I thought while reading this was, "Damn that's a good idea.. I can't believe I've never thought of anything close to that."
There are a lot of details, but I think as a basic block to build on for security this is a very good thing.
Re:Huh? (Score:2)
Re:Huh? (Score:2)
But if he has a clue, and the read server is in the DMZ, and the write server is on the internal network, and no access is allowed from DMZ -> internal, then how exactly is he going to get there?
And even if he did, hacking root doesn't guarantee you a hack on other machines unless the other machines have the same password(s), or have a trust relationship with that machine that can be exploited.
Re:Huh? (Score:2)
I don't get your issue with rebooting. Why would you have to reboot to update static content? The other server still has write access; only the webserver has read-only access.
Re:Huh? (Score:3, Insightful)
I actually think this device has limited, but good applications. Anyone serving up static content would be a bit safer with this technology. Of course, logging traffic and such would be a bitch, but the server would only need a reboot if someone broke in.
Re:Huh? (Score:2)
Without a syslog exploit, that machine is nearly impossible to break into, and it's a great way to protect your log files. Personally I think any company doing any serious Linux server work should be doing something similar!
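For what it's worth, the classic central-loghost setup is only a couple of lines (hostname hypothetical; assumes the old sysklogd):

    # on each server, in /etc/syslog.conf: forward everything to the loghost
    #   *.*    @loghost.example.com
    killall -HUP syslogd

    # on the loghost itself, run syslogd with remote reception enabled
    syslogd -r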
Bryan
Re:Huh? (Score:2)
Correct, you don't need this... (Score:2)
If you're really smart, you're doing all of this on a NetApp filer [netapp.com], so that access speed is as good as or better than local-attached storage (and yes, that's true even though it sounds whacked... it's because of their NVRAM-based journaling filesystem, for which their NFS server code is hand-tuned).
No use for RDBMS-sites (Score:3, Insightful)
For static content, it sounds like a cool idea; even if they get root, all they can do is view things and not touch. Of course, if that compromised box is attached to an internal network with your RDBMS on it, then they can get to hax0ring the heck out of your DB; they just have to use whatever tools you have installed on the web server.
Fundamental problems here (Score:2)
Now, the sites that are the greatest/most significant targets for hackers are the ones that store personal data about the site's users, credit card data being the most valuable. So this hard drive would be useless for the servers that need it most.
Besides, even if the above weren't the case -- for instance, a banking site that (for some reason) only allowed you to read your account data, not make any transactions online -- does read-only really prevent hacking? All it means is that the hackers can't make changes to the server data; it doesn't mean that they can't steal passwords to access that data. So this might be good for the companies that use it, but it also gives a false sense of security by providing no additional protection to me, the user.
NFS? (Score:3, Interesting)
Hey before you go out and buy one (Score:5, Insightful)
A. Create your website.
B. Burn it to CD.
C. Modify httpd.conf and set the document root to the CD's mount point (see the sketch below).
Voila! And I didn't need to hire a team of Japanese researchers to figure it out, either.
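A minimal version of step C, assuming Apache with the CD mounted at /mnt/cdrom (paths hypothetical):

    # mount the pressed CD read-only and point Apache at it
    mount -o ro /dev/cdrom /mnt/cdrom

    # in httpd.conf:
    #   DocumentRoot "/mnt/cdrom/htdocs"
    #   <Directory "/mnt/cdrom/htdocs">
    #     Options Indexes FollowSymLinks
    #     AllowOverride None
    #   </Directory>

    apachectl graceful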
Re:Hey before you go out and buy one (Score:5, Informative)
It would be simple to just flip the switch, modify your files and then switch it back when you are done so no changes can be made later.
Even better, put it on an electronic keyswitch mounted on the front of the box, and you have an effective security system for things like demo stations and kiosks.
Re:Hey before you go out and buy one (Score:2, Funny)
Re:Hey before you go out and buy one (Score:2)
A. Create website.
B. Attach big box of EPROMs to a burning device and burn the website (parallel port, whatever). Unplug when done.
C. Attach big box of EPROMs to the webserver via an IDE/SCSI interface.
D. Very fast, and probably going to be fairly cheap to do soon.
Re:Hey before you go out and buy one (Score:3, Funny)
Neat idea, in all seriousness.
Re:Hey before you go out and buy one (Score:2)
There ya go. Now read it again.
Re:Hey before you go out and buy one (Score:2)
You just `grep -r null
Been doing this for YEARS. (Score:2)
Simple, Secure, and very easy to maintain.
Re:Been doing this for YEARS. (Score:2)
Re:Terminator? (Score:2)
Anyone else reminded of the silly scene where Arnie has to instruct his friends how to flip his neural net from R/O to R/W mode?
Do it for SPEED, not SECURITY (Score:2, Interesting)
What about the admin box? (Score:2)
The admins most likely have a network connection on their machine, and if so, that could be hacked.
Why not a hack that resides in RAM?
It doesn't seem that this would stop a determined attacker; they'd just do an end run around the tech. It does seem that this would be an excellent way to speed up hard drives in general... audio and video... ohhhh.
NFS can do the same for you (Score:2, Interesting)
The NFS server exports the directories with the web pages to the web server read-only and does not allow logins from the web server (and the firewall does its best to block even attempts at such). So even if the web server is fully compromised, the web pages cannot be changed.
Of course, if the web server has writable disks of its own, the cracker could make it serve a page from there instead of the real page; but the two-headed disks will have the same problem. You can only solve it by not giving the web server any writable disks at all and booting it from CD-ROM or from the network.
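A rough sketch of that export (Linux NFS server assumed; hostnames hypothetical):

    # /etc/exports on the internal content server:
    #   /export/www    webserver.dmz.example.com(ro,root_squash)
    exportfs -ra

    # and on the web server, mount it read-only as well, with no setuid binaries
    mount -o ro,nosuid contenthost.internal:/export/www /var/www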
What a lame-brained idea (Score:3, Insightful)
Aside from the obvious, there are much better uses for more than one head in a drive: multiple simultaneous seeks, faster seeks, and twice the raw read rate. The market for this should be huge. Hard drive transfer rate is the bottleneck for most tasks, including boot time. And all the while with less heat, power, and noise than the 7200+ RPM drives.
Nasty thing to do to buffer cache (Score:5, Insightful)
The OS assumes that it, and it alone, modifies the disk, and that the disk won't change state without the OS making that change. This is one of the reasons you don't want to allow raw disk access from a VMWare or DOSemu session to a mounted file system - the emulated OS will access the disk, and the host OS's file system won't know about it. Boom! Instant corrupted file system.
In the case of this double-ended drive, the web server will assume that, since it has read the disk once, it needn't read that sector again. Then the write side computer modifies the disk, and the web server won't pick it up.
I'd rather see a disk with dual heads, and the logic to allow the system to read different sectors at the same time, all kept coherent by the drive's controller, as a way to increase throughput.
But to use this as a protection on a web server is just plain dumb.
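To make the cache problem concrete: with no coherence protocol between the two hosts, the read-only box would have to do something as crude as tearing down the mount just to see new content (device and mount point hypothetical):

    # the web server's cached blocks go stale the moment the admin box writes;
    # the only blunt fix is to unmount, flush, and remount
    umount /var/www
    blockdev --flushbufs /dev/hdb1    # throw away stale buffers for the device
    mount -o ro /dev/hdb1 /var/www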
Re:Nasty thing to do to buffer cache (Score:3, Funny)
good time talking about something we know absolutely nothing about, and along comes an educated person that decides to spoil the party with a little knowledge.
Damn you and all of your technology.
Re:Nasty thing to do to buffer cache (Score:3, Insightful)
It has nothing to do with drivers. Drivers live in an island called the operating system, and if those two islands are not connected, a driver on one machine will have no clue what a driver on the other is doing - they will both think they are accessing two completely different disks. Your argument might hold against VMWare if VMWare is really juggling and managing interrupts intelligently (so the operating systems don't step on each other) - but you didn't mention anything about this, and this certainly wouldn't hold for two entirely independent machines with no shared communication mechanism (the whole point of having an insecure web server use the read-only head, while the secure internal server uses the read/write head).
"Oh, and god forbid, what about NFS and Samba? Are the machines that host the NFS/Samba shares NOT allowed to change the contents of those systems?"
I'm not sure what your point is. NFS, Samba, and virtually any other network-based sharing system uses cumbersome, slow-ass, large-grained locking protocols. The article was about two completely separate machines (no shared communication mechanism like NFS/Samba/sockets/shared mem - nothing).
Log server (Score:2)
Re:Log server (Score:3, Insightful)
Why do you think a lot of logservers print to a lineprinter? :-)
Hell, I think the upper levels of the old Orange Book *required* a hardcopy of logentries, in real time.
Well it's a clever idea but... (Score:5, Insightful)
Sure, this new drive can protect existing data from destruction, but we need protection from the wrong people reading the information that's already in a website.
Been done before... (Score:2)
When you have a storage array that supports multi-initiator SCSI, you can connect one connection of the array to the external-facing machine in read-only mode and the other connection to the internal-facing machine in read-write mode.
Read/Write Servers (Score:2)
Besides, you can always make hot-swap hard drive read-only with a jumper block.
Re:Read/Write Servers (Score:2)
Actually a capital suggestion. Make a bootable CD-ROM that has your OS, drivers, and webserver. Make a ram drive for temp space, if required. Then have it mount a read-only partition, and you're aces.
This is quite common, actually. Something gets buggered up, you reboot. New patch? Update your image, burn a new CD, and you're gone.
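A sketch of the "RAM drive for temp space" part, assuming a Linux box booted from CD (sizes and mount points hypothetical):

    # root comes off the CD read-only; give the daemons somewhere writable in RAM
    mount -t tmpfs -o size=64m tmpfs /tmp
    mount -t tmpfs -o size=16m tmpfs /var/run
    mount -t tmpfs -o size=64m tmpfs /var/log    # gone on reboot, so also forward logs to a loghost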
How would this exactly ah, work.... (Score:2)
It's the same deal with a SAN (Storage Area Network). I could easily zone two physical servers into the same LUN on the SAN and make one mount r/o and the other r/w, but unless the OS has some sort of understanding that this kind of thing is going to happen (like a clustering system), I would expect some problems on the r/o mounted system.
p.s. I'm no expert, I'm just wondering logistically how this is all going to work. It doesn't make sense to me...
p.p.s. I know there is no real security in mounting a disk r/o because someone could just remount r/w, unlike the physical solution this product provides. But in either case, I would think the issues with two boxes mounting the same file system without clustering would be a problem. If it isn't, I'd love to do something similar with my SAN just for performance and load balancing purposes...
how about performance? (Score:2)
Ahem (Score:5, Funny)
Hackers will be unable to attack Web sites protected by a new security system unless they can change the laws of physics, according to Naoto Takano
I'm working on it already. So far I've managed to get relativity down to E/2 = mc^(1.9).
And standard Earth Gravity now has a value of 8.8m/s/s.
Up.
And don't try to fill up a garbage bag anytime soon. I've been playing with volume. They're now "Garbage Bags of Holding."
Gah.. do it in software. (Score:2, Insightful)
This is so frickin' simple, the only reason this Scarabs company is even in business is because there are too many idiots running semi-important servers out there. Having your network admin'd by a clueless fuck is not something that will be solved by a piece of buzzy hardware.
Hope they try to patent it , Ive got prior art :) (Score:2)
Uses 20 MB MFM single-platter 5 1/4" drives (the tolerances were the most forgiving; I probably had the only 486 with MFM hard drives in it).
It was WAY cool, though (I had it under glass to watch it).
We took pictures and the rejected article, sealed 'em in an envelope with a signed, notarized affidavit, and had the post office postmark the flap.
I was going to patent it (this is circa 1992), but I was told by many contemporaries it was the dumbest idea they had heard.
Now if I can find it, watch out.
Small (to nonexistant) Market Segment (Score:2)
Seeing as most of the sites that would be in the market for a product like this are "advanced" (i.e., dynamic) sites, I would argue this product has a very small potential customer base.
-Pete
Nice doggy... (Score:4, Funny)
*yawn* (Score:3, Interesting)
*yawn*
Seems to me that doing this the other way around for a database (write-only head, separate read head) would be the smart way to go: store customer data, but only trusted computers can get any of it off! (Though displaying customer info might be a bit of a challenge, heh. Oh well, store the name and address on a regular drive and the valuable information on the special drive.)
How many Exp. Points do I get? (Score:4, Funny)
Re:How many Exp. Points do I get? (Score:2)
Good idea, but needs file system support (Score:2)
I once proposed it for a secure application for DOD, back when disk drives were the size of washing machines. The basic idea is to enforce one-way traffic flow. The file system on the read side has to understand how the data on disk can change, so some file system work is needed. Back then, we were more concerned about leaks from secure systems than attacks coming in, so the outside world would have had the read/write end, while the secure world had the read-only end. It turned out that unidirectional fibre-optic links were more effective.
Old News (Score:3, Funny)
good for dumb MBAs / VC and idiot security staff (Score:5, Informative)
Now, we have to explain one more thing to VCs and MBAs. All they know is there is this thing called a website that exists on a thing called a webserver.
Hasn't anyone on
Has anyone on
Let me break it down for the rest of you:
This adds exactly zero extra security for a well-run website. Most well-run sites already have separately firewall'd HTTP webservers and database machines. Some well-run sites have the application server on yet a third firewall'd network (or VLAN, etc.).
Any place worth 5 cents will not have valued data sitting on an httpd server!
This is really Ooooga-Boooga in a nutshell for VCs and MBAs trying to make a buck on security-scared VCs and MBAs running other companies.
I don't buy it.
Secure your site properly - as one other poster mentioned, for the less-funded (read: cheap/poor/startup/blah) company/service you can simply mount a CD-R with your site's static content on it. Even JSPs can live on a CD-R (as long as they're precompiled into servlets, or there's a scratch disk for the JSP container to compile them).
Some HDD have write protect jumpers. (Score:2)
Not sure if they do anymore, but IIRC hard drives already have or have had write protect tabs available. Write protect works just fine for floppy disks.
Multiple heads seems like it would be a massive extra expense compared to changing the firmware that doesn't really provide a whole lot of extra security.
Dual-ported drives are nothing new... (Score:2)
Of course, most of the older drives also had prominent lights and pushbuttons on the front that let you write-protect the drive, in some cases on a per-port basis.
What has often been missing is OS support for dual-ported drives; the lack of support is most conspicuous today. As a result, most modern OSes trying to use a dual-ported drive will have to "take turns" having the disk mounted if there's any possibility the other machine is going to do a write. If the OS doesn't even support the simple concepts of mount and dismount, then you probably cannot use it at all!
Prior Art? (Score:3, Funny)
Please send any patent inquiries to
Caesar, Emperor of Rome
123 Pantheon Drive
Industry rejected multi-head drives long ago... (Score:5, Interesting)
"The original idea of a hard disk having two heads emerged around 1985..."
Funny that the technology hasn't been implemented after all this time... Or has it?
From the StorageReview.com reference section:
"Such hard disks have been built. Conner Peripherals, which was an innovator in the hard disk field in the late 1980s and early 1990s (they later went bankrupt and their product line and technology were purchased by Seagate) had a drive model called the Chinook that had two complete head-actuator assemblies: two sets of heads, sliders and arms and two actuators. They also duplicated the control circuitry to allow them to run independently. For its time, this drive was a great performer. But the drive never gained wide acceptance, and the design was dropped. Nobody to my knowledge has tried to repeat the experiment in the last several years.
There are several reasons why it is not practical to make a drive with more than one actuator. Some are technical; for starters, it is very difficult to engineer. Having multiple arms moving around on a platter makes the design complex, especially in small form factors. There are more issues related to thermal expansion and contraction. The heat generated inside the hard drive is increased. The logic required to coordinate and optimize the seeks going on with the two sets of heads requires a great deal of work. And with hard disk designs and materials changing so quickly, this work would have to be re-done fairly often.
However, the biggest reasons why multiple actuators designs aren't practical are related to marketing. The added expense in writing specialty electronics and duplicating most of the internal control components in the drive would make it very expensive, and most people just don't care enough about performance to pay the difference. Hard disks are complex technology that can only be manufactured economically if they are mass-produced, and the market for those who would appreciate the extra actuators isn't large enough to amortize the development costs inherent in these fancy designs. It makes more sense instead to standardize on mass-produced drives with a single actuator stack, and build RAID arrays from these for those who need the added performance. Compare a single 36 GB drive to an array of four 9 GB drives: in effect, the array is a 36 GB drive with four sets of everything. It would in most cases yield performance and reliability superior to a single 36 GB drive with four actuators, and can be made from standard components without special engineering."
So, from the looks of things, it would be easier and cheaper to use single-head drives in easy-to-put-together configurations than to put two heads in the same drive. Admittedly, the StorageReview.com reference's author didn't mention setting up a read-only/read-write scheme, but the logic still works. I'd guess that it would still be easier to make a RAID container that provides read-only access on one channel and read-write on another.
Again, from the article:
"Scarabs is also working on a different version of the technology--instead of putting two heads on a hard disk, the company is connecting two SCSI interface circuits to a conventional hard disk with one head, one set to send read-only electronic signals and the other to send read/write signals."
This company already knows that their gimmick drive won't sell. No one will buy an over-priced drive with higher probability of failure over a (comparatively) cheap SCSI trick that requires no extra moving parts.
Why not use shared storage? (Score:2)
Two Headed Hard Drive? (Score:2)
Tsk tsk... Timothy!
Sort of related - more heads for performance ? (Score:2)
I know that on a modern disk the tracks are too tightly packed to do a head-per-track, but I was wondering: if you had (say) two heads on a single arm, separated by a third of the width of the disk, then any track could be read with a much smaller movement (compared to full-disk seeks) by seeking with the closest head, and when queuing up reads for an "elevator algorithm" of seeks you could also get performance gains by grabbing out-of-order data with the "trailing" head.
I realise the price goes up with complexity, and the heavier head might take longer to settle, but was wondering if this wouldn't give better performance for scattered reads for those who need it (eg servers) and don't mind paying....
Now I'm a software geek, not a hardware bod, so does anyone know why this isn't done ? (I can guess lots of reasons myself, thanks). Is it effectively just RAID striping on a single disk ?
And how about more heads (5 across, 10 across...), or 2 sets of heads on opposite sides of the disk to cut rotational latency in half (if kept in step) or
--
T
Content checking (Score:2)
I've seen a couple of projects on freshmeat that do this. Basically, a daemon sits around and watches files and if they change, they do something about it. This could be anything from logging to sounding an alarm to replacing the content.
I could have a repository sitting offline storing all of my content (or even everything... OS, databases, scripts, tools, etc etc) and have it "log in" to the servers from the inside and check everything for changes periodically. In a lot of cases, tests could be done from the outside as well (web content specifically). That machine, though physically connected, would simply shut off its interfaces and block everything unless it was doing its work.
I think a recent website hack occurred at USA Today... such a scheme could have caught the hack within minutes and even have replaced the forged content with whatever was supposed to be there.
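A bare-bones version of that check, run from the trusted box against the live document tree (paths and address hypothetical):

    # one-time: record a baseline of checksums from the known-good repository
    cd /repo/htdocs && find . -type f -print0 | xargs -0 md5sum > /repo/baseline.md5

    # periodically (e.g. from cron): verify the live tree against the baseline
    cd /mnt/live-htdocs && md5sum --quiet -c /repo/baseline.md5 \
        || mail -s "web content changed!" admin@example.com < /dev/null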
how about this? (Score:2)
I've been hearing a lot of people say "clip pin 23 to your IDE cable" to prevent writing.
Would it be difficult for a company to come up with a "plug-between" adapter that sits between the hard drive and the IDE cable? Maybe it would have a jumper on it that you could remove, or better yet, plug an extension cable with a switch onto the jumper location so you wouldn't have to open the case every time a change is made. If there was enough of a demand, these could be manufactured more cheaply than IDE cables.
I think it could be a much cheaper solution for the folks who don't need top of the line. Then again, mounting the filesystem read-only would be even easier.
Truly secure "e-business" (Score:2)
Not as convenient as the way it is currently done, but for a little ma-and-pa shop, it might be perfect.
Obsolete Read Cache (Score:2)
Can't you have the same effect by having the web server with read only permission to be the only externally accessible program?
Or just mount a read-only network drive; over dedicated gigabit Ethernet it shouldn't be that big of an issue.
Re: (Score:2)
Re:What would be the input route? (Score:2, Interesting)
But then, of course, I'm no expert with these drives and there may be other factors which I am overlooking.
Re:What would be the input route? (Score:2)
Re:What would be the input route? (Score:2)
I think it's set up so both heads access the same data, just whatever is using the read-only head can't modify anything.
Not sure if it would be best to have two separate computers access opposite sides, or have one use the drive as two parts. Probably the first.
--
You are the weakest link! BLEARGH!
Re:What would be the input route? (Score:2, Insightful)
Re:What would be the input route? (Score:2)
Re:How are cookies (session data) going to be stor (Score:2)
Re:How are cookies (session data) going to be stor (Score:2)
Re:How are cookies (session data) going to be stor (Score:2)
Re:Read head, read/write head (Score:2)
Not for nothin', but I didn't need write access to Google to get a dynamic page. As long as the user doesn't have to _ENTER_ any data that needs to be stored locally, this system works just fine.
This is _NOT_ a solution for ebanking, etrading, managing personal accounts and the like (where data needs to be WRITTEN back). For the majority of dynamic web pages, this works just fine.
(ok ok ok, yes there is still the buffer overrun issue -- and yes your URLs get messy... but at least they don't have actual _write_ access to the hard drive... as someone else already said, a quick reboot and any _possible_ -- not even probable -- damage is gone.)
Re:Sounds like a good idea to me (Score:2)
Again, works great for static web sites, but doesn't help much for dynamic.
Re:You don't need 2 physical heads to do this... (Score:2)
I personally think this is useless for other reasons: it's completely redundant with networked strategies that do the same thing, at least with competent administrators. However, I guess this thing would be good for places with less-than-stellar administration. Then they could have the web server's only connection to the data be through that safe hard drive cable. If an organization is just serving up static data, then they probably aren't sophisticated enough to afford stellar admin people, so I guess this thing has a place... nowhere near me, though.
Re:What happens if one of the heads dies? (Score:2)
Re:About the same as running it off of a CD-ROM... (Score:2)
Create a webserver that has a RAM disk big enough to hold the site. Then, at boot, dump all the contents from the CD-ROM over to the RAM Disk. Then, periodically check a few things in RAM:
- # of files served vs. # of files on the CD
- Dates modified
- Significant changes in file size
- Maybe a file comparison on a random file here and there
- Refresh the RAMdisk with what's on the CD-ROM at regular intervals like every hour.
That idea's not as well developed as I'd like, but it's food for thought.
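Something like this covers the boot-time copy and the "refresh at regular intervals" part (sizes, paths, and the cron line are hypothetical):

    # at boot: build the RAM disk and populate it from the CD
    mount -t tmpfs -o size=512m tmpfs /var/www/ramdisk
    cp -a /mnt/cdrom/htdocs/. /var/www/ramdisk/

    # hourly, from cron: force the RAM copy back to exactly what is on the CD
    # 0 * * * *  rsync -a --delete /mnt/cdrom/htdocs/ /var/www/ramdisk/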