Security

Saying 'No' to an Executable Internet 306

Dylan Knight Rogers writes "Applications are constantly being ported for usage on the Internet - either for a viable escape from expensive software, or because it's often helpful to have an app that you can access from anywhere. Operating systems that run from the Web will be a different story."
This discussion has been archived. No new comments can be posted.

Saying 'No' to an Executable Internet

Comments Filter:
  • by Anonymous Coward on Sunday February 12, 2006 @10:48AM (#14699694)
    What a submission! This is Sunday morning on Slashdot at its finest.
  • Huh? (Score:5, Informative)

    by Otter (3800) on Sunday February 12, 2006 @11:01AM (#14699752) Journal
    This reads like the author took twelve completely unrelated +3 comments from Slashdot articles and stuck them together.

    Basically, his point is that Lunix rulz and Microsoft is teh sux and such will continue to be the case with AJAX apps. That doesn't make sense even if you concede all the author's idiotic premises.

  • by msobkow (48369) on Sunday February 12, 2006 @11:12AM (#14699804) Homepage Journal

    The very first iteration of what eventually became Unix was a simple task switcher to allow a game to run at the same time as actual work. Technically it wasn't multi-user, because there was only a system console.

  • Re:errrr.... (Score:5, Informative)

    by Zeinfeld (263942) on Sunday February 12, 2006 @11:17AM (#14699834) Homepage
    UNIX was first implemented seriously on the PDP-11/20, which is best classed as a minicomputer. And while the system did indeed use terminals of a sort, they were dumb terminals. It's really not any different than how the keyboard, mouse and monitor are connected to your PC now.

    It would have been quite a trick to design an operating system based on the principles of the network protocols later developed on it.

    That said, the dumb terminal to mainframe concept was a big part of the UNIX legacy. UNIX was designed from the start as a multi-user environment for the individual user. The kernel supported multiple users but the tasks it was designed for were single user tasks, mostly programming. UNIX was a reaction against mainframe computing of its day.

    The author is completely wrong when he says that Windows did not have any security until 2000. Windows NT was designed from the outset to obtain Orange Book B2 certification. It would take a huge amount of work to get Linux to meet those criteria. It is generally considered to be 'B2 equivalent', but that's like saying that being ABD is the same thing as having a PhD; the only people who say that are ABD grad students.

    Likewise, the author is completely wrong about Microsoft being likely to take the O/S in that direction. Unix and VMS led the minicomputer revolution. Gates led the microcomputer revolution, which was even more against the central processing store model of computing. If you look at all the early microcomputers, you will find that they all ran Microsoft BASIC. When IBM went to Microsoft while it was building the PC, it was the BASIC they wanted. They only demanded a bootstrap loader when Kildall refused to deal with them for CP/M.

    The company that tried to make the network the operating system was Netscape. They failed for several reasons, the most important of which was that you can't hire 5,000 world-class engineers in a year, and even if you could, you would not end up with a world-class team. MarcA's policy of never hiring anyone he thought might be smarter than him didn't help either.

    The company that seems to be making the attempt now is Google. They might make it; at this point it is unclear.

  • 404 - Page not found (Score:2, Informative)

    by limegreen (516173) on Sunday February 12, 2006 @11:44AM (#14699955) Homepage Journal
    Judging by all the negative comments, the flaming article has been pulled.
  • by Gobelet (892738) on Sunday February 12, 2006 @11:53AM (#14700007)
  • Re:Dumb Idea? (Score:2, Informative)

    by croddy (659025) on Sunday February 12, 2006 @01:52PM (#14700551)
  • by Evro (18923) <evandhoffman.gmail@com> on Sunday February 12, 2006 @02:10PM (#14700620) Homepage Journal
    According to these pages: http://www.osnews.com/user.php?uid=2668 [osnews.com] , http://jenett.org/ageless/1990s/ [jenett.org] Dylan Knight Rogers is 16 years old. That would explain many of the criticisms in this thread. Both his site and his "blog" are now giving 404 errors so I can't even read the article myself.
  • Re:Dumb Idea? (Score:5, Informative)

    by merreborn (853723) * on Sunday February 12, 2006 @03:15PM (#14700880) Journal
    I proposed the same idea to my father when I was in high school. The thing is, Internet latency is very, very high compared to the latency involved in hitting your own processor/memory. This ends up severely limiting the type of applications you can run in this sort of setting.

    Botnets are an interesting example of this sort of computing, though. In fact, botnets are the closest thing we have to this sort of idea being implemented right now.

    Anyway, the point is that real-time applications such as gaming wouldn't really see much benefit from this. By the time someone else could execute part of your processing and send the result back to you, your character is already a foot from where it was when you requested the work, and the old work is now completely irrelevant. What's more, I can't think of a single use for GPUs that *isn't* realtime -- distributed GPU use over the net is almost certainly 100% impractical. It's not uncommon for gamers to play at and above 100 FPS -- that leaves your system 10 milliseconds to render every frame; you can hardly ping someone a block away in that time -- severely limiting the number of computers available to your 'cluster'. Also, latency is NOT guaranteed on the net, much less successful, in-order delivery.

    It works for apps like SETI@home because SETI just sends you a chunk of work every few minutes or hours, and doesn't particularly care if or when you finish it. There's no 10 ms deadline on SETI -- the project will finish when it finishes.

    Internet wide cluster computing is most suitable for applications that are primarily about converting a very large input (years of SETI data, protein folding data, massive mailing lists for bot nets) into very large output (analyzed data, folded proteins, spam) over a long, unpredictable period of time.
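    The frame-budget arithmetic in the comment above can be sketched in a few lines. The round-trip times used here are illustrative ballpark assumptions, not measurements:

    ```python
    # Illustrative latency-budget check for offloading real-time work over a network.
    # The RTT figures below are assumed ballpark values, not measurements.
    fps = 100
    frame_budget_ms = 1000 / fps      # time available per frame at 100 FPS

    lan_rtt_ms = 1.0                  # optimistic same-building round trip
    wan_rtt_ms = 50.0                 # common cross-country round trip

    print(frame_budget_ms)            # 10.0 -- the whole per-frame budget in ms

    # A single WAN round trip already exceeds the entire frame budget,
    # before any remote computation has even started.
    print(wan_rtt_ms > frame_budget_ms)   # True
    ```

    Batch workloads like SETI@home dodge this entirely: their "deadline" is hours or days, so the round-trip time is noise.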
  • Re:errrr.... (Score:3, Informative)

    by mallardtheduck (760315) <stuartbrockman@@@hotmail...com> on Sunday February 12, 2006 @03:19PM (#14700901)
    Likewise, the author is completely wrong about Microsoft being likely to take the O/S in that direction. Unix and VMS led the minicomputer revolution. Gates led the microcomputer revolution, which was even more against the central processing store model of computing. If you look at all the early microcomputers, you will find that they all ran Microsoft BASIC. When IBM went to Microsoft while it was building the PC, it was the BASIC they wanted. They only demanded a bootstrap loader when Kildall refused to deal with them for CP/M.

    Sooo many misconceptions...

    Microsoft BASIC was one of many. There was no clear leader.
    Kildall's DRI was the "Microsoft" of the early (8-bit) micros; CP/M was the prevalent OS, mainly because WordStar, dBASE II, and many other business apps ran on it.
    IBM shafted DRI, not the other way around.
  • Re:errrr.... (Score:4, Informative)

    by pthisis (27352) on Sunday February 12, 2006 @03:23PM (#14700925) Homepage Journal
    Windows NT was designed from the outset to obtain Orange book B2 certification

    This is true, but:

    1. Windows NT was only certified B2 secure when not connected to a network.

    2. Orange book isn't related to the type of security we're talking about; the certification says nothing about whether there are bugs in the system allowing remote attacks or even local privilege escalations. It only talks about how the system is nominally designed, and even there it's more about logging who does what on the system and forbidding things like copying and pasting between applications running at different security levels.
  • Re:errrr.... (Score:2, Informative)

    by mophab (137737) on Sunday February 12, 2006 @03:37PM (#14700977)
    A few issues with your mostly correct posting.

    One is that there is a HUGE difference between Linux and Unix, and the original poster said Unix. The design of networking and Unix was an iterative process, and the first versions of Unix had only part of what is currently called Unix. So what people currently call Unix was designed around networking.

    As far as network operating systems, it was Sun that first had the motto "The Network is the Computer." And this was for their Unix system, long before Mosaic, and the first HTTP RFC.

    There were several Unix implementations that achieved Orange book B2 compliance long before anything Microsoft produced did. Furthermore, Micro$oft is likely to take the OS in any direction that will make them money. They seem to like the idea that people have to pay for a network service on a time/subscription basis, whereas people usually buy an OS for a given piece of hardware only once.
  • by shmlco (594907) on Sunday February 12, 2006 @03:55PM (#14701053) Homepage
    The author apparently pulled his own work out of embarrassment, and understandably so. The article was a badly written opinion piece flaming Microsoft and praising Linux. Imagine that. Almost no mention of the "Executable Internet" at all.

    Reading it is a waste of time, but here's the mirror [mirrordot.org] for those interested.

  • Re:rm -rf /../* (Score:4, Informative)

    by Anonymous Coward on Sunday February 12, 2006 @04:16PM (#14701138)
    In case your question is serious, the root dir parents itself.
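    On a POSIX system this is easy to verify; a minimal check (assuming a Python interpreter is handy):

    ```python
    import os

    # On POSIX filesystems the root directory is its own parent,
    # so "/.." resolves straight back to "/".
    print(os.path.realpath("/.."))   # "/"
    ```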

Imagination is more important than knowledge. -- Albert Einstein
