Security

Web Browser Developers Work Together on Security 203

JRiddell writes "Security developers from the four major browsers recently met to discuss Web security. The meeting, hosted by Konqueror's George Staikos, looked at future plans to combat the security risks posed by phishing, ageing encryption ciphers and inconsistent SSL certificate practice. IE 7 is one of the first browsers to implement some of the ideas discussed, such as colour coding location bars and an anti-phishing database." From the article: "The first topic and the easiest to agree upon is the weakening state of current crypto standards. With the availability of bot nets and massively distributed computing, current encryption standards are showing their age. Prompted by Opera, we are moving towards the removal of SSLv2 from our browsers. IE will disable SSLv2 in version 7 and it has been completely removed in the KDE 4 source tree already."
  • by LostCluster ( 625375 ) * on Tuesday November 22, 2005 @05:25PM (#14095536)
    I've seen several site operators let their sites sit with SSL warning boxes because they insist on using a self-issued SSL certificate instead of paying for a major brand name label.

    Most of the time this isn't exposed to customers, but employees of the organization are trained to ignore the "This certificate was not issued by a trusted authority" warning, and I fear such people will take away that the box, with all of its technobabble, is one they should ignore at all times. That box is a last line of defense against an encrypted connection that isn't trustworthy... and I think this is a step toward the point where browsers will refuse to give SSL encryption without SSL authentication succeeding.
    • by smclean ( 521851 ) on Tuesday November 22, 2005 @05:35PM (#14095657) Homepage
      What would be nice is to see browsers handle this the same way SSH does with host key checking: when you first connect to the site, you get the pop-up about the self-signed certificate, and you accept it permanently. Then suppose you connect to what you think is the same site the next day, but instead of the real site you get a malicious impersonator, and its cert is different. Rather than a new pop-up about the new self-signed cert that looks identical to the pop-up for the old one, there should be a warning that the cert has unexpectedly changed, raised in the same panic fashion as SSH's output when a host key changes, so it really gets some attention.
      • But how do you know that you didn't get the hacker site on day 1 and the real site on day 2? Without some authentication protocol being followed, you're not secure. Sure, there's no way you're being intercepted when you're talking to the site, but you don't know what's on the other end of the line.
        • by smclean ( 521851 ) on Tuesday November 22, 2005 @05:51PM (#14095798) Homepage
          True, but I'm not trying to say that using self-signed certs offers security comparable to certs signed by real CAs. I'm just pointing out that the behavior of the self-signed cert pop-ups in browsers is lacking, and could learn from SSH.

          Self-signed certificates can be very useful for a situation where you want *more* security than plain unencrypted HTTP, but don't want to pay money for it. If you wanted to have SSL encryption on a LAN, but the server's hostname is not a real hostname on the internet, I don't think you even *could* get a real CA-signed cert for it. Self-signed certs fill a real void when it's not possible to simply use real CA-signed certs. We can't just ignore that.

          • At least in Firefox it is possible to accept the cert permanently. And it does come with a fingerprint of the cert which could be verified by another secure method.

            I'm not sure how this is worse than SSH.
        • The same way you do with SSH or PGP. You verify the fingerprint, which you received by some other channel secure enough for your purposes. That could be simply over the phone from someone whose CallerID and voice you recognize or could be a trusted courier with a locked case. It could even be IM if you're just testing how to set up SSL certs and don't really care if this one is secure or not since you're going to wipe it for a real one later.

          People have been doing it for years.

          It's not a good general purpose
          • It's not just about outsourcing key management, it's about trust. If you trust the root server, then you can trust anything signed by them. If the key isn't signed, it could be anyone doing location bar trickery and man-in-the-middle whatnot.
            • If you trust the root server, then you can trust anything signed by them

              Not necessarily (see GPG webs of trust). But sure, if I trust a root server as an introducer then I trust anything they sign.

              That still doesn't change the fact that even if I trust (say) Verisign, I have to pay them to sign a cert for me. Whereas I can trust myself, sign my certs for internal use for free, and verify fingerprints as I connect. Even for some semipublic applications it could be a reasonable course of action to distribute
            • If the key isn't signed, it could be anyone doing location bar trickery and man-in-the-middle whatnot.

              Also, this is impossible in the scenario I outlined unless the attacker can create valid keys matching your fingerprints (in which case your cryptographic hash function is inadequate and there are likely to be much more effective attacks at his disposal independent of whether you're using keys signed by a CA or not).
      • Er, you can do that in most browsers. In OSX, you can add the certificate to your Keychain to avoid being prompted about it in Safari or Mail; in Firefox, can't you hit "don't warn me again"?

        Certainly, it's just a component of the browser, and some support this feature already.
          But the point is that, in Firefox at least, you are not warned if the certificate in question has changed. Instead you are merely presented with the new certificate. I was pointing out that SSH actually throws up a giant warning and prevents you from connecting to a site if its host key has changed, and that it would be nice if Firefox would show some kind of warning that the cert has actually changed from a previously recognized cert, rather than just telling you that a new unrecognized cert was encountered.
    • The organization really should add their certificate to the accepted list of certificates in their standard system image. That way their employees wouldn't be forced to endure the constant interruptions generated by their self-signed certificate. However, for an in-house site, it's probably overkill to buy an e-commerce SSL certificate from e.g. Verisign, which can cost hundreds of dollars per year to maintain.
      Most of the time this isn't exposed to customers, but employees of the organization are trained to ignore the "This certificate was not issued by a trusted authority" warning, and I fear such people will take away that the box, with all of its technobabble, is one they should ignore at all times. That box is a last line of defense against an encrypted connection that isn't trustworthy...

      Self-signed certs aren't the problem, and you shouldn't train users to ignore warning boxes either. If you're going

      • I think currently this is simply a non-issue. I'm certainly not an expert, but I've never heard of a root certificate compromise being used, in the wild, for phishing attacks.

        And for good reason; it's simply not necessary. Users don't notice if the site has SSL or if it's at the wrong URL; why bother with faking an SSL cert and poisoning a DNS cache when you can just get one for russianhacker.com and send spam telling people to visit you at that site?
    • by ajs ( 35943 ) <ajs@ajs . c om> on Tuesday November 22, 2005 @05:54PM (#14095830) Homepage Journal
      The conflation of authentication and encryption is the bane of SSL and all SSL-based applications. The two really should be separate. Encryption buys you a certain set of guarantees and leaves you with a certain set of exposures that you already had.

      In those cases where that is sufficient, the introduction of authentication only muddies the overall value and importance of clean authentication. For example, I use TLS for SMTP mail delivery, but with a self-signed cert. This is because I don't particularly care about being intercepted, only that the casual sniffer of traffic between us will get nothing. For anything more sensitive, I don't trust SMTP anyway, no matter how encrypted and authenticated it might be.

      The same goes for LDAP. I tried to set up LDAP between my home and work for the purpose of sharing some contact info. I wanted to encrypt and filter traffic so that only I could access it, but didn't really care about it so strongly that I was willing to buy a cert. However, I still had to hack the client to accept the self-signed cert. Why? What possible value to the user (me) is there in that?
      • The conflation of authentication and encryption is the bane of SSL and all SSL-based applications. The two really should be separate.

        The problem is that encryption without authentication is really not secure, as you'd be vulnerable to a man-in-the-middle attack. Even in the examples you described, a man in the middle could present you with a self-signed certificate, and if you just click "yes" to accept a self-signed cert all the time, you possibly wouldn't notice, unless you routinely check the key fingerprints.

      • I agree completely. Consider how many transactions happen over the internet with a web site you don't know at all:

        1. You search for Product A - Google returns several relevant companies that sell Product A.
        2. You browse through each of those companies, comparison shopping.
        3. You decide to purchase from Company G (for whatever reason - you may or may not have heard of them).
        4. You notice a little lock icon and feel secure in your knowledge that your transaction is 'secure'.

        Note that the purpose of a certificate

    • How many SSL certificates have you obtained? They are almost worthless as a real means of authentication. Give me access to a webmaster's computer for 5 minutes while he's out on break, and I can have a certificate for his domain in my control and be out the door. Authentication is laughable if the companies that are supposed to be the insurers of authentication do not fully authenticate the identity for themselves. That little padlock only means party C (the CA) tells party A (joe dumbass) that party B
    • What are IE7, Konq, FF and other next gen web browsers doing to stop self-signed certs?

      A screen full of technobabble isn't enough. A warning that the site is suspicious, as used for other dodgy sites, is better.
    • and I think this is a step forward to the point where browsers will refuse to give SSL encryption without SSL authentication succeeding.

      That would be bad. Many people run home webservers for their own personal use. Using authentication is necessary if, like me, you have a large mp3 collection on the web. If you use authentication, then you really ought to use encryption. I've looked into getting a proper cert, but it's extremely expensive and a recurring fee. It's just not worth it. A lot of people use SSL

    Unfortunately for those of us who aren't large corporations, it remains very difficult or expensive to get a non-self-signed certificate. I run a personal free hosting service for friends and family, and even the cheap SSL certs are just another expense on top of the cost of colocation, maintenance and bandwidth. Obviously my users can't complain for the price they pay, but it would sure be nice to have a "real" certificate.

      CACert is a start, but unfortunately at this point in time no browsers include their root certificate.
  • Would someone mind explaining the removal/disabling of SSLv2? More importantly, what's slated to be used in place of it?
  • Replacement. (Score:5, Informative)

    by jimmyhat3939 ( 931746 ) on Tuesday November 22, 2005 @05:27PM (#14095559) Homepage
    In case anyone's curious, here [eucybervote.org] is a description of the problems with SSLv2, including some info about the newer v3 stuff.
  • by Godeke ( 32895 ) * on Tuesday November 22, 2005 @05:28PM (#14095563)
    Stop coding in C/C++ when the product will be exposed to external, uncontrolled inputs. Java, .NET, Parrot... I don't really care what gets used, but it is clear that, despite the constant cries of "C++ using the proper string libraries is as secure as virtual machines and interpreters", those who actually wield the language to make products like browsers are still failing to secure against the most basic and common flaw: the buffer overflow. Browsing web pages is *not* the kind of thing that requires "bare to the metal" coding. Yes, such a browser might be vulnerable to attacks on the virtual machine itself... but a quick look at browsers' security histories versus virtual machines' security histories makes it clear that is a tradeoff worth making.
    • With the NX bit on modern CPUs this becomes less of a problem as you can't execute code from overflowed buffers.
    • by KiltedKnight ( 171132 ) * on Tuesday November 22, 2005 @05:44PM (#14095735) Homepage Journal
      Yes, such a browser might be vulnerable to attacks on the virtual machine itself... but a quick look at browsers' security histories versus virtual machines' security histories makes it clear that is a tradeoff worth making.

      Actually, the trade-off you'll be making is more like execution speed and resource usage for apparent safety in terms of lack of buffer overflows.

      This is not a good trade-off to make. Experienced programmers working with C and C++ will know of the buffer overflow issues, especially if they've been bitten by it before. A similar one is failure to null out a string before using it, risking problems when the string you want to put in the variable is not null-terminated.

      Basically, if you remember to do a few simple things (fgets() instead of gets(), strncpy() instead of strcpy(), memset(), just to name a few), you can actually avoid a lot of these issues. Make these things habits, and it will not become an issue.

      • There is in general no reason to use C strings at all in C++, except where legacy interfaces demand them. Use std::string instead.
      • by Godeke ( 32895 ) * on Tuesday November 22, 2005 @06:10PM (#14095969)
        This is not a good trade-off to make. Experienced programmers working with C and C++ will know of the buffer overflow issues, especially if they've been bitten by it before. A similar one is failure to null out a string before using it, risking problems when the string you want to put in the variable is not null-terminated.


        Any explanation as to *why* this isn't actually being done, then? Because, as I stated, people keep *saying* this as if repeating it makes it true. Yet the reality in the field is that buffer overflows in C/C++ code are the number one source of security flaws. This claim is like saying that "people would die of fewer heart attacks if they would eat healthy foods". Um, yeah... sadly, not many actually eat healthily. Clearly, not many "experienced programmers" are putting your advice into practice either. So I will take code bloat and speed hits for the sake of not being a subscriber to the buffer-overflow-of-the-month club.
        • Suit yourself. Perhaps you'd also like to roll around town in a deluxe bubble in case you might catch an illness.

          Jesus christ you people irritate me.
        • by cnettel ( 836611 ) on Tuesday November 22, 2005 @06:37PM (#14096282)
          We have to observe a few things:

          1. There is a huge "backlog" of sloppy coding that is either exposed through changes in higher layers, or simply not discovered until now.

          2. Many of the web browser vulnerabilities lately (and historically, in IE especially) have not been related to overflowing a buffer. They have been more along the lines of fooling the browser, or its user, into believing that you are in a different security context than you really are. That is possible in any language. It just takes a single instance of a piece of code doing something "on behalf of" something with a lower security privilege, like just about anything done in a browser. There are techniques for sandboxing and walling this in, but enforcing something like the logic for when to allow scripting/DOM access between frames in a web browser is not something very well suited to the Java (or .NET, for that matter) security model. You simply have to do the hard work and do it right.

          So, in the specific space of browsers, I think that the issue of the language used is not very relevant. What IS relevant is to use a sound design, where the security decisions are made by some components, not all over the place. Componentization, whether it's done by XUL/JavaScript or by encapsulation into COM/ActiveX, is an example of this. In practice, the execution of the former has been better than the latter.

          Another point would be that moving towards Java or some other VM with interoperability issues, at least when you get into directly calling other code in-process, will force you to rewrite bad C/C++ code. I don't know if that's a bug or a feature. It would rule out buffer overflows, but it would also mean a gigantic, untested, new code base.

        As the speed of computers and VMs grows, the resource issue will fade away.

        I am not saying this will happen soon, but when you purchase a home PC from Dell and it comes with a base configuration of a 64-bit processor and a 2-gig memory chip, I doubt the cost of even a slow Java VM would make much of a difference to the average user.

        C will probably never die though; what else are we supposed to write those OSs and VMs in? :)

        • I tried to install Java on my computer. I gave up when I discovered that Sun won't let me install it directly. I have to make special effort to agree to their license. FreeBSD-ports cannot include it directly. I can deal with it, but it isn't worth the bother.

          However things get worse when you are not a personal user. At work we are interested in an open-source project written in Java, but because of the license we cannot use it. (We want to ship it as part of an embedded system, the only way to i

      • fgets() instead of gets(), strncpy() instead of strcpy(), memset(), just to name a few)

        What gets me is, why are these known "gotchas" allowed to continue to draw breath? As soon as the vulnerability is discovered, it should not get past any new release of a compiler, no matter what warning level. To heck with backwards compatibility: if my code uses a known vulnerability, it is broken and I should fix it.

    A bad programmer can be equally incompetent in any language.

      A few security holes I've found:

      A system where you could gamble online credits: you bet n credits and a number between 1 and 5 was generated; if you guessed the number, you won 4 times your bet, otherwise you lost your bet, with maximum winnings of 100 credits a day. I bet -1000000000 credits, so when I lost I gained 4000000000 credits (which errored out and dumped me to a command prompt, from which I could read/edit the password file).
      • well, I see Slashdot knows how to strip HTML tags =) <SCRIPT> tags specifically.
      • by Godeke ( 32895 ) *

        A bad programmer can be equally incompetent in any language.

        And you think these guys would have done *better* in C/C++? Surely a bad coder can wreck any project. However, Java or C# allow a *competent* programmer to avoid *by default* many pitfalls that a C/C++ programmer must remain on guard for. C/C++ has its uses, but I believe it is often selected for projects where low-level access to the OS and manual memory management isn't actually a requirement.

        if (loser) { credits -= bet }

        where bet has not been bounds checked.

  • Is to not have the web browser interfaced with the kernel/operating system. A stand-alone browser application (a la K-Meleon, Firefox, etc.) immediately stops the devs having to worry about other security overheads (reference IE, which is built in (badly) to handle all sorts of stuff that it shouldn't even touch).
  • "IE 7 is one of the first browsers to implement some of the ideas discussed such as colour coding location bars"

    I like how this person uses "one of the first" in a positive sense.
  • by mustafap ( 452510 ) on Tuesday November 22, 2005 @05:30PM (#14095595) Homepage
    It's nice to see Microsoft participating in the event. I was surprised; I didn't think they sat round tables with open source developers. Does this happen in other areas of development?
    • You might want to read about Blue Hat [com.com]--in recent years, MS has made a strong effort to make closer ties with (and, hopefully, learn something from) independent security researchers.

      It's not quite the same as meeting with open source projects, but it's a start.
    • It does sound counterintuitive at first, but when you think about it, Microsoft doesn't make any money off IE, so working together with the other browser developers is a good way to ensure all Windows browsers get better security. Helping Linux browsers to improve doesn't really matter, because Linux already has an image of being extremely secure, so collaborating with open-source developers is a win-win situation from a PR and development perspective.
  • by c_fel ( 927677 ) on Tuesday November 22, 2005 @05:31PM (#14095605) Homepage
    I see on the screenshots that IE7 is gonna use a yellow location bar to indicate a suspicious web site. Ironically, in Firefox, that same color indicates a secured site. I'm sure somebody will be fooled someday...
    • to me, yellow is almost orange or on the way to red, whereas green to me says secure.. I think IE is on the right track and firefox is the one that needs to change.
      • by Ark42 ( 522144 ) <slashdot@morpheu s s o f t w a r e . net> on Tuesday November 22, 2005 @06:41PM (#14096309) Homepage
        Firefox has had yellow=secure for quite a while, and IE7 is not yet out. Obviously it is IE that needs to change, then. The yellow color comes from the yellow/gold lock icon that almost all browsers display someplace, usually unnoticeable. Now the golden lock is displayed on the right-hand side of the location bar in both Opera and Firefox, and the background color is yellow in both of them. Firefox makes the entire location bar yellow, while Opera has a yellow-outlined, yellow-shaded box with the lock icon and the name the certificate is listed under.
        Clearly yellow (gold) is the de facto standard for "secure" and IE7 is just plain wrong to use green instead, and make gold mean something bad.
        • by Klivian ( 850755 ) on Tuesday November 22, 2005 @08:16PM (#14096964)
          Firefox has had the yellow=secure for quite a while,

          The same goes for Konqueror, but it does not really matter that much. In this case the IE7 approach makes more sense, so they agreed to change it. Besides, calling yellow the de facto standard is not correct, as de facto would be what IE5 and IE6 use.
        Unfortunately I can't find it right now, but I was recently reading something on Jakob Nielsen's useit.com [useit.com] about how yellow was a good attention-getting color (it was something to do with in-page popups, and that yellow was the best background color).

        Anyway, I'm guessing that is what the FF people were thinking when they first implemented it: basically, that yellow is pretty well standardized as a "look at me!" color. However, after having a rational discussion with anybody in their right mind, you should be
    • by LostCluster ( 625375 ) * on Tuesday November 22, 2005 @05:43PM (#14095722)
      Which is why they held this meeting in the first place. Everybody's got to agree on little things like color schemes for there to be cross-browser compatibility.
    • I think you're on to something bigger there.

      The colour coding implies that colour x means safe. What happens when the ability to display colour x is compromised?

      I can imagine the average user now:
      "Well, the site is green after all, so it must be safe."

      Having computers make judgements is a serious problem in general, but especially in security situations. I know the best method of showing the user everything that is known and letting them make a decision for themselves doesn't work very well in the field.
      • Yes, but they need to inform the user in some way.

        What if they instead used a popup message? A hack could disable the popup, or change the message.

        An icon? The icon could be changed by a hack too.

        Since I think we've seen no special browser exploits this way recently besides the Mozilla XUL skin exploit, I don't think this is such a big deal, especially for browsers not implementing online installable skins.
    • Ironically, in Firefox, that same color indicates a secured site.

      More importantly, it has for something like a year and a half; same with Camino (uses different code to do it, didn't get it automatically from the Fx update that introduced it).

      Memo to submitter: when "one of the first" means "fourth or fifth in a field of about six", you need to find a different phrase, or stop accepting paychecks with Ballmer's signature on them.

  • by tcopeland ( 32225 ) * <tom@NoSPaM.thomasleecopeland.com> on Tuesday November 22, 2005 @05:36PM (#14095667) Homepage
    ...developers need to be aware of how to write secure server-side code. Joseph Hemler's book Network Security Tools [amazon.com] has a chapter about finding security flaws with static analysis tools like PMD [sf.net].
  • Phishing (Score:2, Insightful)

    by Anonymous Coward
    Can we find a better name than phishing? Most people don't get it, and wave it off as just another overcomplicated word that people who think they are smart use. They will ignore an anti-phishing filter because they just don't know what it is.

    We need a non-geek term for this, something that is clear and easily understandable. "Malicious Websites" or an "Identity Theft Filter", just not phishing.
      We need a non-geek term for this, something that is clear and easily understandable.

      Hey, you're absolutely right! And I also think that world hunger is caused by a nutritional deficit awareness gap, in which the adequate expectations paradigm failed to be impacted by the proactivity-focused information-enabling solution.

  • by dada21 ( 163177 ) * <adam.dada@gmail.com> on Tuesday November 22, 2005 @05:38PM (#14095686) Homepage Journal
    I'm happy to see that we're looking at an important part of a free competitive market: voluntary cooperation for better competitive products.

    The security enhancements we'll see that come out of these (and future) discussions will help all users yet also increase competitiveness in other areas. We didn't need a Congress or government body to force regulations, they're occurring out of customer need.

    Note that government could create regulations but we all know that those regulations come too late and can never adapt to current and future ever-changing needs.

    I read a great article [lewrockwell.com] today about the historical growth of the Net because of the lack of regulations and taxes.
  • Confusion (Score:5, Interesting)

    by fishybell ( 516991 ) <fishybell@hCOMMAotmail.com minus punct> on Tuesday November 22, 2005 @05:38PM (#14095688) Homepage Journal
    Maybe it's just me, but an even bigger problem arises out of color coding the address bar: Confusion.

    Many users have significant problems when anything changes in their computer experience; my father, for example. I tried moving him over to Firefox so that he could stay away from spyware et al., but he couldn't make the move because he couldn't navigate the user interface anymore. This man is no dullard either. He taught me to program when I was 8, has a PhD in (if I remember correctly) biology, pharmacology, or physics, teaches microbiology, and is an associate dean at a world-class university. For all of his smarts, he has had problems with computers ever since he was weaned off of DOS and onto Windows 3.1. After many years of training he's finally to the point where he can work successfully in an environment as long as nothing ever changes.

    Skip ahead to Windows XP service pack 2. Automatic updates are now on. He's been trained to allow the updates to happen, but only after I get a phone call asking me if they're ok. Unfortunately, updating sometimes means that I have to spend an hour or so teaching how to burn cds, how to switch between home/work networks, how to play music, etc. at regular intervals. I rue Microsoft not for their lax security (well, not just for their lax security), but for their ever present desire to "upgrade" their interfaces to make them "easier."

    At his work they upgrade computers relatively often. The day will come when he will have to call me each time he goes to a website with the "wrong" color.

    • Maybe training your father to press F1 instead of calling you might be worthwhile.
    • by shis-ka-bob ( 595298 ) on Tuesday November 22, 2005 @06:11PM (#14095979)
      How can you not know what field his PhD is in? I can assure you that my kids know that mine is in Physics (and grandpa's is in Music). Pharmacology and Physics are quite separate fields (although I guess that a French physicist is a physicien, and we all know that pharmacologists and physicians work together).

      My kids are sick and tired of hearing my stories from grad school. There are only so many things you can do with liquid nitrogen to stave off the boredom of collecting data. They know all my rubber-nail-in-a-2x4, frozen cricket (they really do stop chirping if they are cold enough) & exploding pop bottle stories (a 2 liter plastic bottle with a few tens of milliliters of LN will completely vaporize if you put on the cap and wait for the LN to evaporate. It leaves a cloud of frozen water vapor too.) By now, you probably understand why they are sick of my stories.

    • . He taught me to program when I was 8, has a PhD in (if I remember correctly) biology, pharmacology, or physics, teacheds microbiology, and is an associate dean at world-class university. For all of his smarts, he has had problems with computers ever since he was weened off of DOS and onto Windows 3.1.

      We need a Knoppix Live CD over here! STAT!!!
      • Why, so you can enjoy the look of sheer panic on his face? Dude, have you ever booted into Knoppix? It's like a hacker's wet dream. If you want a user with a computer phobia (that's what it sounds like to me) to switch to linux, you've gotta give them something somewhat familiar. The Ubuntu LiveCD might be a better place to start.

        I say leave him with Windows, if that's what he's most comfortable in. Personally, I would do my best to lock down his machine other ways -- forget about automatic updates;

          well, given that he sounds like he preferred command-line OSs over GUIs (he had no problem using DOS), I would expect that he would be right at home in the feature-rich environment of the average *nix cli.

          hell, give him w3m, links or some similar cli browser and presto...
    • The ability to react to a computer in a fluid manner is becoming a required skill. Learn to recognize a trustworthy change versus something that's going to bite your box in the butt, or get left behind. The IM generation can (often) do it, so it doesn't seem to me to be an issue of smarts per se. I don't know what it is about getting old, but people seem to lose their ability to learn. To adapt. I think that's the problem here, although even here I have to question the smarts a little if Firefox is differen
    • He doesn't call you now when he puts his bank account information into a fake banking site.

      Would you rather have him call when the location bar is a funny color, or simply never get the call until his bank account is wiped out?
  • Not new ideas. (Score:3, Informative)

    by StrawberryFrog ( 67065 ) on Tuesday November 22, 2005 @05:48PM (#14095769) Homepage Journal
    Ideas such as colour coding location bars and an anti-phishing database.

    Do they mean like in the Netcraft anti-phishing toolbar [netcraft.com]?
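A toolbar-style anti-phishing check boils down to looking up the current URL's hostname in a database of reported phishing sites. Here is a minimal sketch, assuming a purely local blocklist; the hostnames and the parent-domain matching rule are illustrative assumptions, not Netcraft's actual mechanism:

```python
# Minimal sketch of a client-side anti-phishing blocklist lookup.
# The blocklist contents below are made-up examples.
from urllib.parse import urlparse

PHISHING_BLOCKLIST = {
    "chase-online-banking.com",   # hypothetical flagged hosts
    "paypa1-secure-login.net",
}

def is_flagged(url: str) -> bool:
    """True if the URL's hostname, or any parent domain of it, is blocklisted."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check the host itself and every parent domain suffix.
    return any(".".join(parts[i:]) in PHISHING_BLOCKLIST
               for i in range(len(parts)))

print(is_flagged("http://chase-online-banking.com/login"))  # True
print(is_flagged("https://www.chase.com/"))                 # False
```

A real toolbar would of course query a remote, frequently updated database rather than a hard-coded set, but the lookup itself is this simple.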
  • Err....four? (Score:3, Insightful)

    by Anonymous Coward on Tuesday November 22, 2005 @05:49PM (#14095775)
    OK, raise your hand if you think there's a clearly identifiable "four major web browsers." As in, when you hear the phrase "representatives of the 4 major web browsers" you know exactly which 4 are being talked about.

    OK, now how many of you had Konqueror as one of the 4?

    C'mon--I like Konqueror as much as the next user, but beyond IE and Firefox there are a large number of minor browsers out there. Mozilla, obviously, unless you lump that with Firefox as I do. Then probably Opera. And then, what, Safari? Konqueror is maybe 6th or 7th. So how "cross browser" is this?
    • Stats from the various sites I maintain suggest IE as #1, Mozilla/Firefox/etc. as #2, Safari as #3, and Opera as #4. Though I've got one site which is focused on alternative browsers where Firefox is #1, followed by IE, Opera, and Safari.
    • Re:Err....four? (Score:5, Informative)

      by Bogtha ( 906264 ) on Tuesday November 22, 2005 @06:03PM (#14095906)

      There's four major rendering engines. Trident (Internet Explorer on Windows), Gecko (Mozilla, Firefox, etc), Presto (Opera), and KHTML (Konqueror, Safari, Omniweb, etc).

      Konqueror is important because it's the original branch of the KHTML rendering engine, used in a number of browsers, throughout KDE, and sitting on the desktops of millions of Apple users as part of Safari.

      So while it's slightly inaccurate to say that Konqueror is one of the four major web browsers, what was meant, and what is actual fact, is that Konqueror's rendering engine is one of the four major rendering engines.

  • by Agelmar ( 205181 ) * on Tuesday November 22, 2005 @06:09PM (#14095965)
    I've seen a number of posts about encryption being the problem. It's not. Yes, it is possible to crack some older algorithms with distributed botnets, and yes, self-signed certificates pose a problem, but no, these are not the real problems. The real problems facing users (by this I mean the problems causing financial damage to consumers and companies) come from attacking the user and his/her environment, not attacking the encryption. When was the last time you saw someone brute-forcing the decryption of a session, with the purpose of obtaining the user's information? This makes great stuff for movies where we're trying to crack into an Evil Foreign Government or an ultra-sophisticated criminal, but in real life this is not the threat.

    The threat that browsers need to address is that their *users* and their users' *environments* are being attacked. Phishing attacks don't target weak encryption protocols. Heck, most don't even bother setting up an SSL-enabled phishing site, because people don't look for encrypted sessions in general. Phishing attacks target the user by attempting to fool the person into believing that they are at the actual site. Ask yourself - would your mother know that chase-online-banking.com is not the real address for Chase's online system? (Phishing trends show that phishers are increasingly using name-based attacks, as opposed to IP-based URLs.)

    As for attacking the environment, keyloggers and malware in general are exploding in popularity. Again, this is not a problem with the encryption protocols used for securing sessions, rather it's the user's environment being attacked. One must remember that browsers don't run in a vacuum - they have a user and an environment. Using 256-bit AES encryption is great, nifty, and cool, but if my mother's computer has a keylogger installed and I decide to do some e-banking while visiting for the holidays, well then I've got a problem.

    People need to re-evaluate security in the context in which these applications are run, and stop thinking that simply increasing key length or swapping cipher algorithms will solve the problem. It won't. Our problem is that security isn't usable, it isn't intuitive, and until we make it so we will continue to have these problems.
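The name-based attack described above (chase-online-banking.com) can be partially caught with simple heuristics rather than stronger crypto. A toy sketch, assuming a short list of known-good domains; the brand-substring and similarity rules (and the 0.8 threshold) are illustrative assumptions, not any browser's actual method:

```python
# Toy heuristics for name-based phishing: flag domains that embed a known
# brand name or closely resemble a legitimate domain's spelling.
from difflib import SequenceMatcher

LEGIT_DOMAINS = ["chase.com", "paypal.com", "ebay.com"]  # assumed whitelist

def suspicious(domain: str) -> bool:
    domain = domain.lower()
    for legit in LEGIT_DOMAINS:
        if domain == legit:
            return False                  # the real site itself
        brand = legit.split(".")[0]
        if brand in domain:
            return True                   # brand embedded in a longer name
        if SequenceMatcher(None, domain, legit).ratio() >= 0.8:
            return True                   # near-miss spelling, e.g. paypa1.com
    return False

print(suspicious("chase-online-banking.com"))  # True (embeds "chase")
print(suspicious("paypa1.com"))                # True (close to paypal.com)
print(suspicious("chase.com"))                 # False
```

Heuristics like these generate false positives (plenty of legitimate sites embed brand names), which is exactly why the usability question raised above matters more than the cipher suite.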
  • by Misagon ( 1135 ) on Tuesday November 22, 2005 @08:55PM (#14097194)
    I read a study recently saying that most phishing web sites don't live longer than a week...
    A database full of stale entries is not going to do any good.
    I figure that Microsoft will have to keep a staff of around a dozen people working day and night to check each flagged URL as soon as it comes in; otherwise it is not going to be very effective.
    • You're actually a bit off in your timeline, in that 'average' is really a poor [misleading] statistic to use for this. The data is extremely bimodal. For phishing sites hosted by ISPs in the U.S. that are reported on a weekday other than Friday during business hours and/or name-based attacks (registering a domain that looks like a legitimate domain), the average turnaround is around 40 hours. For phishing sites first reported and/or launched on a Friday afternoon, and hosted in China, Singapore, or certain
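Those turnaround numbers imply that a useful blocklist has to expire entries aggressively as well as ingest them quickly. A small sketch of time-based pruning; the 48-hour TTL is an arbitrary assumption, not anyone's published policy:

```python
# Prune blocklist entries older than a fixed TTL, since phishing sites
# are typically dead within days. The 48-hour figure is an assumption.
import time
from typing import Optional

TTL_SECONDS = 48 * 3600  # assumed useful lifetime of an entry

def prune(blocklist: dict, now: Optional[float] = None) -> dict:
    """Return only entries reported within the last TTL_SECONDS.

    blocklist maps URL -> report timestamp (seconds since the epoch).
    """
    if now is None:
        now = time.time()
    return {url: t for url, t in blocklist.items() if now - t <= TTL_SECONDS}
```

Expiry keeps the database small and the false-positive rate down once a compromised host is cleaned up, at the cost of re-verifying sites that stay up longer than the TTL.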

"How to make a million dollars: First, get a million dollars." -- Steve Martin

Working...