The Problem of Search Engines and "Sekrit" Data
Nos. writes: "CNet is reporting that not only Google but other search engines as well are finding passwords and credit card numbers while doing their indexing. An interesting quote from the article, from Google: 'We define public as anything placed on the public Internet and not blocked to search engines in any way. The primary burden falls to the people who are incorrectly exposing this information. But at the same time, we're certainly aware of the problem, and our development team is exploring different solutions behind the scenes.'" As the article outlines, this has been a problem for a long time -- and with no easy solution in sight.
A symptom of poor programming... (Score:4, Insightful)
Re:A symptom of poor programming... (Score:5, Interesting)
"Index of
"Index of
"Index of
"Index of
"Index of
Re:A symptom of poor programming... (Score:5, Informative)
Personally I like to make sure that there is an index.html in every web-accessible directory.
And don't forget "index of" as a search term.
Re:A symptom of poor programming... (Score:2)
This is a good reason not to let developers have administrator access to any boxen they are developing on.
Re:A symptom of poor programming... (Score:5, Interesting)
-Legion
Re:Nice work, Legion303. (Score:3, Funny)
Re:A symptom of poor programming... (Score:4, Funny)
Re:A symptom of poor programming... (Score:3)
Re:A symptom of poor programming... (Score:2, Informative)
The October 2001 issue of IEEE Computer has some articles on security, and the first article in the issue is titled "Search Engines as Security Threat" by Hernandez, Sierra, Ribagorda, Ramos.
Here's a link [computer.org] to it.
How can this happen? (Score:4, Redundant)
Search engines only index pages they can reach by following links. Given this premise, the only way that Google or another search engine could find a page with credit card numbers or other 'secret' data would be if that page was linked to from another page, and so on, leading back to a 'public' area of some web site.
That is to say, the web-indexing bots used by search engines cannot find anything that an ordinary, very patient human could not find by randomly following links.
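To make that concrete, here's a minimal sketch of such a bot in Python (hypothetical seed URL; no politeness delays or robots.txt handling, so don't point it at anything real):

import re
import urllib.request
from collections import deque

seen, queue = set(), deque(["http://example.com/"])  # hypothetical seed page
while queue:
    url = queue.popleft()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    except Exception:
        continue  # dead link, just move on
    # index the page text here, then follow every absolute link found on it
    for link in re.findall(r'href="(http[^"]+)"', html):
        queue.append(link)

Everything it ever sees is reachable from that one seed by clicking links, which is exactly the point.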
How this happens (Score:5, Informative)
Suppose I have a secret page, like:
http://mysite.com/cgi-bin/secret?password=admin
Suppose this page has some links on it, and someone (maybe me, maybe my manager) clicks them to go to another site (http://elsewhere.com/).
Now suppose elsewhere.com runs analog on their web logs, and posts them in a publicly accessible location. Suppose elsewhere.com's analog setup also reports the contents of the "Referer" header.
Now suppose the web logs are indexed (because of this same problem, or because the logs are just linked to from their web page somewhere). Google has the link to your secret information, even though you never explicitly linked to it anywhere.
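To spell out the leak: when someone follows a link from the secret page, the browser volunteers the secret URL in the request. A sketch of the equivalent request in Python, using the URLs from the example above:

import urllib.request

# clicking a link on the secret page sends the secret URL as the Referer header
req = urllib.request.Request(
    "http://elsewhere.com/",
    headers={"Referer": "http://mysite.com/cgi-bin/secret?password=admin"},
)
urllib.request.urlopen(req)  # elsewhere.com's access log now records the secret URL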
One solution is to use proper HTTP access control (as crappy as it is), or to use POST instead of GET to supply credentials (POST doesn't transfer into a URL that might be passed as a referrer). You could also use robots.txt to deny indexing of your secret stuff, though others could still find it through web logs.
Of course, I don't think credit card info should *ever* be accessible via HTTP, even if it is password protected!
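For reference, the "proper HTTP access control" mentioned above is only a few lines of Apache config; a minimal sketch (hypothetical paths, and remember Basic Auth sends credentials essentially in the clear unless you run it over SSL):

# .htaccess in the directory to protect (needs AllowOverride AuthConfig)
AuthType Basic
AuthName "Private area"
AuthUserFile /home/someuser/.htpasswd   # keep this file outside the web root
Require valid-user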
Re:How this happens (Score:2, Informative)
Re:How this happens (Score:2, Troll)
Then it's a pretty crappy secret. Plaintext passwords sent via GET are weaker than the 40-bit encryption in a DVD or something.
"Suppose this page has some links on it, and someone (maybe me, maybe my manager) clicks them to go to another site (http://elsewhere.com/)."

If the page is really truly supposed to be secret, then it won't have external links, and you'll filter it out of your web logs too. Or you could just suck.
Google doesn't kill secrets. PHBs and MCSEs kill secrets.

Directory searches (Score:4, Insightful)
So if http://credit.com/ has a link to http://credit.com/signin/entry.html then these engines will also check http://credit.com/signin/ - which will, if directory indexes are on and there is no index.html page there, show all the files in the directory. In which case http://credit.com/signin/custlist.dat - your flatfile list including credit cards - gets indexed.
So if you're going to have directory indexing on (which there can be valid reasons for) you really need to create an empty index.html file as the very next step each time you set up a subdirectory, even if you only intend to link to files within it.
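If you'd rather not rely on remembering the index.html, Apache can also simply be told never to generate listings; one directive in the relevant .htaccess or server config does it:

# disable automatic directory listings entirely
Options -Indexes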
Oh Yeah? (Score:4, Funny)
This is very serious. Could you please post the exact search engines and query strings so I can make sure my information isn't there?
Knunov
Re:Oh Yeah? (Score:5, Funny)
By the way, does google have that realtime display of what people are searching for?
Re:Oh Yeah? (Score:2, Insightful)
I actually tried to search for my credit card number, but only searched for 8 digits, in various forms (always the same digits, mind you), like:
"XXXX XXXX"
"XXXX-XXXX"
"XXXXXXXX"
Thank god, nothing.
This is something I suggest you all do. I would suggest using the last 8 digits: the "last 4 digits" are commonly used anyway, so you won't be exposing anything that isn't probably already everywhere.
Re:Oh Yeah? (Score:2, Funny)
Yeah!
I just typed in my credit card number and found 15 hits on web sites involving videos of hot young goats.
Tangential Google Question (Score:5, Interesting)
If I want to find lyrics to a song, the site that has them will often be down, but the cache will still have them. Why is what Google is doing 'okay' but what the original site did not okay? Or do they just leave Google alone?
Re:Tangential Google Question (Score:3, Interesting)
Given that they do have (for now) some sort of immunity, it opens a loophole for publishing illegal data. Simply set up your site with all of Metallica's lyrics / guitar scores (all 5 of them, heh). Submit it for indexing to Google, but don't otherwise attract attention to the site. When you see the spider hit, take it offline. Now the data is available to anyone who searches for it on Google, but you're not liable for anything. The process could be repeated to update the cache.
Re:Tangential Google Question (Score:2, Interesting)
It doesn't last (Score:3, Informative)
Because 28 days after you took your page offline it will disappear from the Google cache.
Google reindexes web pages, and if they 404 on the next visit, then goodbye pork pie! You have to get them while they are hot, e.g., when a site has JUST been Slashdotted.
Re:Tangential Google Question (Score:3, Informative)
> Wouldn't it be very low on a search list?
If it's a very specific search term, Google will still return it in the list. If it's unique enough, it's very possible that it will even be the top ranked page. If you put a unique string of characters (like a password or something) on a page, and google indexed it, typing that "password" into the search engine will give you your page.
You can also type domain names into google to retrieve the cache page for that website, which would accomplish much the same thing as long as it's not geocities or something.
Re:Tangential Google Question (Score:2)
Re:Tangential Google Question (Score:3)
how the FUCK is this possible? (Score:2, Insightful)
how can someone be so blatantly stupid as to store anything other than their web content, never mind credit card details, in their published folders? how? did they redirect My Documents to c:\inetpub\wwwroot\%username%\...???
Re:how the FUCK is this possible? (Score:2, Insightful)
Re:how the FUCK is this possible? (Score:2, Insightful)
In an ideal setup the machine storing credit card information wouldn't have a network card, or speak any networking protocol. You'd have a front end secure webserver. That machine would pass the credit card information to the backend across a serial link. The backend machine would process the card and return the status. The CC data transfer would be one-way only, with no way of retrieving it back off of that machine.
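A rough sketch of the front end's side of that hand-off, in Python with the pyserial package (device path, baud rate, and wire format are all invented for illustration):

import serial  # pyserial package

# one-way hand-off: card data goes out, only a status code comes back
link = serial.Serial("/dev/ttyS0", 9600, timeout=10)  # hypothetical serial device
link.write(b"4111111111111111|12/05|19.95\n")          # made-up wire format (test card number)
status = link.readline().strip()                       # e.g. b"APPROVED" or b"DECLINED"
link.close()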
Stopping Google won't stop the problem... (Score:5, Insightful)
The quote from that article about Google not thinking about this before they put it forward is idiotic. How can Google be responsible for documents that are in the public domain, that anyone can get to by typing a URL into a browser? It isn't insecure software, just dumb people...
Re:Stopping Google won't stop the problem... (Score:3, Interesting)
Re:Stopping Google won't stop the problem... (Score:2, Interesting)
Web servers could ship configured to not AutoIndex, only allow specific file types (.jpeg, ...), and so on.
Of course, putting something in public that you don't want someone to see is just plain stupid, but apparently we need to make stupid people feel like they're allowed on the 'net.
Re:Stopping Google won't stop the problem... (Score:2)
This seems to be the most common early response to the article, and I agree up to a point. The problem is where to stop. Several times I've found stuff in Google's cache that I know was password-protected on the website. I was grateful, but wondered how they retrieved it. Did they purchase a subscription? Did the owners give them access for the benefit of having the site catalogued?
Another issue appears when they start crawling directories. It's never obvious which directories were meant to be publicly readable and which ones weren't, but Google undoubtedly uses techniques beyond that of the casual browser. At what point do they become crackers?
A number of years ago, I had a shell account on a Unix system. It was amazing where I could go, what I could see on the system with a little bit of ingenuity. When I pointed this out to the sysadmin, he treated me like a criminal. Okay, maybe I should have stopped when I started getting warning messages...
How far is too far in the search for information?
Re:Stopping Google won't stop the problem... (Score:5, Funny)
Uhh...no.
HTTP is an extremely basic protocol. Google's bots simply do a series of GET requests.
It would be possible that Google's bots have a database of username/passwords for given sites, but the more likely scenario is that they have stumbled across another way to get the "protected" information:
I ran robots for nearly 2 years and was harassed by many a Webmuster who could prove that my robots had hacked their site. They'd show me protected or secret data. It typically took 3 to 5 minutes to find the problem... usually the muster himself was the problem.
HERE'S A NOTE OF WARNING TO WEBMASTERS:
Black text links on black backgrounds in really small fonts are NOT secure.
Maybe I should get this posted to BugTraq...or would MS come after me??
Re:Stopping Google won't stop the problem... (Score:3, Insightful)
If Google accessed it via a special link, then Google would store that link, and you'd use that link, and you'd see it yourself.
(another form of not-secret link: http://user:password@domain/path/file)
Re:Stopping Google won't stop the problem... (Score:4, Insightful)
If you run a web site on the public internet then you should be paying attention to this basic fact: If you put it out there then people have a perfect right to grab it, even if you don't specifically tell them it's there. (I know FCC rulings don't apply, but the principle is the same). You should encrypt EVERYTHING you don't want people to see.
Encryption is like your pants, it keeps people from seeing your privates. Hiding your URLs and hoping is like running really, really fast with no pants on - most people won't see your stuff, but there's always some bastard with a handy-cam.
Different file types make my day (Score:3, Interesting)
I'll never forget the day I first saw a .pdf in a Google search result. Not that long ago I saw my first .ps.gz in a search result. I mean, how dope is that!? They're ungzipping the file, and then parsing the PostScript! Soon they'll start un-ISOing images, untarring files, unrpming packages, .... You'll be able to search for text and have it found inside the README in an rpm in a Red Hat ISO.
Can't wait until images.google.com starts doing OCR on the pix they index...
Well Behaved Crawlers (Score:4, Insightful)
P.S. Anyone keeping credit card info in a web directory that's accessible to the outside world should really think long and hard about getting out of the internet business.
Many crawlers ignore robots.txt (Score:3, Interesting)
Re:Many crawlers ignore robots.txt (Score:2)
For example, try this with wget sometime:
wget -r somesitethathasrobots.txt   # first run saves the site's robots.txt locally
su -
chown root:root robots.txt
cat /dev/null > robots.txt          # truncate the saved robots.txt to zero length
chmod 0000 robots.txt
exit
wget -r somesitethathasrobots.txt
voila, wget now thinks it's observing robots.txt, but robots.txt is a zero-length file, and it can't overwrite it because only root can write to that file...
Re:Well Behaved Crawlers (Score:2)
True. (Score:2)
The way to stop the determined snooper is to not keep your data in a directory that can be accessed by your web server.
Re:Well Behaved Crawlers (Score:5, Insightful)
You should not be using robots.txt to keep confidential data out of caches. In fact, most semi-intelligent crackers would actually download the robots.txt with the specific intention of finding ill-hidden sensitive data.
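It takes almost no code; a sketch in Python against a hypothetical site:

import urllib.request

# the robots.txt meant to hide directories is itself public -- and lists them
txt = urllib.request.urlopen("http://example.com/robots.txt").read().decode()
hidden = [line.split(":", 1)[1].strip()
          for line in txt.splitlines()
          if line.lower().startswith("disallow:")]
print(hidden)  # e.g. ['/secret/', '/admin/'] -- exactly where to look first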
comp.risks (Score:2)
Bottom line: that standard may be intended for one behavior (robots don't look in these directories), but there's absolutely nothing to prevent it from being used to support other behaviors (robots look in these directories first). If you don't want information indexed, don't put the content on your site. Or at a minimum, don't provide directory indexes and use non-obvious directory names.
Re:comp.risks (Score:2)
It is best used to tell crawlers not to bother with pages that are simply useless to crawl. If I ran a site containing a text dictionary in one big html file, I should use robots.txt. If I had a script that just printed random words, I should disallow that too.
Google shouldn't lift a finger (Score:2, Interesting)
Re:Google shouldn't lift a finger (Score:2)
Simple but burdensome solution (Score:4, Informative)
Re:Simple but burdensome solution (Score:5, Insightful)
I don't see why Google or any other search engine has to even acknowledge this problem; it's simply Someone Else's Problem. If I were paying a web team/master/monkey any money at all and found out about this, heads would roll. Even thinking of pointing a finger at Google is the same tactic Microsoft uses against those "irresponsible" individuals who point out security flaws.
If anything Google is providing them a service by telling them about the problem.
Re:Simple but burdensome solution (Score:2)
Re:Simple but burdensome solution (Score:3, Informative)
A regex along the lines of [0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4}[- ]?[0-9]{4} should match >99% of cc numbers. And a lot of other dross, but you can just pipe it into a mod10 checker.
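For the curious, the mod 10 (Luhn) check is only a few lines; a sketch in Python:

def luhn_ok(number: str) -> bool:
    # True if the digit string passes the mod 10 (Luhn) check
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_ok("4111111111111111"))  # True -- the well-known Visa test number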
That puts the burden on me, the poor sap who wants to have my web pages indexed, to make sure that I don't accidentally put any numbers on a web site that might be misinterpreted as a credit card number (e.g. a tab- or comma-separated list of numbers would be likely to hit the above, especially if it was much longer than a CC number).
Not to mention the problem of checking every substring of a long number (the first 2000 digits of pi are 3.1415926535...) - it would take an age to make sure there were no CC numbers in that.
All together, it would cause 'innocent' pages to not be indexed, which is distinctly sub optimal.
Re:Simple but burdensome solution (Score:2)
The burden of hiding information, that should *NOT* be there in the first place, should rest on the entity that posted the information publicly - the web-site.
Once information is *Published* can it be *UnPublished*?
Re:Simple but burdensome solution (Score:2)
Among other things, this would have the amusing effect of blacklisting most web pages about credit card number validation.
Google exploit patch for Apache (Score:4, Funny)
% cat > robots.txt
User-agent: *
Disallow: /
^D
%
Google exploit patch 0.2 for Apache (Score:2, Funny)
% cat > /var/www/html/robots.txt
User-agent: *
Disallow: /
^D
%
Insert foot in mouth.... (Score:2, Interesting)
"Webmasters should know how to protect their files before they even start writing a Web site," wrote James Reno, chief executive of Amelia, Ohio-based ByteHosting Internet Services. "Standard Apache Password Protection handles most of the search engine problems--search engines can't crack it. Pretty much all that it does is use standard HTTP/1.0 Basic Authentication and checks the username based on the password stored in a MySQL Database."
And chief executives of a hosting company should know how Basic Authentication works before hosting web sites...
Crewd
Basic Authentication (Score:3, Insightful)
... and know that it's a wholly inadequate way of "protecting" credit card numbers!
Re:Insert foot in mouth.... (Score:2, Funny)
The Problem of Search Engines and "Sekrit" Data (Score:4, Funny)
The Problem of Incompetent System Administrators
If data is 'sekrit'/sensitive/confidential - don't put it on the web. It's as simple as that. If that data is available on the web, search engines can't be blamed for finding it.
This is what happens when you use frontpage... (Score:5, Informative)
The search engines cannot feasibly stop this from happening; each occurrence is unique unto itself. The only prevention tool is knowledge and education, and bringing to the masses a general understanding of search engine spidering theory.
Just my 2 cents.
Re:This is what happens when you use frontpage... (Score:2)
Re:This is what happens when you use frontpage... (Score:2, Insightful)
"The guys at Google thought, 'How cool that we can offer this to our users' without thinking about security. If you want to do this right, you have to think about security from the beginning and have a very solid approach to software design and software development that is based on what bad guys might possibly do to cause your program grief."
This is crazy. Google isn't doing anything wrong. The problem is with the idiots who don't spend five minutes to check that their secret data is really hidden.
This is like blaming a dog owner when his dog bites a burgler... er uh, nevermind.
Example (Score:5, Informative)
A couple of months down the line a couple of search engines, when asked about 'mycompanyname' were giving the newsletter entry in the top 5.
Alongside my details were those of several other companies, essentially laying out the essence of their respective business plans.
How did this happen? The site was put together with FP2000, and the 'secure' area was simply those files in the _private directory.
I had no cause to view the website prior to this. The site has been fixed on my advice. How did this come about? No one in the organisation knew what security meant. They were told that it was secure...
It didn't do any damage to myself, but a few of the other companies could have suffered if their plans were found. It's not Google's job to do anything about this; it's the webmaster's. But a word of warning - before you agree for your info to appear on a website, ask about the security measures. They may well be crap!
I've got a solution! (Score:5, Funny)
Brilliant, huh? ;-)
On second thought, maybe I shouldn't post this... some PHB might actually think it's a good idea.
Bad.. but (Score:2)
The real issue is not whether you can, but whether you actually do use the information. Regardless of whether it is available or not, it IS ILLEGAL. (Carding does give rather long prison times as well.)
People have had the chance to steal from other people for as long as mankind has existed. This is just another form... perhaps a bit simpler though.
robots.txt (Score:2, Interesting)
From my web logs, I see that a lot of HTTP bots don't give a crap about /robots.txt. Another thing which happens is that they read robots.txt only once and cache it forever over the lifetime of accessing that site, and do not use a newer robots.txt when it's available. It'd be useful for a bot to refresh what it knows of a site's /robots.txt from time to time.
HTTP bot writers should adhere to the information in /robots.txt and restrict their access accordingly. On a lot of occasions, webmasters may set up /robots.txt precisely to help stop bots from feeding on junk information which they don't require, or on things which change regularly and need not be recorded.
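There's little excuse for a bot author, since Python's standard library even does the parsing; a sketch of a well-behaved fetch (hypothetical URLs and user-agent):

from urllib.robotparser import RobotFileParser
import urllib.request

rp = RobotFileParser("http://example.com/robots.txt")
rp.read()  # re-fetch this periodically rather than caching the rules forever

url = "http://example.com/private/report.html"
if rp.can_fetch("MyBot/1.0", url):
    page = urllib.request.urlopen(url).read()  # allowed -- go ahead
# otherwise the rules say skip it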
Oh, for regular expression searching in Google (Score:5, Funny)
(Not, of course that I'd ever do anything like that...)
Searching with regular expressions would be cool, though...
Directory listings (Score:2, Informative)
Must... blame... someone.... (Score:3, Funny)
Why are stupid people not to blame for anything anymore?
Business Model (Score:5, Funny)
A while back there was a thread here about the weakness of the revenue model for search engines. Maybe we have found the answer, think about all the revenue that Google could generate with this data!
Anybody know when Google is going public?
well golly gosh, it works! (Score:2, Informative)
My first hit is:
www.nomi.navy.mil/TriMEP/TriMEPUserGuide/WordDo
at the bottom of the html:
UserName: TURBO and PassWord: turbo, will give you unlimited user access (passwords are case sensitive).
Username: ADMIN and PassWord: admin, will give you password and system access (passwords are case sensitive).
It is recommend that the user go to Tools, System Defaults first and change the Facility UIC to Your facility UIC.
oh dear, am I now a terrorist?
Bring out the legal eagles (Score:4, Insightful)
That quote sums up the exact problem. It's not Google's fault for finding out what an idiot the web merchant was. As a matter of fact, I thank Google for exposing this problem. It is nothing short of gross negligence on the part of any web merchant to have any credit card numbers publicly accessible in any way. There is no reason this kind of information should not be under strong security.
To have a search engine discover this kind of information is despicable, unprofessional, and just plain idiotic. As others have mentioned, these guys need to get a firewall, use some security, and quit being such incredible fools with such valuable information. Any merchant who exposes credit card information through the stupidity of Word documents or Excel spreadsheets on their public web server, or any non-secure server of any kind, deserves to get sued into oblivion. Although people usually don't like lawyers, I'm really glad we have them in the US because they help stop this kind of stuff. Too many lazy people don't think it's in their best interest to protect the identity or financial security of others. I'm glad lawyers are here to show them the light.
JOhn
No easy solution in sight? (Score:2)
Allow me to disagree. This fellow apparently agrees with Microsoft that people shouldn't publish code exploits and weaknesses. Sorry, but anyone who had secret information available on the external web is in the exact same boat as someone who has an unpatched IIS server or is running SQL Server without a password.
Let's assume that Google had (a) somehow figured out from day one that people would search for passwords, credit card numbers, etc., and (b) figured out some way to recognize such data to help keep it secret. Should they have publicized this fact or kept it a secret? Publicity would just mean that every script kiddie would be out writing their own search engines, looking for the things that Google et al were avoiding. Secrecy would mean that a very few black hats would write their own search engines, and the victims of such searches would have no idea how their secrets were being compromised.
But this assumes that there's some way of accomplishing item (b), which I claim is very difficult indeed. In fact, it would be harder to accomplish than natural language recognition. Think about it... Secrets are frequently obscure, to the point that to a computer they look like noise. Most credit cards, for example, use 16-digit numbers. Should Google not index any page containing a string of 16 consecutive digits? How about pages that contain SQL code? How would one not index those, but still index the on-line tutorials at MySQL, Oracle, etc.?
The only "solution" is to recognize that this problem belongs in the lap of the web site's owner, and the search engine(s) have no fundamental responsibilty.
And please close the door on the way out.... (Score:2)
Am I the only one scared by this? The problem is Google's, simply because they follow links? I find it hard to believe this stuff sometimes!
<rant>When will people learn that criminals don't behave? That is what makes them criminals!</rant>
As our second year uni project we were required to write a web index bot. Guess what? It didn't "behave". It would search right through a robots.txt roadblock. It would find whatever there was to find. This stuff is so far from being rocket science it is ridiculous!
Sure, using Google might ease a tiny fraction of the bad guys work, but if Google wasn't there, the bad guys tools would be. In fact, they still are there.
Saying that you have to write your client software to work around server/administrator flaws is like putting a "do not enter" sign on a tent. Sure, it will stop some people, but the others will just come in anyway, probably even more so just to find out what you are hiding.
Sure enough. (Score:3, Interesting)
At any rate--scary it is.
Don't know that this is Google's problem.. (Score:2)
"The guys at Google thought, 'How cool that we can offer this to our users' without thinking about security. If you want to do this right, you have to think about security from the beginning and have a very solid approach to software design and software development that is based on what bad guys might possibly do to cause your program grief."
Search and replace "Google" with "Microsoft". The lack of security is in the operating system and the applications which launch malicious files without warning the user. Google just tells you where to get 'em, not what to do with 'em.
Web Sites are public by definition (Score:4, Insightful)
Secondly, it appears that companies are storing credit card numbers (a) in the clear and (b) in these public areas. These companies should not be allowed to trade on the internet! That is so inept, when learning how to use pgp/gpg takes no time at all, and storing the PGP-encrypted files outside the publicly accessible filesystem is just a matter of changing the line of code that writes to "payments/ordernumber.asc" to write to "~/payments/ordernumber.asc" (or whatever). Of course, the PGP secret key is not stored on a publicly accessible computer at all.
But I shouldn't be giving a basic course on how to secure website payments to you lot - you know it, or could work it out (or a similar method) pretty quickly. It is those dumb administrators that don't have a clue about security that are to blame (or their PHBs).
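Okay, one freebie: a sketch of that encrypt-and-move step, shelling out to gpg from Python (the key ID and paths are made up, and the matching secret key lives on a machine that isn't publicly accessible at all):

import subprocess

# encrypt the order to the merchant's public key, writing OUTSIDE the web root
subprocess.run(
    ["gpg", "--encrypt", "--recipient", "orders@example.com",
     "--output", "/home/merchant/payments/ordernumber.asc",  # not under the web root
     "/tmp/ordernumber.txt"],
    check=True,
)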
Disagree With Gary McGraw (Score:4, Insightful)
standing naked in front of the window (Score:3, Interesting)
this guy's just looking for free hype for his book. if that's the kind of advice he offers, he's doing more harm than good.
Re:standing naked in front of the window (Score:2)
When asked to clarify further, Gary said, "Uh..."
Hint, Hint. (Score:2, Insightful)
Agreed. Such lax security via the use of Frontpage, IIS, and the like is asking for trouble.
You might as well do an impression of Donkey in the movie Shrek: "Ooo! Ooo! Pick me! Pick me!"
Webmasters queried about the search engine problem said precautions against overzealous search bots are of fundamental concern.
Uhh...they are "bots"...they don't think, they do.
Does the bot say "Oh, look, these guys did something stupid...let's tell them about it."
No, they search, they index and they generate reports.
I've seen this problem crop up before when a coworker was looking for something totally unrelated on google.
Sad part was it was an ISP I had respect for, despite moving from them to broadband.
What killed my respect was at the very top of the pages was "Generated by Frontpage Express"...gack!
I don't recall if it was a user account or one of their admin accounts...but for modem access I kind of stopped recommending them, or pointed out my observations.
I have to parrot, and agree with, the "Human Error" diagnosis, but add "computer accelerated and amplified".
It happens, but that does not mean we have to like it, much less let it keep happening.
fun (Score:2)
Of course, there's always good fun going into the...
Of course, there's the old fashioned way. If you see an image at http://www.whatever.com/boobie3.jpg, chances are there's a boobie1 and boobie2.jpg.
Totally unrelated to Google (Score:2)
Why are they blaming the search engines? (Score:2)
Why?!
Google is finding documents that any web browser could find. The fault belongs to the idiots who publicly posted sensitive documents in the first place. Why doesn't the article mention this anywhere? Garbage reporting if I've ever seen it.
Blaming Google for this... (Score:3, Funny)
Checklist for HTTP Distribution of Sensitive Data (Score:3, Informative)
Second, if the sensitive information is going to a select few people, consider PGP encrypting the data, and only putting the encrypted version online. Doing this makes many of the HTTP security issues less critical.
Assuming you still have to put something sensitive online, make sure of the following:
For those who must use IIS (Score:3, Informative)
"Don't use IIS."
This just isn't an option for a lot of people. I would change this to:
"If you use IIS, you need to make sure you check BugTraq/cert EVERY day."
I would also add:
"If you use IIS with COM components via ASP, make sure the DLL's are not in a publicly accessible directory."
This happens a lot, and makes DLL's lots easier to break.
Microsoft Passport Credit Card # available (Score:2, Interesting)
There's a script for extracting credit card numbers from the Passport database. Scary. Don't buy anything through it until they fix it.
Re:Microsoft Passport Credit Card # available (Score:2, Informative)
Are crawlers only using links? (Score:2)
Blissful ignorance backfires again. (Score:3, Interesting)
Google's comment was:
"The primary burden falls to the people who are incorrectly exposing this information."
This is where they should have stopped. Those who find their credit card information in a search engine will learn a lesson and use services that actually take care of their customers' security and privacy. Google shouldn't have to clean up incompetent people's mess.
In the long run, these things can only lead to the ignorant (wannabe?) players in the market slowly dying because they don't know what they are doing.
I personally hope someone gets a taste of reality here, and that only the serious players survive. The MCSE crowd may finally learn that there's more to it than blind trust in their own (lacking) ability.
Gary McGraw, super-genius. ;) (Score:2)
*blinks*
Well, actually, Gary, it seems to me that it isn't Google that's been caused any grief here, but those webmasters who didn't "think about security from the beginning." In fact, it looks like Google runs a pretty tight ship.
This is the kind of guy who blames incidents.org for his web server getting hacked. After all, they weren't thinking about security from the beginning, were they?
Riight.
BRx
Password search (Score:3, Interesting)
filetype:htpasswd htpasswd
Scary how many turn up.
-- Azaroth
An advertisement for publicfile (Score:3, Informative)
Perhaps it would be a good idea after reading this article to examine publicfile [cr.yp.to].
It was written by a very security-conscious programmer who realises that your private files can easily get out onto the web. That is why publicfile has no concept of content protection (e.g., Deny from evilh4x0r.com or .htaccess) and will only serve up files that are publicly readable.
From the features page:
A healthy dose of paranoia would do people good.
Print out this article!! (Score:3)
No, seriously, do it!
Print it out and hang it on the wall, then put a post-it note on top of it saying: "The best example of 'blaming the messenger' ever!!!"
Where is YOUR speech license? (Score:2)
You're a nitwit. I'm revoking your speech license on Slashdot.
Re:To test your credit-card ordering site... (Score:2)
Then watch the fraudulent charges fly when the person who was sniffing cleartext HTTP traffic gets it in his logs.
-Legion
Re:Nothing to do (Score:2)
I'm at a loss to explain how someone puts sensitive information on the web in an unprotected location and then points the finger at google because they made it easier to find.
"We have a problem, and that is that people don't design software to behave itself," said Gary McGraw, chief technology officer of software risk-management company Cigital, and author of a new book on writing secure software.
"The guys at Google thought, 'How cool that we can offer this to our users' without thinking about security. If you want to do this right, you have to think about security from the beginning and have a very solid approach to software design and software development that is based on what bad guys might possibly do to cause your program grief."
funny? (Score:2)
they are talking about sensitive personal information - just don't store this online.
if you really need to access something (that isn't a credit card number... just don't do that!) and don't have physical access to the box, try SSH or at least make sure it's a secure directory (httpS://blah/mystuff...)
Re:regular expressions to the rescue (Score:2)
i'm sure google knows of a dozen ways they can do this, but why should they? it isn't prohibitively hard to write a spider, and with a 160GB HD for $300, someone with not-so-pure motives and the equivalent of an undergrad education in CS could write one, send it out (ignoring robots.txt), do the reverse of that regex search to sniff out cc#'s online, and create a database full of beer money.
i.e., (as has been mentioned n+1 times already) Google changing their behavior does nothing to fix the underlying problem of sysadmins that are undertrained and/or irresponsible.
Re:Easy solution (Score:2)
However, my intention was simply to remove Google's legal exposure from storing credit card numbers that were not willingly given up by the cardholder. They could also autonomously send an email to webmaster@offendingsite.com notifying them of the potentially vulnerable link, entirely out of the kindness of their hearts. But legal issues in the past have shown that this would result in a cease-and-desist and a lawsuit against Google claiming that the crawler/spider has been hacking their website.
Judging from the past, from a legal standpoint, the best thing they can do is simply filter their cached content. If you are worried that people are going to search for ################, then disallow searching for something so anomalous. Or simply change the mask from all # to random special characters.
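A sketch of that kind of cache filter in Python (the pattern is deliberately loose and would need tuning before anyone relied on it):

import re

def mask_card_numbers(text: str) -> str:
    # blank out anything shaped like a 16-digit card number, keeping the last 4
    pattern = re.compile(r"\b(?:\d[ -]?){12}(\d{4})\b")
    return pattern.sub(r"XXXX-XXXX-XXXX-\1", text)

print(mask_card_numbers("card: 4111 1111 1111 1111"))  # card: XXXX-XXXX-XXXX-1111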
It's really not that difficult a solution. Yes, it's a little disturbing that some websites are this easily hacked, but are we really all that surprised? Get into the low-end ecommerce business sometime. You'll be surprised (frightened, even) by what some people have been using for their online stores.