
Looking for Answers in the Age of Search

prostoalex writes "James Fallows, in a New York Times article, notes that search engines have become quite good at answering simple keyword-based queries. When it comes to the actual information, though, such as tracking down specific data and statistics, they do not do nearly as well. The article discusses the NSA- and CIA-sponsored Aquaint project, which aims to answer questions that might be phrased with a variety of keywords and need to be 'understood' by the search engine before it can provide an answer."

  • by rah1420 ( 234198 ) <rah1420@gmail.com> on Sunday June 12, 2005 @07:02PM (#12797958)
    ...like the Semantic Web? [semanticweb.org]

    No, I don't know why it keeps getting relaunched. My guess is that it's one of those answers we've been looking for in the age of search that didn't quite cut it. But isn't that what all these different meta-search efforts are talking about? The ability to get semantic meaning imbued into the web?
  • Homunculus (Score:3, Interesting)

    by headkase ( 533448 ) on Sunday June 12, 2005 @07:13PM (#12798016)
    I know where I'd like to see this first: a digital librarian for Wikipedia. An agent that would recommend articles based on your preferences, and maybe store articles in some language-neutral format from which they could be rendered into a target language, or parsed from a source language back into that neutral format. Too bad nobody has publicly demonstrated anything close to the level of machine intelligence that would be required to do it.
  • by neil.pearce ( 53830 ) on Sunday June 12, 2005 @07:51PM (#12798206) Homepage
    I certainly know the problem you're describing.

    The solution for me was to download the Firefox "CustomizeGoogle" extension [mozilla.org].
    Once installed, the last tab lets you enter regular expressions for sites to remove entirely from the displayed search results.

    A little bit of config later and it's goodbye "about.com", "go.com", "experts-exchange.com" and all the other "nothing to see here (unless you give us money)" sites.
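
    For anyone who wants the same effect outside the browser, the basic blocklist idea is easy to sketch. The following is just an illustration in Python, not how the extension itself works; the pattern list and function name are made up:

        import re
        from urllib.parse import urlparse

        # Illustrative patterns for hosts to drop from results, in the same
        # spirit as the extension's per-site regular expressions.
        BLOCKLIST = [re.compile(p) for p in (
            r"(^|\.)about\.com$",
            r"(^|\.)go\.com$",
            r"(^|\.)experts-exchange\.com$",
        )]

        def keep(url):
            """Return True if the result URL's host matches none of the patterns."""
            host = urlparse(url).netloc.lower()
            return not any(rx.search(host) for rx in BLOCKLIST)

        results = [
            "http://www.example.org/a-real-answer",
            "http://www.experts-exchange.com/a-teaser",
        ]
        print([u for u in results if keep(u)])   # drops the experts-exchange hit

    Point the same keep() test at any list of scraped result links and it acts as a crude stand-in for the extension's filter.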
  • and this? (Score:1, Interesting)

    by Anonymous Coward on Sunday June 12, 2005 @07:58PM (#12798248)
    http://mindset.research.yahoo.com/ [yahoo.com]

    seems to be a good crap filter
  • Here's what I do: (Score:2, Interesting)

    by Hosiah ( 849792 ) on Sunday June 12, 2005 @09:17PM (#12798761)
    As with *all* things technical that get too popular for their own good, I find Google's hit quality going downhill. These days I use multiple engines, but they're all converging on one standard, which will make them all mediocre in the future.

    If you run Linux, you have a decent toolkit on hand to enhance search engine performance. Use lynx from the command line with either the -source or -dump option, and pipe it through sed and such to filter it however you like. A recursive check of each link from the main page should get you most of the results, though you'd have to alter the formula depending on which engine you use.

    You could even put a Python script somewhere in the pipeline to sort the resulting links and keywords into a dictionary, which is handy to save as a pickled object you can recall at leisure (a rough sketch of that stage is at the end of this comment). Heck, once you have a source file in text form on your desktop, you have all those text tools to fiddle with. I'm sure others can come up with 100 more ideas.

    In point of fact, the only thing that keeps me using search engines at all is the question of where I would *put* the data, given that it would be simple enough to have a bot crawl the web for me while I sleep. But for small, specific applications this approach could work, and it could generate a list of links as bookmarks for you to try in the morning.

    On the whole, I prefer that search engines *not* try to read my mind, because too often in the tech age "reading my mind" turns into "making my mind up for me". I favor broad results that I can narrow in batch scripts over pre-narrowed results that reflect some corporate IDIOT's idea of what I'm supposed to find, and that will inevitably make what I want unobtainable.
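
    Since the pickle angle came up, here is a rough sketch of that stage, assuming the output of lynx -dump is piped in on stdin; the script name, pickle file name, and keyword handling are invented for the example:

        # Usage (hypothetical):
        #   lynx -dump 'http://www.google.com/search?q=foo' | python harvest.py foo
        import pickle
        import re
        import sys

        URL_RE = re.compile(r"https?://\S+")

        def harvest(keyword, text):
            """Map a keyword to the de-duplicated list of URLs found in the dump."""
            links = []
            for raw in URL_RE.findall(text):
                url = raw.rstrip(".,)")          # trim punctuation the dump leaves behind
                if url not in links:
                    links.append(url)
            return {keyword: links}

        if __name__ == "__main__":
            keyword = sys.argv[1] if len(sys.argv) > 1 else "unsorted"
            found = harvest(keyword, sys.stdin.read())
            with open("links.pickle", "wb") as fh:   # an overnight bot could append here instead
                pickle.dump(found, fh)
            print("saved %d links under %r" % (len(found[keyword]), keyword))

    Unpickling the file the next morning gives back the dictionary, which is about as close to "bookmarks to try in the morning" as a dozen lines gets.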

  • by Metasquares ( 555685 ) <slashdot.metasquared@com> on Sunday June 12, 2005 @11:58PM (#12799865) Homepage
    But right now, even seasoned web developers/programmers don't want to go near it because of things like OWL (it sure took me a long time to figure it out, and I have a background in the logic and technology that it's built on). The first step towards making the Semantic Web (which is really a great idea) usable is to make creating a semantic webpage easier. You can't just say "put up with it now because it will get easier later" - that's not how to get widespread adoption.
