Programming

Sort Linked Lists 10X Faster Than MergeSort (326 comments)

virusfree tells us about a new algorithm whose author claims it can sort a linked list up to 10 times faster than MergeSort. "BitFast," a member of the Hash algorithms family, is available in C and C++ under the GPL.
  • by Anonymous Coward on Sunday February 25, 2007 @02:49PM (#18145112)
  • It's radix sort. (Score:5, Informative)

    by Nick Mathewson ( 11078 ) on Sunday February 25, 2007 @02:49PM (#18145114)
    From the page:

    4. Logic of BitFast BitFast works with numbers in the bit level. It applies masks to cut the numbers so it uses only the desired ( based on the programmer/problem specifications ) number of bits every run. It always start with the LSB ( Less Significant Bits ) ignoring the any other bits and every run it moves to the higher block of bits until the whole list is sorted. So is we want to sort 8 bit every run , the first timer it will sort 0-7 bits ( Least significant ), then the 8-15 bits , then the 16-23 bits and for last the 24-32 bits ( Most significant ) , having now sorted the entire list of 32bit numbers
    Congratulations, you have reinvented radix sort [wikipedia.org].
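    For reference, a minimal sketch of what that kind of least-significant-digit radix sort looks like in C, assuming 32-bit unsigned keys and 8-bit digits; the names here are illustrative and nothing below is taken from the BitFast source:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sort n 32-bit keys with an LSD radix sort, 8 bits per pass (4 passes),
     * using a stable counting sort as the per-digit step. Sketch only. */
    void lsd_radix_sort_u32(uint32_t *a, size_t n)
    {
        uint32_t *tmp = malloc(n * sizeof *tmp);
        if (!tmp) return;

        for (int shift = 0; shift < 32; shift += 8) {
            size_t count[256] = {0};

            /* Histogram of the current 8-bit digit. */
            for (size_t i = 0; i < n; i++)
                count[(a[i] >> shift) & 0xFF]++;

            /* Prefix sums turn counts into output positions. */
            size_t pos = 0;
            for (int d = 0; d < 256; d++) {
                size_t c = count[d];
                count[d] = pos;
                pos += c;
            }

            /* Stable scatter into the scratch buffer, then copy back. */
            for (size_t i = 0; i < n; i++)
                tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];
            memcpy(a, tmp, n * sizeof *a);
        }
        free(tmp);
    }

    Each pass is a stable sort on one block of bits, which is exactly the "mask off a block, sort, move to the next block" procedure the quoted text describes.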
  • Nice, but... (Score:2, Informative)

    by i kan reed ( 749298 )
    It's still in the N log(N) time class. Mergesort has minimal time overhead to prepare and only N memory overhead. Do we have that for this algorithm?
    • Re:Nice, but... (Score:5, Informative)

      by dlthomas ( 762960 ) on Sunday February 25, 2007 @03:30PM (#18145468)
      I guess it technically *is* O(N * log(N)) to the number of items, but this is misleading. It's actually O(N).

      As pretty much everyone has pointed out, it's just radix sort. The time taken by radix sort scales linearly to the number of keys, and with the log of the maximum key that can be held by the container.

      If we're dealing entirely with unique keys, this is of course >= log(N), by the pigeonhole principle and all that. If we may have duplicate keys, however, there may be more keys than container space, and radix *does* scale linearly to the keys.

      Time to prepare and overhead of this algorithm are negligible in any context large enough for us to care about the O() of the algorithms. For some things, this *is* better than mergesort. I'm just not sure why it was posted, as it's also not new - it was invented over 50 years ago.
      • by Xaroth ( 67516 )
        I'm just not sure why it was posted, as it's also not new - it was invented over 50 years ago.

        That's /. for ya.
    • Nope. This is a reinvention of Radix Sort, which is O(n).
  • by iamacat ( 583406 ) on Sunday February 25, 2007 @02:54PM (#18145148)
    Nice, but you could just copy the list to array and do counting/radix sort. Does nothing for strings, floating point values, structures...
  • by Peter Desnoyers ( 11115 ) on Sunday February 25, 2007 @02:55PM (#18145158) Homepage
    It's a bit of an apples vs. oranges comparison to put this up against mergesort - mergesort is a comparison-based sort, while Papadopoulos' bitfast is a radix sort and thus O(N*W) where N is the number of elements and W is the width of each element in bits. (hint - try sorting 1000-byte strings with bitfast, and see which is fastest) And no, it doesn't have anything to do with hashing.

    However, it's definitely a clever way of implementing radix sort with linked lists, which may make it useful in some cases (e.g. OS internals) where you don't want to allocate space for a big directly-addressable array.
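    To make that concrete, here is a rough sketch of one such pass in C, assuming 32-bit unsigned keys, an 8-bit digit per pass, and a hypothetical node layout; none of this is taken from the BitFast source, and the only auxiliary storage is two arrays of 256 pointers:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct LNode {             /* hypothetical node layout */
        uint32_t key;
        struct LNode *next;
    } LNode;

    /* One stable radix pass over a singly linked list: distribute nodes into
     * 256 buckets by the 8-bit digit at 'shift', keeping per-bucket head and
     * tail pointers, then splice the buckets back together in order. */
    static LNode *radix_pass(LNode *head, int shift)
    {
        LNode *bhead[256] = {0}, *btail[256] = {0};

        for (LNode *p = head; p; ) {
            LNode *next = p->next;
            unsigned d = (p->key >> shift) & 0xFF;
            p->next = NULL;
            if (btail[d]) btail[d]->next = p; else bhead[d] = p;
            btail[d] = p;
            p = next;
        }

        /* Re-link buckets 0..255 into a single list. */
        LNode *out = NULL, *tail = NULL;
        for (int d = 0; d < 256; d++) {
            if (!bhead[d]) continue;
            if (tail) tail->next = bhead[d]; else out = bhead[d];
            tail = btail[d];
        }
        return out;
    }

    Running it for shift = 0, 8, 16 and 24 sorts 32-bit keys; no element-sized scratch array is ever allocated.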
    • by Ignorant Aardvark ( 632408 ) <cydeweys.gmail@com> on Sunday February 25, 2007 @03:20PM (#18145388) Homepage Journal
      I ditto this. Mergesort is a true comparison sort, which means it can sort anything so long as a comparison between two values is defined, e.g. strings (in lexicographical order), floating point numbers, 1000-digit numbers ... you get the drift. BitFast is a radix sort and only works in cases where the list elements obey certain limitations: for instance, if each list element can be expressed in 32 bits (an int).
    • by Wavicle ( 181176 ) on Sunday February 25, 2007 @04:30PM (#18145946)
      Actually it is much worse than you say.

      I don't know what possessed me to look at his code, but he really hasn't shown what he claims to have shown. It isn't clear to me that his code is doing what he says, but what IS clear to me is that his comparison to mergesort is very unfair. He has a TRULY AWFUL implementation of merge sort. For example, every time he goes to split a sub-list in half, he does a linear search from the current node to the end of the sub-list. Having determined this value he then does ANOTHER LINEAR SEARCH to find the half-way point.

      So he has basically made an O(N^2) time complexity process just to divide the list for the merge sort. This is inside his n*log(n) merge sort. So he has an O(n^3) time complexity mergesort and trumps up how fast his modified radix sort is. Come on! Bubble sort would have beat his mergesort.

      His cleverness gets the better of him when it comes to his modified radix sort. For example, he creates two arrays (on the stack) of 65535 elements; apparently unaware that this creates an array with indexes 0..65534.
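      For comparison, the usual way to split a linked list for mergesort is a single slow/fast pointer pass, which avoids both linear searches; a sketch with a hypothetical node type, not the article's Node:

      #include <stddef.h>

      typedef struct SNode { struct SNode *next; } SNode;   /* payload omitted */

      /* Split 'head' into two halves in one pass: 'slow' advances one node
       * per step while 'fast' advances two, so 'slow' ends at the midpoint. */
      static void split_half(SNode *head, SNode **front, SNode **back)
      {
          SNode *slow = head, *fast = head ? head->next : NULL;

          while (fast && fast->next) {
              slow = slow->next;
              fast = fast->next->next;
          }
          *front = head;
          *back  = slow ? slow->next : NULL;
          if (slow) slow->next = NULL;        /* terminate the first half */
      }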
      • by Tim Browse ( 9263 ) on Sunday February 25, 2007 @05:30PM (#18146454)

        I don't know what possessed me to look at his code

        Damn you! You made me look at his code! The goggles, they do nothing.

        His cleverness gets the better of him when it comes to his modified radix sort. For example, he creates two arrays (on the stack) of 65535 elements; apparently unaware that this creates an array with indexes 0..65534.

        I can't help feeling he should have declared Ptrs and Tails as Node* arrays, and bypassed all that random casting to longs. Not sure what's going on there. But then anyone who thinks the roundabout way he used of accessing the top 16 bits of a 32-bit memory value is 'cool' is definitely on my list of people most likely to re-invent the radix sort badly.

        At first, I thought it wasn't a stable sort, but looking further into it, that's because he mixed up the 'head' and 'tail' descriptions in the explanation (or possibly in the code).

        It is amusing that some /. posters think that doing this in-place is somehow an amazing leap of insight. Sometimes /. is like reading thedailywtf.com - you see something dumb as the main story, and then find half the people commenting on it have even less of a clue.

        I must be new here.

        • by Wavicle ( 181176 ) on Sunday February 25, 2007 @06:40PM (#18146938)
          But then anyone who thinks the roundabout way he used of accessing the top 16 bits of a 32-bit memory value is 'cool' is definitely on my list of people most likely to re-invent the radix sort badly.

          It is clearly a slow day for me. I have gone one step further and ported his code over to Linux (only required a change to GetTickCount) and attempted to run it. His "cool" way of using a pointer hack has caused his code to dump core. The problem is his hack being a signed short int pointer.
          Program received signal SIGSEGV, Segmentation fault.
          0x08048752 in BitFast (TailNode=0x804a018, run=0) at bitfast.h:37
                              if (Tail[val]) { //if not 0
          (gdb) p val
          $1 = -22087

          So I try putting unsigned on ptrHack.
          Program received signal SIGSEGV, Segmentation fault.
          0x08048772 in BitFast (TailNode=0x804a018, run=0) at bitfast.h:39
                              tmpN->nextNode = pN;
          (gdb) p tmpN
          $1 = (Node *) 0xffff

          Yeah. Not so sure that casting is what he really wants. I'm not entirely clear how his code is supposed to work. I really wanted to clean up his mergesort and re-run it to see how valid his claim is, but it looks hopeless. I'll work on it a little more, but since the problem seems to be in his algorithm, it doesn't look real hopeful.
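          If the goal is just to pull out one 16-bit digit per run, a shift and a mask do it portably, with no signed short pointer and no dependence on byte order; a sketch (the run parameter mirrors the one in his BitFast signature, everything else is illustrative):

          #include <stdint.h>

          /* run = 0 selects bits 0..15, run = 1 selects bits 16..31. */
          static inline unsigned digit16(uint32_t key, int run)
          {
              return (key >> (16 * run)) & 0xFFFFu;
          }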
          • Re: (Score:3, Interesting)

            by kigrwik ( 462930 )
            Disclaimer: I didn't read the code, and don't intend to.

            I don't know if you're still interested, and I'm just hypothesizing here, but 0xFFFF is 65535 and some other posters noted that his array was of size 65535, which of course means that index 65535 is out of bounds by just one. This could cause your segfault.

            HTH,
            Kig.

            • Re: (Score:3, Informative)

              by Wavicle ( 181176 )
              That was indeed the problem. I did eventually get the thing running and, interestingly, despite using many published mergesort algorithms for sorting linked lists, the bitsort algorithm generally ran a constant 3-4x faster. So it appears that I was wrong about his mergesort having an asymptotic characteristic of O(n^3) and improving the efficiency of his mergesort gave only incremental performance improvement.

              Unfortunately this followup will forever be buried below the mod threshold but it appears that des
        • by jinxidoru ( 743428 ) on Sunday February 25, 2007 @10:27PM (#18148692) Homepage
          The previous few posts caused me to also open up the code. Wow! What a ride! I decided that I wanted to add a few things I found as well.

          I loved that he referred to his use of the short pointer as a "cool pointer hack to avoid shifts and ifs!!!" To begin with, if we have to call that a "cool hack" then the requirements for coolness have definitely dropped a bit. I am also confused as to how this helps avoid shifts and ifs. As far as I can tell, it avoids one shift. I'd love to see the code without the "cool hack" because I'm intrigued as to how he uses an if statement to remove the higher bits.

          As stated before, I am also fond of his use of an array of size 65535 rather than 65536. I'm surprised he didn't run into any segmentation faults. I know that I can give him a test case that will seg fault pretty easily.

          I also question whether he understands the purpose of a header file, as the entire source for his BitSort is contained within a header file. I guess it would sort of make sense if the function was declared inline, but it isn't.

          Another fun element is how non-modular his code is. Everything is hard-coded, even though it would have been easy to declare a few constants or even make the function a template.

          Lastly though, why the crap did this article get through the editors? It is, as has been stated over and over, a radix sort and nothing different. I know that he claims it is different because it is in-place, but that is such an obviously simple change that it does not warrant a whole new algorithm. Regardless of the content, the article is so poorly written with some of the worst grammar I have read in quite some time. Slashdot really needs to improve its article standards.
      • by adrianmonk ( 890071 ) on Sunday February 25, 2007 @10:01PM (#18148554)

        He has a TRULY AWFUL implementation of merge sort. For example, every time he goes to split a sub-list in half, he does a linear search from the current node to the end of the sub-list. Having determined this value he then does ANOTHER LINEAR SEARCH to find the half-way point.

        So he has basically made an O(N^2) time complexity process just to divide the list for the merge sort. This is inside his n*log(n) merge sort. So he has an O(n^3) time complexity mergesort and trumps up how fast his modified radix sort is. Come on! Bubble sort would have beat his mergesort.

        That is maybe a bad implementation, but it actually does not make it O(n^3). In fact, it doesn't make it as bad as even O(n^2). Here's why:

        The first thing I want to establish is that linearly iterating halfway through the sublist a second time (to find the middle of it) has no effect on the asymptotic analysis. It just means going through the sublist 3/2 times, which is still a constant.

        The second thing is this: as you go deeper and deeper into the recursion, the size of the sublist is repeatedly cut in half. Thus, at every level as you go deeper into the recursion you are doing twice as many loops as the previous level (one loop for the first level, when the sublist equals the original list; two loops for the second, when you are breaking halves into quarters; four loops in the third level of recursion; and so on). BUT, although you are doing more loops, the size of the sublist is always half what it was at the previous level of recursion. So you have twice as many loops, but each loop covers half as many iterations.

        That means at every level of recursion (think of the size of the call stack, if that helps in visualizing it), you are looping over the entire list once (well actually 3/2 times). And there are O(log n) levels of recursion, because every time you recurse, you are working on a list half as big as before.

        The result is that the total extra added time of this unnecessary linear traversal is O(n * log n) for the entire algorithm. Since mergesort is already O(n * log n), the extra looping to find the middle and end of the list has no effect on the asymptotic behavior of the algorithm.
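        Written as a recurrence (a sketch, with the constant 3/2 standing in for the extra traversals described above):

            T(n) = 2*T(n/2) + (3/2)*n,   T(1) = O(1)

        which solves to T(n) = O(n log n): the linear splitting work per level, summed over the O(log n) levels, contributes O(n log n) in total, and so cannot push mergesort past its existing bound.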

  • wtf? seriously. (Score:5, Informative)

    by tomstdenis ( 446163 ) <<moc.liamg> <ta> <sinedtsmot>> on Sunday February 25, 2007 @02:55PM (#18145164) Homepage
    First, it's just a radix sort [well that's my take]. Second there are too many google ads on that page. Third, merge sort is O(n log n) time, hard to beat for random data. Fourth, most people use variations of the QUICK SORT not merge sort.

    Tom
    • Re:wtf? seriously. (Score:5, Insightful)

      by slamb ( 119285 ) * on Sunday February 25, 2007 @03:07PM (#18145266) Homepage
      Fifth, most people don't sort linked lists.

      Sixth, the headline "10X faster" is incorrect, as they differ asymptotically, not by a constant factor. (Run different data sets...vary by size of a single element, and by number of elements in the list. The ratio will change.)

      • Re: (Score:3, Insightful)

        by vidarh ( 309115 )
        Actually sorting linked lists is useful in many cases, and can be done fairly efficiently with a well written quicksort (without the array-conversion step that this code does), since quicksort is conceptually just recursive partitioning around a pivot element - partitioning a linked list is cheap and can sometimes be faster than using an array if the objects being sorted are complex and stored by value (i.e. in any case where moving the linked list pointers around would be faster than copying the objects
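        A sketch of what that looks like in C: quicksort on a singly linked list by re-linking nodes around a pivot, with no auxiliary array and no element copies (the node layout and names are hypothetical; taking the first node as pivot degrades to O(n^2) on already-sorted input):

        #include <stdint.h>
        #include <stddef.h>

        typedef struct QNode {
            uint32_t key;
            struct QNode *next;
        } QNode;

        /* Append node n to the (head, tail) pair of a partition. */
        static void push(QNode **head, QNode **tail, QNode *n)
        {
            n->next = NULL;
            if (*tail) (*tail)->next = n; else *head = n;
            *tail = n;
        }

        /* Concatenate list a followed by list b. */
        static QNode *concat(QNode *a, QNode *b)
        {
            if (!a) return b;
            QNode *t = a;
            while (t->next) t = t->next;
            t->next = b;
            return a;
        }

        QNode *list_qsort(QNode *head)
        {
            if (!head || !head->next) return head;

            QNode *pivot = head;
            QNode *lo = NULL, *lot = NULL, *hi = NULL, *hit = NULL;

            /* Partition: everything after the pivot goes to 'lo' or 'hi'. */
            for (QNode *p = head->next; p; ) {
                QNode *next = p->next;
                if (p->key < pivot->key) push(&lo, &lot, p);
                else                     push(&hi, &hit, p);
                p = next;
            }
            pivot->next = list_qsort(hi);
            return concat(list_qsort(lo), pivot);
        }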
    • Re:wtf? seriously. (Score:5, Insightful)

      by phazer ( 9089 ) on Sunday February 25, 2007 @03:10PM (#18145300)
      Fifth, the author tries to "GPL" the algorithm, which is utter nonsense. GPL deals with copyright, so the most he can do is GPL his implementation of his bucket-/radix sort. Anyone is free to re-implement the algorithm, GPL or not.

      It would require a software patent to restrict the use of the algorithm to GPL programs.

      (And sixth, a quick look in a text book would have clued the author in)
    • Re:wtf? seriously. (Score:4, Informative)

      by Anonymous Brave Guy ( 457657 ) on Sunday February 25, 2007 @03:19PM (#18145380)

      Third, merge sort is O(n log n) time, hard to beat for random data.

      Provably impossible, in fact.

      Fourth, most people use variations of the QUICK SORT not merge sort.

      I'm not sure that's a fair generalisation. There are several good O(n log n) sorting algorithms, heap sort being another obvious one, and typically things like the data structures involved, need for stability, or ease of parallelisation are the deciding factors in practice. In fact some of the best performing algorithms in real terms are theoretically O(n^2) in the worst case, hence the creation of hybrids like introsort, a quicksort variant designed to avoid the worst case by switching to heap sort after a certain recursion depth is reached.

      Anyway, I'm afraid this is another article by a 21-year-old student who is well-meaning, but not nearly experienced enough yet to appreciate that his suggestion is just rehashing (no pun intended) a lot of long-established ideas. I think the giveaway sign is that his CV on the site still lists just about every programming language and web development skill in existence. :-)

      • Re: (Score:3, Insightful)

        by TheRaven64 ( 641858 )

        Third, merge sort is O(n log n) time, hard to beat for random data.

        Provably impossible, in fact.

        Not quite. O(n log(n)) is only the best case for comparison-based sorting. That is, if you have a less than (or greater than) predicate, you can build an algorithm that is O(n log(n)) using it. If your algorithm is not comparison-based, you can beat O(n log(n)). The most obvious example is bucket-sort, where you have k possible values. You pass over your list once, placing the value in the correct bucket, which gives you O(n) performance, at the cost of requiring k buckets worth of storage space (you
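        A minimal counting-sort sketch of the trade being described, assuming keys already known to lie in 0..k-1 (names are illustrative):

        #include <stdint.h>
        #include <stdlib.h>

        /* O(n + k) time, O(k) extra counters: one pass to histogram, one pass
         * over the k counters, one stable pass to place the output. */
        void counting_sort(const uint32_t *in, uint32_t *out, size_t n, uint32_t k)
        {
            size_t *count = calloc(k, sizeof *count);
            if (!count) return;

            for (size_t i = 0; i < n; i++)
                count[in[i]]++;

            size_t pos = 0;
            for (uint32_t v = 0; v < k; v++) {   /* counts -> start positions */
                size_t c = count[v];
                count[v] = pos;
                pos += c;
            }
            for (size_t i = 0; i < n; i++)
                out[count[in[i]]++] = in[i];

            free(count);
        }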

      • Re: (Score:3, Interesting)

        > the best performing algorithms in real terms are theoretically O(n^2)

        Agreed. For example, in a microcode example at work, the cost of searching for a given string was totally dominated by memory access time, so it really didn't matter which algorithm we used. So we used brute force. Pretty easy to debug, as well.
    • First, it's just a radix sort [well that's my take]. Second there are too many google ads on that page. Third, merge sort is O(n log n) time, hard to beat for random data. Fourth, most people use variations of the QUICK SORT not merge sort.

      For comparison sorts, O(n log n) is not just hard to beat, it is IMPOSSIBLE to beat.
      See http://www.ics.uci.edu/~eppstein/161/960116.html [uci.edu].

      Of course, radix sort is not a comparison sort, but has its own limitations as others have noted.

      And Quicksort technically has a O(n^
      • Re: (Score:3, Informative)

        by Dwedit ( 232252 )
        Merge sort only requires extra memory to store a list of indexes or pointers, not extra memory for the full data. If your data is integers, then it's the same size. If your data is strings, then the array of indexes is smaller.
        As a plus, merge sort requires no recursion, which may be helpful if your stack is really tiny, like 200 bytes large.
    • Re: (Score:3, Informative)

      by dkf ( 304284 )
      The kings of the sorting hill are quicksort, heapsort, mergesort, and insertion sort. For real data, quicksort (especially with a good pivot selector sub-algorithm) and heapsort are fastest (heapsort is also very frugal with the stack). Mergesort is a bit slower (but still O(n log n) with reasonable factors) and has the major bonus of being stable, so that elements that are equal to each other are not reordered (and that's very useful indeed with real data). Insertion sort (also stable) is the only O(n^2) al
  • It's useless to talk about the speed of a sort algorithm without discussing its complexity.

    Linear improvements are completely uninteresting. Take algorithm X in Perl, then a version in optimized assembly, and there you go, a 10X improvement without having done anything groundbreaking.

    A 10X faster bubble sort is just as useless for large enough values of n, and a 10x slower merge sort would still have a very good performance in comparison.
  • Nice try, but... (Score:5, Informative)

    by Ancient_Hacker ( 751168 ) on Sunday February 25, 2007 @02:57PM (#18145186)
    I think you may have re-invented the "radix sort".

    It was used by 80-col card sorters, since circa 1925.

    See Knuth, Volume 3

    • by RevMike ( 632002 )

      I think you may have re-invented the "radix sort".
      It was used by 80-col card sorters, since circa 1925.

      I was thinking the same thing. One of my coworkers told me about using those beasts in the 1950s while serving in the Army.
      • by DrPepper ( 23664 )
        Hi,

        Java Blog for Experienced Programmers - http://www.0xcafebabe.com/ [0xcafebabe.com]

        You might like to know that the above domain (which I assume is yours from the DNS registration) seems to have been parked.
    • Ah, the book of gods and the cure for insomnia. I have read that book 3, 4 times. The best I can muster is 4 pages before falling asleep! It was of great help when I was building an index method for a Self Balancing Multiway Tree, with key compression and redundant data removal for the IBM S/34 and S/36.
  • Since the article has nothing, I wanted to ask: has anyone implemented merge sort where you merge from both the front and back using two simultaneous threads ? I haven't seen this anywhere, but it's an obvious improvement for multicore machines. Does anyone know how to further parallelize merging (or for that matter, heaps) ?
    • Nevermind, I figured it out. You need to estimate the median (like quicksort) to parallelize merging to four cores. If anyone still has suggestions for heaps, I'd like to hear it though.
    • Re: (Score:3, Interesting)

      I've never seen it done before, but I don't imagine it would be particularly difficult to pull off. I'd say just add another parameter that tells the function how deep in the call stack it is. If it's less than x layers deep, fork with the child process going on to the other processor. The parent process sorts the first half of the list, and the child process sorts the second half. When the parent process finishes, it waits for the child process to finish (if it hasn't already), then collects the child'
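      A sketch of that shape using POSIX threads rather than fork(), with the thread count capped by a depth limit; the node layout and all names are hypothetical, not the poster's code:

      #include <pthread.h>
      #include <stddef.h>

      typedef struct PNode { int key; struct PNode *next; } PNode;

      #define MAX_PAR_DEPTH 2            /* at most 2^2 = 4 threads at the leaves */

      static void split_half(PNode *head, PNode **front, PNode **back)
      {
          PNode *slow = head, *fast = head ? head->next : NULL;
          while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
          *front = head;
          *back  = slow ? slow->next : NULL;
          if (slow) slow->next = NULL;
      }

      static PNode *merge(PNode *a, PNode *b)
      {
          PNode dummy = { 0, NULL }, *t = &dummy;
          while (a && b) {
              if (a->key <= b->key) { t->next = a; a = a->next; }
              else                  { t->next = b; b = b->next; }
              t = t->next;
          }
          t->next = a ? a : b;
          return dummy.next;
      }

      static PNode *par_msort(PNode *list, int depth);

      struct sort_arg { PNode *list; int depth; };

      static void *sort_thread(void *p)
      {
          struct sort_arg *a = p;
          a->list = par_msort(a->list, a->depth);
          return NULL;
      }

      /* Below MAX_PAR_DEPTH the first half is handed to a new thread while the
       * current thread sorts the second half; join, then merge the results. */
      static PNode *par_msort(PNode *list, int depth)
      {
          if (!list || !list->next) return list;

          PNode *front, *back;
          split_half(list, &front, &back);

          if (depth < MAX_PAR_DEPTH) {
              pthread_t tid;
              struct sort_arg arg = { front, depth + 1 };
              if (pthread_create(&tid, NULL, sort_thread, &arg) == 0) {
                  back = par_msort(back, depth + 1);
                  pthread_join(tid, NULL);
                  return merge(arg.list, back);
              }
              /* thread creation failed: fall through to the serial path */
          }
          front = par_msort(front, depth + 1);
          back  = par_msort(back, depth + 1);
          return merge(front, back);
      }

      Compile with -pthread; the depth cap keeps the number of threads close to the number of cores instead of one per recursive call.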
    • by TERdON ( 862570 ) on Sunday February 25, 2007 @03:26PM (#18145440) Homepage
      I have studied a course in parallel algorithms, including localized parallel merge sort (the algorithm you requested). In theory it can be used to subdivide the sorting across arbitrarily many processors. Links: Course homepage [www.tuhh.de], The relevant chapter (PDF) of the course slides, with nice pictures and everything [www.tuhh.de].
      • Re: (Score:3, Informative)

        I notice that the course homepage talks about designing for an "idealised parallel computer" -- did it take into account the huge speed hit that comes from false sharing on real processors? That should be a major consideration if you are designing shared-memory parallel algorithms, since they can degrade performance by 1700% on modern processors [blogspot.com]. If you're not careful, your parallel algorithms could end up being much slower than single-threaded ones simply because they force so many cache bounces.
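        One standard precaution is to pad any per-thread data out to its own cache line so neighbouring threads never write to the same line; a sketch assuming 64-byte lines (illustrative, not taken from the linked course material):

        #define CACHE_LINE 64

        struct padded_counter {
            unsigned long value;                          /* the per-thread data     */
            char pad[CACHE_LINE - sizeof(unsigned long)]; /* fill the rest of a line */
        };

        /* e.g. one slot per thread: each thread touches only its own element,
         * so updates land in distinct cache lines and never ping-pong. */
        /* struct padded_counter per_thread[NUM_THREADS]; */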
    • Since the article has nothing, I wanted to ask: has anyone implemented merge sort where you merge from both the front and back using two simultaneous threads ? I haven't seen this anywhere, but it's an obvious improvement for multicore machines. Does anyone know how to further parallelize merging (or for that matter, heaps) ?

      There are entire classes of sorting algorithms for multiprocessor machines, including algorithms optimized for shared versus non-uniform memory architectures, message passing, sortin
      • To go even one step better you can use sorting networks and get O(lg n) complexity. Of course you need a network of size O(n lg n). Given a mesh-based computer you can sort n^2 items in O(n) time. But for now most of us are stuck with serial computers with small attempts at parallelism.
  • animation (Score:2, Funny)

    by hansamurai ( 907719 )
    Let me know when there's a video on Youtube of the sort doing its thing with colored bits and animation.
    • by Ossifer ( 703813 )
      You can be sure such a video would be just as filled with ads, product placements, etc.

      And just as unscientific...
  • Against my better judgment, I skimmed the article and code.

    Two observations:

    1. Why no analysis of impact on memory usage? 10x speed-up, but at what cost? Omitting this detail makes it difficult to weigh the utility of the algorithm.

    2. Not an Apples to Apples comparison. In the source, the "BitFast" sort converts the entire linked list into an array before performing the sort. Without even considering the merits of the actual algorithms, the Mergesort has no chance of being faster given the more complicate

  • by aprosumer.slashdot ( 545227 ) on Sunday February 25, 2007 @03:18PM (#18145372) Homepage
    You're making the baby Knuth cry. Please read "The Art of Computer Programming" before proceeding any further.
    • by vadim_t ( 324782 )
      If Knuth [wikipedia.org] looks like a baby to you, then you've seen some really scary ones. My condolences.
    • Ideally, I'd like a tag for the clump of "Computer science by people who haven't done the background reading" posts we get. I haven't seen "my algorithm compresses everything, even compressed files!" or "I've proved that P = NP!" posts in a while. But we should be prepared.

      How about "badcomputerscience"?
  • GPL-ed algorithm? (Score:3, Interesting)

    by Wilson_6500 ( 896824 ) on Sunday February 25, 2007 @03:22PM (#18145400)
    I've done a bit of scientific programming, but I'm not a computer-science-oriented person by any stretch. Thus, I'm a little confused by this. Under what license are things like Quicksort and Stupidsort (for another, very bad example) "released"? Why would a person feel it necessary to release an algorithm under any kind of license at all, in fact? No textbook would want to include GPL-ed code--wouldn't that make the entire book GPL by nature of the license?

    Please correct me if I'm mistaken, but this seems like a fairly bad precedent to set.
    • Re:GPL-ed algorithm? (Score:5, Informative)

      by Helios1182 ( 629010 ) on Sunday February 25, 2007 @03:33PM (#18145494)
      You can't copyright an algorithm, so there is no license to worry about. This is a well meaning but utterly clueless person who reimplemented a well known algorithm and then made a strange constant-factor instead of asymptotic analysis.
    • As far as I understand, source code can be placed under the GPL, but an algorithm cannot be. An algorithm is just a concept - it can be patented of course, but that's aside the matter here. Unless I'm mistaken, you could write your own implementation of BitFast and use it for commercial purposes, closed-source projects, whatever you want as long as you don't use their source code.
    • No textbook would want to include GPL-ed code--wouldn't that make the entire book GPL by nature of the license?

      Not as long as you provide the source (which of course you would; it's the whole point) and don't impose any additional restrictions on the license. If you modify GPLed code, then you have to release the revised source if you distribute the binary.

      That said, a lot of nonsensical FUD has been circulated regarding the requirements of the GPL by parties who feel their interests are threatened by it. As with everything, it's best to read it for yourself.

  • by grimJester ( 890090 ) on Sunday February 25, 2007 @03:37PM (#18145522)
    Ok, I'll start from his site [phoenixbit.com]:

    - Programming Skills : Visual Basic ***Excellent***

    Yes, that certainly is... excellent.

    - Message : "Don't ever let school, stop you from learning!"

    School could, learn you some grammar.

    The algorithm is being released under the GPL ( General Public License ). The algorithm belongs to PhoenixBit and VirusFree but you may use/modify it freely.
    *** DO NOT COPY ANYTHING FROM THIS PAGE TO ANY OTHER PAGE. IF YOU WANT SOMEONE TO READ THIS THEN LINK TO THIS PAGE ***


    In addition to trying to apply copyright to an algorithm, doesn't a restriction on copying defeat the purpose of releasing something under the GPL? Or does text in all caps trump previous text not in all caps?

    Feel free to add to this. If there are some clips of this guy with lightsabers, please post them.
  • by belmolis ( 702863 ) <billposer AT alum DOT mit DOT edu> on Sunday February 25, 2007 @03:44PM (#18145560) Homepage

    This is a variant of RadixSort, which is well known to be faster than any comparison sort such as MergeSort. The problem with non-comparison sorts is that as such they are restricted to sorting items representable as unstructured bit fields, which means, essentially, integers. A large part of the time, the real problem in sorting is (a) extracting the fields that you want to use as keys (since it is not generally the case that you want to sort on the entire record) and (b) arranging for each pair of records to compare as you need them to, which involves both recognizing the internal structure of keys (consider the case of dates) and imposing suitable weights for the individual components. In other words, in many situations the bulk of the code and time are devoted to parsing and transformation of records. So long as you are not using a really bad algorithm, the time devoted to the sort itself is likely to be a small percentage of the total time.

    For example, I have written a sort utility [billposer.org] that differs from most others in its ability to parse the input into records and records into fields and in the transformations it can apply so as to use unusual sort orders and sort on things like dates, month names, and numbers in non-Western number systems. It was originally written for alphabetizing dictionaries of "exotic" languages. It is frequently the case that the time devoted to the actual sort is less than 1% of the run time.

    In sum, non-comparison sorts have a niche but are of limited utility because they get their speed from making use of additional information that is only available for a limited set of datatypes. For the great majority of applications, only comparison sorts are flexible enough to be of use.

    • by Animats ( 122034 )

      While radix sorts are limited, binned sorts, which are a superset of radix sorts, are more powerful. Binned sorts do work on a broad set of data types, but you have to use statistical techniques to dynamically pick a good distribution among the bins. SyncSort [syncsort.com] developed this technique in the 1960s, and had the first software patent, now long-expired. Mainframe batch systems have used such techniques for decades.

      The general idea is that if you're sorting, say, text strings, you watch them go by for a whi

  • Then again, his algorithm for becoming a loser may truly be optimal...

    Hobbies : Loop ......................AddHobbie ("Computers"); ................Do Until 1=2
    -Chris
  • by sholden ( 12227 ) on Sunday February 25, 2007 @04:09PM (#18145764) Homepage
    Retard who thinks you can GPL an algorithm manages to reinvent an algorithm first implemented as a computer program in the 1950s, and used way before then outside of digital computers.

    Also manages to not know about memset, to write C code that assumes sizeof(long)==4, and to put non-inline code in header files. I don't dare look at the C++ code since there's a large probability it will cause permanent brain damage.

  • Perhaps coming from a math background, my priorities differ from those with a CS background, but...

    Where's the proof?

    Is it peer reviewed? Has it been published? Does it exist?
    Although the lack of a proof does not make your algorithm incorrect, I'd be suspicious. In fact, I would be unwilling to use any algorithm in my code without being sure that a good proof of its correctness exists somewhere.
    Proofs are more than formalities, they're essential: without a good proof, you're just a crank sellin
  • Buffer overflow (Score:2, Insightful)

    by TheMoog ( 8407 )
    Unless I'm very much mistaken, there's a blatant buffer overflow in this code. In bitfast.h the "Tail" and "Head" arrays are defined as having 65535 (0xffff) entries. That's from 0-65534 inclusive.

    The code in "BitFast" then uses the array as if it can access Head[0xffff] and Tail[0xffff] - which is one past the end of the arrays. I'd guess it works by coincidence on Win32.

    Additionally, although there might be some reason I can't follow at the moment, in the float version only the top 15 bits of the value
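    For reference, the sizing issue in two lines; the original allocates these tables on the heap with new long[65535], but the off-by-one is the same either way:

    /* long Tail[65535];    as written: valid indexes are 0..65534, so Tail[0xFFFF] is one past the end */
    long Tail[1 << 16];  /* 65536 slots, one per 16-bit digit value 0..65535 */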
  • 1. Loop ......................AddHobbie ("Computers"); ................Do Until 1=2

    Since he says he's "**Excellent**" in Visual BASIC, one could ponder how he MIXED UP THE Do AND Loop STATEMENTS.

    2. Don't ever let school, stop you from learning!

    He apparently never had...

    3. 10 times faster than MergeSort? That's asymptotically, right? O(10nlogn). Call the press. (Doesn't matter that Radix Sort actually takes O(n), as long as it's 10X!)

    4. BitFast start by creating two arrays of size 65535 ( pow(2,16
  • Who knew Webster's dad was such a great hacker?
  • He is from Cyprus. Most likely English is not his first language. Please forgive his typos and grammar mistakes. Looks like he is a hobbyist programmer who did not undergo any formal comp sci training. So he stumbled onto radix sort and thinks he has invented something new.

    I am not saying he is Ramanujan [wikipedia.org] or anything like that. But still every amateur deserves to be forgiven once.

    • No, usually they get slapped down hard once or twice by somebody more knowledgeable, and the ones who are actually smart and capable of learning wise up and don't let it happen again. It's part of the lengthy process of going from "newbie" to "professional". Painful, to be sure.
  • Memory leaks... (Score:4, Interesting)

    by fprog26 ( 665694 ) on Sunday February 25, 2007 @04:41PM (#18146044)
    Inside bitfast.h:
    long *Tail = new long[65535];
    long *Head = new long[65535];

    for(n = 0; n < 65535; n++){ Head[n] = 0; Tail[n] = 0; }

    Memory leak of roughly 512KB (two arrays of 65535 4-byte longs) each time the function "Node* BitFast(Node* HeadNode,long run)" is called.

    MISSING: delete[] Tail; delete[] Head;

    Furthermore the following is faster or use memset:
    long Tail[65535] = {0};
    long Head[65535] = {0};

    Unless you are running this in DOS16, DOS32 with Pharlap or smaller 8051, 68000 processor, you do not need to use new[] or malloc() for this.

    In InsertToList, Node *tmpP = (Node*)malloc(sizeof(Node));
    free(tmpP) missing inside FreeList()... no such function

    In main.c:
    Node *L1 = malloc(sizeof(Node));
    Node *L2 = malloc(sizeof(Node));
    MISSING: free(L1); free(L2);

    In main.cpp:
    Node *L1 = new Node;
    Node *L2 = new Node;

    Instead use (no need to use heap, use stack memory):
    Node L1 = {0}, L2 = {0};

    Mixing the usage of std::cout and printf() is not recommended.
    Use all of the former or all of the latter, else you will get weird display on some runs, unless you flush the output buffer. I suggest you use printf() all the way.
    cout << "Ok!!!!" << "\n\n" << "Merge Sort : " << tim1 << " ms\n" << "BitFast : " << tim2 << " ms\n\n";
    becomes:
    printf("Ok!!!!\n\nMerge Sort : %ld ms\nBitFast : %ld ms\n\n", tim1, tim2);

    Furthermore, your implementation of the linked list, and calling Length(Node*) for every EqualSplit, is highly inefficient: it requires O(n). Store the linked list length somewhere when inserting. EqualSplit takes O(1.5n) instead of O(0.5n) because of that.

    Something like:

    typedef struct LinkList {
      Node  head;
      int   length;
    } LinkList;

    As a result, MergeSort recursively calls itself, which calls EqualSplit every turn.

    You should make your header file C friendly, and avoid mixing // with /* */, malloc and new[]

    No clue why this header is needed:
    #include <iso646.h>  (instead of not/and, just use ! and &&)
    before:  if(not p) return NULL;
    becomes: if(!p) return NULL;

    Finally, BitFast is not recursive; therefore, you should try to find an implementation of merge sort which is not recursive also, so you save on function calls, if possible.

    "However, iterative, non-recursive, implementations of merge sort, avoiding method call overhead, are not difficult to code." -- Wikipedia

    http://en.wikipedia.org/wiki/Merge_sort

    Your function should be in a file.c or be inline.
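    A sketch of such an iterative (bottom-up) merge sort for a singly linked list, assuming the length is tracked as suggested above; the node layout and names are illustrative:

    #include <stddef.h>

    typedef struct MNode { long key; struct MNode *next; } MNode;

    /* Detach up to 'k' nodes starting at 'p'; return the node that follows them. */
    static MNode *cut(MNode *p, size_t k)
    {
        while (p && --k) p = p->next;
        if (!p) return NULL;
        MNode *rest = p->next;
        p->next = NULL;
        return rest;
    }

    /* Merge two sorted runs; also report the tail of the merged run. */
    static MNode *merge_runs(MNode *a, MNode *b, MNode **tail_out)
    {
        MNode dummy = { 0, NULL }, *t = &dummy;
        while (a && b) {
            if (a->key <= b->key) { t->next = a; a = a->next; }
            else                  { t->next = b; b = b->next; }
            t = t->next;
        }
        t->next = a ? a : b;
        while (t->next) t = t->next;
        *tail_out = t;
        return dummy.next;
    }

    /* Merge runs of width 1, 2, 4, ... until one run covers the whole list.
     * No recursion, so no call-stack growth. n is the list length. */
    MNode *bottom_up_msort(MNode *head, size_t n)
    {
        for (size_t width = 1; width < n; width *= 2) {
            MNode *remaining = head, *tail = NULL;
            head = NULL;
            while (remaining) {
                MNode *a = remaining;
                MNode *b = cut(a, width);          /* first run of 'width' nodes  */
                remaining = cut(b, width);         /* second run of 'width' nodes */
                MNode *run_tail;
                MNode *merged = merge_runs(a, b, &run_tail);
                if (tail) tail->next = merged; else head = merged;
                tail = run_tail;
            }
        }
        return head;
    }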
  • Merge sort is optimal. A good implementation will be as fast as any other O(n log n) algorithm, maybe
    within a factor of two, as long as general purpose CPUs are used.

    It seems this is an O(n) algorithm, of which there are two: Bucket-sort and Radix-sort. No other ones are possible. These are better in theory and practice, but have limited applicability, i.e. they do not work for all input data. Merge-sort is a universal sorting algorithm.
  • by wtarreau ( 324106 ) on Sunday February 25, 2007 @05:34PM (#18146482) Homepage
    Don't use this code !

    First, there are multiple bugs in the code itself :

    1) everywhere, it is assumed that 2^16 = 65535 and not 65536. Arrays are allocated for this size too, but are used with 65536 entries. (un)fortunately for him, the space is probably allocated anyway so the program does not crash on this.

    2) a similar bug in the random number generator carefully avoids dangerous values, because he generates numbers between min and max by doing a modulo on max-min, which prevents max from being produced. This is not a
    big deal anyway, just that it hides the bug.

    3) he assumes everywhere that "long" is unsigned and "short int" too. This makes the code SegFault on Linux with negative array indexes. The fix is obvious, though.

    And most of all : times are measured in optimal conditions. Indeed, this algorithm is
    just a radix sort with a large radix (2^16). A radix algorithm is efficient when the number
    of elements is high compared to the radix, because at every level, all radix entries have to
    be scanned for existing values. In his example, he sorts 4 million elements. His code is faster
    than the merge sort on 4 million entries, just as would be any radix sort. But on smaller sets,
    it still takes the same time per level, because it has to scan all 65536 entries even if there
    are only 100 numbers. The fact is that for less than 10000 values, the mergesort becomes faster in his own code.

    When measuring algorithm complexity, we use the O(x) notation. The time per iteration is never
    considered. When the time matters (as in his example), we should consider the time per iteration (T), and the total time (t) will follow:

          t = T . complexity

    In complexity, his algorithm is O(N) on 16 bits, but the time per iteration is very high. Other sorting algorithms are O(N.log(N)). So the total time is :

          t1 = T1 . O(N) for his algorithm, and :
          t2 = T2 . O(N.log(N)) for the mergesort.

    But when T1 > T2.log(N), he loses, which is almost always the case because log(N) is small for 16 bits, and T2 is small too because the operations are very simple. In his case, T1 is very high because it represents the time required to scan 65536 values.

    It's amazing that he fell in such a beginner's trap. I suppose that he's still a student, which would explain the total lack of portability and newbie's bugs in the code.

    Well tried anyway :-)

    Willy
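    To put numbers on the crossover (a rough sketch ignoring constant factors): with a 2^16 radix, sorting 32-bit keys takes 2 passes that each touch all 65536 buckets plus the N elements, about 2*(N + 65536) operations, while the mergesort does about N*log2(N) comparisons. At N = 4,000,000 that is roughly 8.1M versus 88M in the radix sort's favour; at N = 10,000 it is roughly 151K versus 133K, so the two are already in the same range and the hidden per-operation constants decide it, which matches the crossover he reports in his own code.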
    • Re: (Score:3, Informative)

      Good points. But we should give the guy some credit; he does state in the readme:


      This examples are for Proof of Concept only.

      They are not the most error free,optimized,commented,beautifull code
      to have ever been written


      Personally I would never write code like this, proof of concept or not; avoiding the portability problems, for example, would not have taken more time to write, just more experience. So probably the best thing to do is be constructive about it.

      I'll give it a shot on some of the issue
  • by Goaway ( 82658 ) on Sunday February 25, 2007 @07:11PM (#18147212) Homepage
    Maybe this guy has re-invented radix sort? I can't really tell! I wish somebody would post and tell me!
  • by RallyDriver ( 49641 ) on Sunday February 25, 2007 @07:20PM (#18147294) Homepage

    1. Although it's technically O(n) compared to QuickSort's O(n log n), consider the fact that

    a. the number of passes required by radix sort is proportional to the key size, and that
    b. if the keys are primarily unique, then the key length will be O(log n) .... thus radix sort is actually O(n log n) when applied to practical cases, which is why everyone uses quicksort or mergesort instead.

    2. One cute feature of radix sort for very large data sets is that you only need linear access to the data, so it can be done with non-random-access external storage. The one practical implementation I've ever heard of used four 9-track tape drives.

    However, considering use in main memory, both radix sort and quicksort make efficient use of a D-cache that is 2-way set-associative or better, and both hit the whole data set once per pass -> if it swaps, it'll swap a lot :-)

     
