Hey guys! I need some quick help.
I heard about a sorting algorithm that I want to use, but I can't find any info about it. So if any of you know the name of this sort, or a place where I can read about it, that would be great :icon_mrgreen:
What the algorithm does is:
it sorts n items using an array whose size is the largest key.
The items go into the array according to their keys.
Let's say the keys are 1, 3, 6, 9, 12.
The array would look like this:
_|1|_|3|_|_|6|_|_|9|_|_|12|
Then we gather everything into an array the size of the number of items, like this:
1|3|6|9|12|
and it's sorted :cool:
It can also just increment counters in the array, like this:
0|1|0|1|0|0|1|0|0|1|0|0|1|
I really need the info today!! Any help would be highly appreciated!!!

Looks like a pigeonhole sort to me.
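For what it's worth, here is a minimal sketch of pigeonhole sort in Python matching the two steps described above (scatter each item into the hole at its key, then gather the holes in order). The function name and the use of lists-of-lists for holes are my own choices for illustration:

```python
def pigeonhole_sort(items):
    """Sort integers by dropping each into its own 'hole', then gathering."""
    lo, hi = min(items), max(items)
    holes = [[] for _ in range(hi - lo + 1)]  # one hole per possible key
    for x in items:
        holes[x - lo].append(x)               # step 1: scatter by key
    # step 2: gather the holes in index order
    return [x for hole in holes for x in hole]

print(pigeonhole_sort([9, 1, 12, 3, 6]))  # → [1, 3, 6, 9, 12]
```

Note the holes array is sized by the key range, not the item count, which is where the memory cost comes from.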

I didn't quite understand your description, but it sounds awfully familiar, especially the part about storing the number of occurrences. If I read it right, that's a counting sort, which is very quick, but memory-wise it can get out of hand because you must allocate an array covering the whole key range up front.
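A quick counting-sort sketch in Python, which reproduces the 0|1|0|1|... counter array from the original post (the function name is just for illustration):

```python
def counting_sort(items):
    """Sort non-negative integers by counting occurrences of each key."""
    counts = [0] * (max(items) + 1)
    for x in items:
        counts[x] += 1             # increment the counter at index = key
    out = []
    for key, n in enumerate(counts):
        out.extend([key] * n)      # emit each key as often as it was seen
    return out

# counting_sort([1, 3, 6, 9, 12]) builds the counter array
# 0|1|0|1|0|0|1|0|0|1|0|0|1| internally before emitting the keys.
```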

The first step is hashing, using the data/keys as array addresses. The second step just reduces the sparseness by gathering the items.

You calculate the hash keys numerically from the data and store each item according to its key. To find something, calculate its key and retrieve it from the array. It's "sorted" in the sense that you can always find an item again in one step, even though things are scattered about...

Normally, the first step is all that is used, so that the keys are self-indexing. In the second step, you lose that advantage. Of course, the ideal hash function must strike the right balance between sparseness and collisions. You could get a duplicate key, and must keep track of that in a collision table. I think this is what terabyte hard drives are good for... To be useful, some other data must be stored along with whatever was used to calculate the key. Typically these are the other columns in a database besides the ones indexed.
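To illustrate the collision handling mentioned above, here is a toy hash table in Python that resolves duplicates by chaining entries in the same bucket (the class name and bucket layout are my own, purely for illustration):

```python
class ChainedHashTable:
    """Toy hash table: key -> bucket index; collisions chained in a list."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # key already present: overwrite
                return
        bucket.append((key, value))        # collision: chain in this bucket

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:                # one step to the bucket,
            if k == key:                   # short scan within the chain
                return v
        raise KeyError(key)
```

Lookup is one hash computation plus a short scan of the chain, which is the "find it again in one step" property described above.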

It sounds to me like you are describing a bucket sort, or a radix sort where you use the entire integer to place items in buckets instead of parts of it. It is fast, but not very space-efficient. It runs in practically O(n) (linear) time.
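For comparison, here is an LSD radix sort sketch in Python, which buckets by one digit at a time rather than by the entire integer, trading a few passes for a fixed number of buckets (names and base choice are mine, for illustration):

```python
def radix_sort(items, base=10):
    """LSD radix sort: bucket by each digit, least significant first."""
    biggest = max(items)
    exp = 1
    while biggest // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in items:
            buckets[(x // exp) % base].append(x)  # bucket by current digit
        items = [x for b in buckets for x in b]   # stable gather, keep order
        exp *= base
    return items
```

Because each gather is stable, after processing the highest digit the list is fully sorted, using only `base` buckets instead of one slot per possible key.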
