Dear all, I have written code for the QuickSort algorithm that sorts random numbers and measures execution time, but now I want to do the same for counting sort and I don't know how. Can anybody help me based on this example? Thanks.

    #include <iostream>
    #include <cstdio>    // getchar
    #include <ctime>     // clock, time
    #include <cstdlib>   // srand and rand functions
    using namespace std;

    void add_random_numbers( long arr[], long b );
    void QuickSort( long vektori[], long left, long right );
    long const MADH_MAX_VEKTORIT = 110000;

    int main()
    {
        long random_vector[MADH_MAX_VEKTORIT];
        long vector[] = {1000, 10000, 15000, 25000, 30000, 45000, 50000, 60000, 75000, 90000, 100000};

        long num_elem;
        clock_t tn, tp;

        srand( time( NULL ));

        cout << "Algorithm\t\tNumbers_of_elements\t\tTime(s)\n";
        cout << "---------\t\t-------------------\t\t-------\n";

        for( long i = 0; i < 11; i++ )
        {
            num_elem = vector[i];

            add_random_numbers( random_vector, num_elem );
            tn = clock();
            QuickSort( random_vector, 0, num_elem - 1 );   // right is the last index
            tp = clock();
            cout << "Quick\t\t\t" << num_elem << "\t\t\t" << (tp - tn)/(double)CLOCKS_PER_SEC << endl;
        }
        getchar();
        return 0;
    }

    void add_random_numbers( long arr[], long b )
    {
        for( long i = 0; i < b; i++ )
            arr[i] = rand();
    }

    void QuickSort( long vektori[], long left, long right )
    {
        if( left >= right )
            return;

        int flag = 1;
        long i = left, j = right + 1, temp;
        long k = vektori[left];                   // pivot

        while( flag )
        {
            while( ++i <= right && vektori[i] < k );   // scan right for an item >= pivot
            while( --j >= left  && vektori[j] > k );   // scan left for an item <= pivot
            if( i < j )
            {
                temp = vektori[i];
                vektori[i] = vektori[j];
                vektori[j] = temp;
            }
            else flag = 0;
        }

        // put the pivot into its final position
        temp = vektori[left];
        vektori[left] = vektori[j];
        vektori[j] = temp;

        QuickSort( vektori, left, j - 1 );
        QuickSort( vektori, j + 1, right );
    }


The algorithm is pretty simple. The assumption is that there are many duplicate keys, so you can count each key and then deliver the items, a bit like loading shelves. Each key gets a first position in the output array, discovered by scanning, counting, and adding. As items are positioned in the output, the first position for that key is incremented. http://en.wikipedia.org/wiki/Counting_sort#The_algorithm


Dear David, you mean something like this?

for x in input:
    count[key(x)] += 1

# calculate the starting index for each key:
total = 0
for i in range(k):   # i = 0, 1, ... k-1
    oldCount = count[i]
    count[i] = total
    total += oldCount

# copy to output array, preserving order of inputs with equal keys:
for x in input:
    output[count[key(x)]] = x
    count[key(x)] += 1

return output

But how do I write this in C++ so it fits into the example above as a counting sort? Can you help me?
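That pseudocode translates to C++ almost line for line. Here is a rough sketch meant to sit next to the QuickSort example above: `CountingSort` is a name I made up, it takes the same array-plus-length arguments as `add_random_numbers`, and it assumes the keys are non-negative (which holds for `rand()` output). It sizes the count array from the largest key it finds in the data.

```cpp
#include <vector>

// Counting sort for non-negative keys -- a sketch following the three
// steps of the pseudocode above, not a drop-in tested solution.
void CountingSort( long arr[], long n )
{
    if( n <= 0 ) return;

    // size the counters from the largest key in the data
    long max_key = arr[0];
    for( long i = 1; i < n; i++ )
        if( arr[i] > max_key ) max_key = arr[i];

    std::vector<long> count( max_key + 1, 0 );

    // 1) count how many times each key occurs
    for( long i = 0; i < n; i++ )
        count[arr[i]]++;

    // 2) turn counts into starting positions (running total)
    long total = 0;
    for( long k = 0; k <= max_key; k++ )
    {
        long oldCount = count[k];
        count[k] = total;
        total += oldCount;
    }

    // 3) place each item at its key's next free slot (keeps equal keys in order)
    std::vector<long> output( n );
    for( long i = 0; i < n; i++ )
        output[count[arr[i]]++] = arr[i];

    // copy back so the caller's array ends up sorted
    for( long i = 0; i < n; i++ )
        arr[i] = output[i];
}
```

In your timing loop you would then refill the array and call `CountingSort(random_vector, num_elem)` between the two `clock()` calls, printing a "Counting" row the same way as the "Quick" one.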


Something like that. I interpret it this way: on a first read of the input file, I could count the instances of each key in a container of keys and counts. Then the new offset of each item is (the sum of the counts of the lower keys plus the number of earlier instances of this key) times the (fixed) item length. On the second sequential read, each item could be written to the output at random by a seek and write. If writing the output sequentially instead, I would mmap() the input file and, on the first pass, store the file offsets of every item in a container for each key. On the second pass, the item to write next can be selected by its stored offset in that container.

Similar real-life situation: I had a file of stock trading data I wanted to sort by symbol and time. I realized it was already sorted by time, so I wrote a process that copied each input item into one of 200 files in the working dir, each named for one of the first 200 symbols in the input (maybe it was 127, to make a nice b-tree). The system had an open-file limit of 256, as I recall. If a symbol not in the 200 showed up, the process wrote it to a pipe to a new instance of itself, which took care of the next 200 symbols. The first process handled the most popular symbols (something like 60% of the trading there was on 'DIA' or 'OMX', I forget). The processing was small, as each item just had to be matched against the local 200 symbols (7 or 8 compares). The amount of pipe writing was not excessive, since each successively spawned process handled ever less popular symbols. At the end, one just had to concatenate the files in alphabetical name order. There were a lot of processes, but not much CPU activity, and since they all ran the same code and had a small heap, not much RAM was needed.
