int find(int x)
{
    for (int i = 0; i < n; i++)
    {
        if (a[i] == x)
            return i;
    }

    return -1;
}


int find(int x)
{
    for (int i = 0; i < n/2; i += 2)
    {
        if (a[i] == x)
            return i;

        if (i+1 < n && a[i+1] == x)
            return i+1;
    }

    return -1;
}



int find(int x)
{
    for (int i = 0; i < n/3; i += 3)
    {
        if (a[i] == x)
            return i;

        if (i+1 < n && a[i+1] == x)
            return i+1;
        if (i+2 < n && a[i+2] == x)
            return i+2;
    }

    return -1;
}

Which is faster and why? Also, what happens if I increase the k in n/k in the loop? Will it make it faster or slower? What is the concept behind this? Thanks in advance.

Which is faster and why?

It's difficult to say, but I'd wager the first would be fastest, because the compiler is likely to be substantially better at unrolling the loop in the most efficient manner. That rests on the implicit assumption that the loop gets unrolled at all, though, so as with any performance concern, the first step is to thoroughly profile your options.

What is concept behind this?

Loop unrolling.
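For reference, here's a sketch of what a *correct* two-way manual unroll looks like (unlike the second version above, it still visits every element). The function name is my own, and I've passed the array and length as parameters rather than using the globals a and n from the question:

```cpp
#include <cstddef>

// A correctly unrolled linear search: still O(n) and still checks every
// element, but pays the loop-counter/branch overhead only once per two
// elements. Note the tail check for an odd-length array.
int find_unrolled2(const int* a, int n, int x)
{
    int i = 0;
    // Main unrolled body: two comparisons per iteration.
    for (; i + 1 < n; i += 2) {
        if (a[i] == x)     return i;
        if (a[i + 1] == x) return i + 1;
    }
    // Tail: one leftover element when n is odd.
    if (i < n && a[i] == x)
        return i;
    return -1;
}
```

The point is that unrolling changes the constant factor, not the bounds: the loop still covers indices 0 through n-1.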

Basically, I was asked to decrease the complexity of the first code from O(n). So I wrote the second. Then he told me to decrease it further from O(n/2). So I wrote the third, and he didn't say anything.

Was I correct at the time, given that he told me to decrease the complexity?
Secondly, the link also says that we can unroll the loop manually. So why do you say the first is still the fastest? Thanks, Sir James.

All three do the same thing. They look at every element in the array, one at a time, and stop when they find the value you're looking for. You have removed a little bit of loop overhead from the second, and a little bit more of the loop overhead from the third, but they are all still O(n); in the commonly used mathematical shorthand, O(n) and O(n/2) are the same thing, and the complexity of all three is the same.

A modern compiler is good at optimising, and probably better at it than you are with something this simple. Also, you've introduced bugs: the second version only examines roughly the first half of the array and the third only the first third, and what happens when the size of the array is not a multiple of two (in the second case) or three (in the third case)?

If you really need the find to be fast, because you're going to be doing it a lot, invest extra time in advance and instead of using a plain, unsorted array, either sort the array or use a hash table (or other such improved, fast-finding storage).
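To illustrate the hash-table suggestion, here's a minimal sketch using std::unordered_map, paying O(n) once up front so each later lookup is O(1) on average. The helper names are mine, not from the original post:

```cpp
#include <unordered_map>
#include <vector>

// Build a value -> index map once; subsequent finds are O(1) on average.
// emplace() ignores later duplicates, so the FIRST index of each value
// is kept, matching the linear search's behaviour.
std::unordered_map<int, int> build_index(const std::vector<int>& a)
{
    std::unordered_map<int, int> idx;
    for (int i = 0; i < static_cast<int>(a.size()); ++i)
        idx.emplace(a[i], i);
    return idx;
}

// Look up x in the prebuilt index; -1 if absent, like the original find.
int find_indexed(const std::unordered_map<int, int>& idx, int x)
{
    auto it = idx.find(x);
    return (it == idx.end()) ? -1 : it->second;
}
```

This only pays off if you search the same array many times; for a single search, the O(n) build cost already equals one linear scan.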

Scalar expressions such as the first are more efficient, as every time you do some math (especially division) in your code, it adds overhead. The only advantage here is if you can reduce your search space. In the second expression, instead of n/2, use n>>1, which is the same as dividing by two but just shifts a register, hence is likely more efficient in time. The only issue there is whether your processor instruction set will drop a bit that "falls off the end", or wrap it around. Caveat programmer! And do have fun experimenting with this stuff.

The third is almost certainly the fastest (and the second the next fastest), as it only iterates over a third of the array (and the second over half). However, this also means that the second and third functions don't do what you intend. If they did, it's unlikely there would be much difference between the performance of the three.

As far as asymptotic complexity is concerned, the previous answers already correctly pointed out that all three are O(n) and there's no point in saying something like O(n/2) (as it's just a more complex way of saying O(n)).

In the second expression, instead of n/2, use n>>1

I'd strongly advise against that. The two are very likely to generate the exact same code, but the former expresses the intent much more clearly. Also in order to be correct the functions shouldn't divide n at all.

Is there something left out about the format of the data you are searching?

There is no way to search random data faster than O(n).
However, if your data were sorted then you could search it in O(log n).
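As a sketch of the sorted case, here is an O(log n) lookup built on std::lower_bound; the function name is hypothetical, and like the original it returns the index of x or -1:

```cpp
#include <algorithm>
#include <vector>

// Binary search on sorted data: O(log n) comparisons instead of O(n).
// Precondition: a must be sorted in ascending order.
int find_sorted(const std::vector<int>& a, int x)
{
    // lower_bound returns the first position whose value is >= x.
    auto it = std::lower_bound(a.begin(), a.end(), x);
    if (it != a.end() && *it == x)
        return static_cast<int>(it - a.begin());
    return -1;
}
```

Of course, if the data isn't already sorted, the O(n log n) sort has to pay for itself across many searches.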

Also, every time you add extra code to do something the loop already does, you increase the amount of time it takes to work.

What is this for? An algorithms class? Or basic CS?
