Hey guys,

I'm reading this thing about "Time Complexities", but I'm having a hard time wrapping my mind around it. I know that it's all about different algorithms taking different amounts of time for different amounts of instructions and data. What I don't understand is how to use it... you see, I don't even know exactly what to ask :)

Any help would be appreciated

"Time Complexities" is really just a measure of what running times you should expect as a function of the amount of data you are working on. It is not a precise measure (at least it rarely is), and can't really be "used". It's more like a measure of how good an algorithm is (often, compared to another algorithm or to how "difficult" the problem is).

For example, if I have an array that I want to fill with 0 values, then I would go through each element of the array and set it to 0. I don't really know how much time it takes to set one element to 0, but I know that it takes the same amount of time for any element; in other words, the time for this operation is a constant, let's call it A. Then, since I will go through each of the N elements of the array, the time it takes is A*N. But since I really don't know A (without doing performance tests on my computer), I just say that the running time is proportional to N; another way of saying it is that it is of the order of N, and hence, the big-O notation O(N).
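In code, that array-filling example might look something like this (the size N = 100 is just an arbitrary choice for illustration):

#include <vector>

int main() {
  const int N = 100;
  std::vector<int> arr(N);

  // Each assignment takes the same constant time A, and it is
  // done N times, so the total is A*N, i.e., O(N).
  for(int i = 0; i < N; ++i)
    arr[i] = 0;

  return 0;
}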

The point is that this measures how scalable an algorithm is. Any problem is trivial when N is really small (e.g., if you have 5 elements to sort, it doesn't matter much which algorithm you use, it is going to take a ridiculously small amount of time). What does matter is when N gets very big. If one algorithm is O(N) and another is O(N^2), then the first algorithm is much faster when N is large, regardless of the unknown constant term A. That's why it is a good measure of performance. Note also that if you have different inputs of different sizes, they can go into the big-O notation too; for example, when multiplying an MxN matrix with a vector of N elements, you would expect a complexity of O(M*N).
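To make the matrix-vector case concrete, here is a rough sketch (the sizes and values are made up purely for illustration):

#include <vector>

int main() {
  const int M = 4, N = 3; // arbitrary sizes, just for illustration
  std::vector< std::vector<double> > mat(M, std::vector<double>(N, 1.0));
  std::vector<double> vec(N, 2.0);
  std::vector<double> result(M, 0.0);

  // M rows, each needing N multiply-adds: O(M*N) total.
  for(int i = 0; i < M; ++i)
    for(int j = 0; j < N; ++j)
      result[i] += mat[i][j] * vec[j];

  return 0;
}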

Typically you will hear about the "Worst-case Time Complexity" and the "Expected Time Complexity". For the worst case, imagine that the algorithm has an evil adversary who feeds it the worst possible input (and, if the algorithm is not deterministic, assume the most "unlucky" run); this is called the pathological case of the algorithm, and the time complexity that results from it is the worst-case time complexity. The expected time complexity, on the other hand, is basically the average time complexity you would expect for typical inputs. For example, a pathological case for many simple sorting algorithms is an array whose elements are already sorted but in exact reverse order, while the expected cases are often a uniformly random distribution of the elements or a partially sorted array.
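To illustrate, here is a sketch (a standard example, not specific to any particular library) of insertion sort, one of those simple sorting algorithms: on an already-sorted array the inner loop never runs, giving O(N), while on a reverse-sorted array every element gets shifted all the way back, giving O(N^2):

#include <vector>

// Insertion sort: O(N) on already-sorted input (a best case),
// O(N^2) on reverse-sorted input (the pathological worst case).
void insertionSort(std::vector<int>& a) {
  for(int i = 1; i < (int)a.size(); ++i) {
    int key = a[i];
    int j = i;
    while(j > 0 && a[j-1] > key) { // on reversed input, shifts all i elements
      a[j] = a[j-1];
      --j;
    }
    a[j] = key;
  }
}

int main() {
  std::vector<int> worst = {5, 4, 3, 2, 1}; // exact reverse order: worst case
  std::vector<int> best  = {1, 2, 3, 4, 5}; // already sorted: O(N) case
  insertionSort(worst);
  insertionSort(best);
  return 0;
}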

Of course, at the end of the day, when you apply a method to a particular problem, you will have a concrete number N (and whatever other numbers go into the O() formula for the algorithm), which won't be infinite. So, sometimes the "worse" algorithm (as measured by its big-O) can turn out to be faster than a "better" one for a particular problem; typically, if N is small, the unknown constant term can make a big difference, but finding that out requires you to test the different algorithms on your specific problem. In general, if you don't want to bother with all the testing and micro-optimization, or you are not sure of the exact circumstances in which you will apply it, then you just pick the algorithm with the least time complexity and it should be pretty good.
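As a sketch of what such a test might look like (the array contents, values, and repetition count here are entirely made up for illustration): on a tiny array, the "worse" O(N) linear search can actually beat the O(log N) binary search, because its constant factor is smaller.

#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> data = {1, 3, 5, 7, 9}; // tiny N, so constants dominate
  const int reps = 1000000;
  long hits = 0; // accumulate results so the compiler can't skip the work

  auto t0 = std::chrono::steady_clock::now();
  for(int r = 0; r < reps; ++r)
    hits += (std::find(data.begin(), data.end(), 9) != data.end()); // O(N)
  auto t1 = std::chrono::steady_clock::now();
  for(int r = 0; r < reps; ++r)
    hits += std::binary_search(data.begin(), data.end(), 9);        // O(log N)
  auto t2 = std::chrono::steady_clock::now();

  using std::chrono::duration_cast;
  using std::chrono::milliseconds;
  std::cout << "linear: " << duration_cast<milliseconds>(t1 - t0).count() << " ms\n"
            << "binary: " << duration_cast<milliseconds>(t2 - t1).count() << " ms\n"
            << "(hits: " << hits << ")\n";
  return 0;
}

The actual numbers will vary by machine and compiler, which is exactly why you have to measure on your specific problem.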

Thanks, mike_2000_17.
Makes sense. I'm trying to write some simple lines of code for those algorithms, like O(1), O(n), O(n^2) and so on... like this:

O(n^2):
------

#include <iostream>
using namespace std;

int main() {
  int count1 = 0, count2 = 0;
  const int N = 30;
  for(count1 = 1; count1 <= N; count1++) {
    for(count2 = 1; count2 <= N; count2++) {
      cout << count1 + count2; // this statement runs N*N times -> O(N^2)
    }
  }
  int count = count1 + count2; // a single assignment: O(1), doesn't change the O(N^2)
  return 0;
}

O(n):
------

int main() {
  int sum = 0, i = 0;
  const int N = 30;
  while(i < N) { // the loop body executes N times -> O(N)
    sum += i;
    i++;
  }
  return 0;
}

are these right?

But I am not quite getting the constant time complexity (O(1)) or the logarithmic ones.

Any ideas?

Thanks

A constant-time algorithm is trivial:

int main() {
  int i = 0; //this "algorithm" is O(1)
  return 0;
}

Constant-time just means an algorithm (or part of an algorithm) that takes the same time to execute without any dependence on the number of elements or size of data or anything like that. For example, a simple (naive) algorithm to take the average of an array could be:

int main() {
  int numbers[] = {1,2,2,4,6};
  int sum = 0;

  for(int i=0; i < 5; ++i)
    sum += numbers[i];      //taking the sum is O(N), here N = 5.

  double avg = sum / 5.0;   //computing the average from the sum is O(1).
  return 0;
}

Also, note from the above the "standard" (or at least usual) way of writing a for-loop in C++: arrays are indexed from 0 to N-1.

In the little code above, the first "algorithm" (taking the sum) runs in O(N) because the more data you have, the more time it takes, in linear proportion. The second "algorithm" (getting the average from the sum) is O(1) because it takes the same time regardless of the amount of data. The overall algorithm, however, is O(N) because the constant part is negligible compared to the linear part. This is why you don't see things like O(N^2 + N + 1) in this "order of magnitude" notation: the fastest-growing term always subsumes the others when N is taken to be large.
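As for the logarithmic ones: the classic O(log N) example is binary search in a sorted array, because each step discards half of the remaining elements, so it takes about log2(N) steps. A sketch (a standard illustration, not tied to the code above):

#include <vector>

// Binary search in a sorted array: each iteration halves the
// remaining search range, so it runs in O(log N).
int binarySearch(const std::vector<int>& a, int target) {
  int lo = 0, hi = (int)a.size() - 1;
  while(lo <= hi) {
    int mid = lo + (hi - lo) / 2;
    if(a[mid] == target) return mid;
    if(a[mid] < target)  lo = mid + 1; // discard the lower half
    else                 hi = mid - 1; // discard the upper half
  }
  return -1; // not found
}

int main() {
  std::vector<int> a = {1, 2, 2, 4, 6, 8, 10};
  int idx = binarySearch(a, 6); // finds index 4 in about log2(7) steps
  return 0;
}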

Thanks again, mike_2000_17.

You helped me a bunch!

From what I understand here:

for(int i=0; i < 5; ++i)
  sum += numbers[i]; //taking the sum is O(N), here N = 5.
if I have inner loops, let's say a total of 3 nested for loops, and I have a counter to track the number of instructions / the time it takes to run them, then my notation would be N^3...? Or is there more to it than just that?
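For example, something like this is what I have in mind (just my attempt at counting the iterations with a counter):

#include <iostream>
using namespace std;

int main() {
  const int N = 10;
  long count = 0; // counts how many times the innermost statement runs

  for(int i = 0; i < N; ++i)
    for(int j = 0; j < N; ++j)
      for(int k = 0; k < N; ++k)
        ++count; // executes N*N*N times

  cout << count << endl; // prints 1000, i.e., N^3
  return 0;
}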
