How to calculate time complexity of any algorithm or program ....
need help on this..

>How to calculate time complexity of any algorithm or program ....
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

statement;

Is constant. The running time of the statement will not change in relation to N.

for ( i = 0; i < N; i++ )
  statement;

Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

for ( i = 0; i < N; i++ ) {
  for ( j = 0; j < N; j++ )
    statement;
}

Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.

while ( low <= high ) {
  mid = ( low + high ) / 2;

  if ( target < list[mid] )
    high = mid - 1;
  else if ( target > list[mid] )
    low = mid + 1;
  else break;
}

Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.

void quicksort ( int list[], int left, int right )
{
  int pivot = partition ( list, left, right );
  quicksort ( list, left, pivot - 1 );
  quicksort ( list, pivot + 1, right );
}

Is N * log ( N ). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.

In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O ( <type> ) where <type> is the measure. The quicksort algorithm would be described as O ( N * log ( N ) ).
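
A cubic algorithm, for example, would look something like the quadratic loop with a third level of nesting:

for ( i = 0; i < N; i++ ) {
  for ( j = 0; j < N; j++ ) {
    for ( k = 0; k < N; k++ )
      statement;
  }
}

Is cubic, or O ( N^3 ): when N doubles, the running time increases eightfold.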

Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)

Big Oh denotes "fewer than or the same as" <expression> iterations.
Big Omega denotes "more than or the same as" <expression> iterations.
Big Theta denotes "the same as" <expression> iterations.
Little Oh denotes "fewer than" <expression> iterations.
Little Omega denotes "more than" <expression> iterations.

Thanks for the lovely explanation Narue. I have a couple of questions though.

I assume that when you said the complexity for the following code was O(N log (N)), you meant the average case measure?

void quicksort ( int list[], int left, int right )
{
  int pivot = partition ( list, left, right );
  quicksort ( list, left, pivot - 1 );
  quicksort ( list, pivot + 1, right );
}

If I had to calculate the best and worst, how would I do that?

I tried to do best, and correct me if I'm wrong but that would be when the list has 1 element, so the best case scenario would be O(1)?

>I assume that when you said the complexity for the following code was
>O(N log (N)), you meant the average case measure?
Correct. Average and best case complexities are easier to figure out than worst case.

>If I had to calculate the best and worst, how would I do that?
To calculate the best case, simply assume a perfect partition right down the middle each time. This roughly halves the data set with each call and the time complexity is O(N log N), just like the average case, except it's a tighter bound, so while Big O is correct, you can use Big Theta to be more precise.

To calculate the worst case, assume the worst possible partition, which would basically put the pivot at the leftmost or rightmost position, so one side is empty and the other contains everything except the pivot. Each call then only shrinks the problem by one element, so the recursion is N levels deep instead of log N, and each level still does linear work in the partition step. The worst case time complexity is therefore O(N^2).
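
One way to see this: with the worst split, each call does c * N work in the partition and then recurses on N - 1 elements, so roughly

T(N) = T(N - 1) + c * N
     = c * ( N + (N - 1) + ... + 2 + 1 )
     = c * N * (N + 1) / 2

which grows like N^2.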

>I tried to do best, and correct me if I'm wrong but that would be when
>the list has 1 element, so the best case scenario would be O(1)?
You could say that, but Big O notation doesn't assume the number of elements. It's a measurement of growth as the data set gets larger. So O(1) means that if you double the number of elements, the time the algorithm takes is the same. O(N) means that if the number of elements doubles, so does the time the algorithm takes to process them.

Thanks for that, Narue.

One more question. I have understood how to calculate the O of a simple algorithm, but does that O depend upon the size of N as well? I read in an online book somewhere that O will have two values for the same algorithm, one when N is very large and another when N is small. Is that correct? If so, how do I calculate the value of, say,

"3 n^2 - 100 n + 6" for a very small n?

I know the complexity for the above will be O(n^2) for very large n but what about very small n?

O notation removes all constants so that you're left with the largest growth rate, which will be dominant as N increases. Naturally, when N is small, the constants will be the dominant factor, so it can make sense to keep them in the equation rather than removing them if N is guaranteed to be small.

>I know the complexity for the above will be O(n^2) for very large n but what about very small n?
It will still be O(n^2), but if n is 5, for example, 100n totally dominates the algorithm as opposed to if n is 5000, where n^2 dominates.
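
For example, at n = 5, 3 * n^2 = 75 while 100 * n = 500, so the 100n term swamps the n^2 term (the whole expression 75 - 500 + 6 is even negative). At n = 5000, 3 * n^2 = 75,000,000 while 100 * n = 500,000, so the n^2 term is clearly in charge.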

That's what I thought but this page says different, rather the figures on the page say different. It's got me confused.

>rather the figures on the page say different
No, they say exactly the same thing. O notation is an upper bound, not a tight bound. And without assuming the size of N, the constants are irrelevant.

But theta is a tight bound right? So why does the theta of the same equation vary?

Sorry, but I'm new to this and I think it's all in the English. My English isn't that great, which is one big reason why I'm having problems.

If you feel like reading this post slowly and carefully, it will describe what this notation _really_ means.

All functions have some kind of behavior as n grows towards infinity. For example, if f(n) = 1/n, then as n grows towards infinity, f(n) gets closer and closer to zero. Whereas if f(n) = n*n, then as n grows towards infinity, f(n) grows too.

Functions can grow at different speeds. If two functions are equal, then they obviously grow at the same speed. But wait, there's more! Two functions are deemed to grow at the same speed if they're separated by a constant multiple! For example, if f(n) = n*n and g(n) = 3*n*n, then f and g are deemed to grow at the same pace, because g(n) = 3*f(n), so they are only a constant multiple apart. That is, g(n) / f(n) = 3, as n grows arbitrarily large.

But consider the scenario where f(n) = n * n, and g(n) = n. Then what is the behavior of f(n) and g(n) as n grows arbitrarily large? Well, f(n) / g(n) = (n * n) / (n). Which simplifies to f(n) / g(n) = n. Which means that as n grows large, f(n) = n * g(n). What does this mean? f and g are in this case not separated by a constant multiple. The multiple between f and g grows larger and larger as time progresses, without stopping. We say that "f grows faster than g" when this happens. Or "g grows slower than f". Just try some sample values of n: Try 10, 100, and 1000. First, f(10) / g(10) = 100 / 10 = 10. f(100) / g(100) = 10000 / 100 = 100. f(1000) / g(1000) = 1000000 / 1000 = 1000. We can easily see, hopefully, that the ratio between f and g is not constant.

Now consider the scenario where f(n) = 2 * n * n + 1, and g(n) = n * n. In this case, our functions have the ratio f(n) / g(n) = (2 * n * n + 1) / (n * n). What is the value of this ratio as n grows towards infinity? The answer is 2. Let's simplify the expression:

(2 * n * n + 1) / (n * n) =
(2 * n * n) / (n * n) + 1 / (n * n) =
2 + 1 / (n * n).

So f(n) / g(n) = 2 + 1 / (n * n).

So as n grows large, the term 1 / (n * n) gets arbitrarily (or ridiculously) small. As n grows large, then, the value of (2 + 1 / (n * n)) gets closer and closer to 2.

We could plug in some values -- let's try 10, 100, and 1000 again.

f(10) / g(10) = (2 * 10 * 10 + 1) / (10 * 10) = 201 / 100 = 2.01
f(100) / g(100) = (2 * 100 * 100 + 1) / (100 * 100) = 20001 / 10000 = 2.0001
f(1000) / g(1000) = (2 * 1000 * 1000 + 1) / (1000 * 1000) = 2000001 / 1000000 = 2.000001.

So the ratio between these two functions approaches a constant value as n grows large. Hence, f and g are said to grow at the same pace.

In comparing the growth rates of functions, we have these rules:

1. If f(n) / g(n) grows out of control, getting larger and larger as n gets larger, then f is said to grow faster than g.
2. If f(n) / g(n) settles towards some constant positive value, then f is said to grow at the same pace as g.
3. If f(n) / g(n) gets closer and closer to zero, then this means that its reciprocal, g(n) / f(n), is growing out of control, so g is said to grow faster than f. (Or f is said to grow slower than g.)


Now on to big O notation!

Big O notation actually refers to an entire _set_ of functions. The notation O(expression) represents the entire set of functions that grow slower than or at the same pace as expression. For example, O(n^2) represents the entire set of functions that grow slower than or at the same pace as n^2.

In other words, if g(n) = n, then since n grows slower than n^2, it follows that g lies in the set O(n^2). Likewise, if h(n) = 2 * n * n + 1, then since h grows at the same pace as n^2, it follows that h lies in the set O(n^2).

It's also true that h lies in the set O(n^3), because h grows slower than n^3. (Assuming the leading term is positive, quadratic functions always grow slower than cubic polynomials, which always grow slower than fourth-degree polynomials, which always grow slower than fifth-degree polynomials, and so on.)

Because O(expression) represents all the functions that grow _slower_ than or equal to expression, it is used to represent upper bounds.

The other notations refer to different sets of functions. For example, o(expression) represents all the functions that grow slower than expression (and not at the same speed.) f(n) = n^2 + n - 1 lies in the set O(n^2), but it doesn't lie in o(n^2). g(n) = n lies in both.

Theta(expression) represents all the functions that grow at the same rate as expression.

Omega(expression) represents all the functions that grow faster than or equal to expression.

In computer programs, we are of course interested in how much time it takes programs to run, and these forms of notation are useful for representing that, so that's why we use them. When you say an algorithm runs in O(expression) time, you're just saying that the algorithm's runtime is no worse than some multiple of expression. When you use Theta, you're saying that the algorithm's runtime is some multiple of expression. And Omega is used to say that the amount of time an algorithm takes to run grows at a rate that is larger than or equal to some expression's rate of growth.

(Disclaimer: some of the statements in this post are not strictly true, mathematically speaking. One that I noticed is that big O/Theta/etc notation uses a slightly different meaning of "grows slower" and "grows at the same speed" and "grows faster" than the ones I gave.)

Wow, I'm going to copy and keep that. Thank you very very much.

Quick question, since theta is an exact rate of growth, it can't have two values right, for the same algo/function?

I don't understand your question; maybe rephrase it. But maybe this will answer it:

O(expression) is the set of functions that grow slower than or at the same rate as expression. Omega(expression) is the set of functions that grow faster than or at the same rate as expression.

Theta(expression) consist of all the functions that lie in both O(expression) and Omega(expression).

Now here's one of my attempts to target directly at what your question seems to be saying:

Suppose you've calculated that an algorithm takes f(n) operations, where f(n) = 3 * n^2 + 2 * n + 4. Since this polynomial grows at the same rate as n^2, then you could say that the function f lies in the set Theta(n^2). (It also lies in the sets O(n^2) and Omega(n^2) for the same reason.)

Here are some false statements:

f lies in Theta(n).
f lies in Theta(n^3).

These are false because the function f grows faster than n and grows slower than n^3.

However, here is a true statement:

f lies in Theta(n^2 + 1).

This is true because the function f grows at the same pace as n^2 + 1. Here are some other true statements:

f lies in Theta(2 * n^2).
f lies in Theta(3 * n^2 + 2 * n + 4).
f lies in Theta((n^2) / 45).
f lies in Theta(n^2 + 1 - 1).
f lies in Theta(n^2 + log(n)).

In each case, the function f grows at the same pace as the inside expression. (In one case, they're equal.)

So to answer your question: A function or an algorithm's runtime can be depicted by different theta expressions. However, these are equal theta expressions. Remember that Theta(n^2) refers to the set of all functions that grow at the same pace as n^2. Well, Theta(2 * n^2) refers to exactly the same set! (If a function grows at the same pace as n^2, then it also grows at the same pace as 2*n^2.)

Theta(2 * n^2) = Theta(3 * n^2 + 2 * n + 4) = Theta((n^2) / 45) = Theta(n^2 + 1 - 1) = Theta(n^2 + log(n)) = Theta(n^2).

If an algorithm's runtime has a theta bound, you can write the theta expression in different ways on _paper_, like above, but they mean the same thing. A function cannot have two _different_ theta bounds. (Different in that Theta(n^2) and Theta(n^3) represent completely different sets of functions.)

I would be writing the actual Greek character theta if I knew how to type it, by the way; writing out the word "Theta" is not normal.

Sorry, that was a stupid question. I understood my own question on a second read of your earlier post, thanks again.

Wondering if anyone could give me a hand solving this exercise:

Instructions:
Find the Big Oh for the number of times x := x + 1 is executed. You should justify your answer:

I := N
WHILE I >= 1 DO
BEGIN
  FOR J := 1 TO N DO
    X := X + 1
  I := [I/2]
END
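
In C++ terms, the same loop structure would be roughly the following (assuming [I/2] means integer division), with iteration counts noted in comments:

int x = 0;
int i = n;

while ( i >= 1 ) {                   /* outer loop runs about log2(n) + 1 times */
  for ( int j = 1; j <= n; j++ )     /* inner loop runs n times per outer pass */
    x = x + 1;
  i = i / 2;                         /* integer division halves i each pass */
}

So x = x + 1 executes roughly n * ( log2(n) + 1 ) times, which is O(n log n).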

Hey everybody, great conversation! But can you tell me how to look at an algorithm and calculate its time complexity?

Count the total number of fundamental operations the algorithm performs, relative to some value N. For example, this function..

int factorial(int n) {
    int ret = 1;
    while (n != 0) {
        ret *= n;
        -- n;
    }
    return ret;
}

The function sets the value of ret to 1. This is one operation.

Then this function tests the condition of the while loop exactly 1 + n times. It runs the body of the while loop exactly n times. The body of the while loop performs exactly 2 fundamental operations: 'multiply ret by n', and 'decrement n'. The condition has one fundamental operation, comparison. Calling the function counts as a fundamental operation, too.

So we have the total number of fundamental operations being
1 + 1 + (1 + n) + 2 * n. (The sum is composed of the operations for calling the function, setting ret = 1, performing a total of 1 + n tests in the condition of the while loop, and a total of 2 * n operations in the body of the while loop.)

This simplifies to 3 + 3 * n fundamental operations, which is O(n).

(What do I mean by 'fundamental operation'? Any operation that takes an amount of time that is bounded by a constant.)

Since O notation discards constants, there's really little need to include them in the first place if you can see a dominant growth. Straight away you can see that the loop will dominate, and it relies on the length of n, and has no nested loops, so the time complexity will be O(n).

Not everyone is a math nerd like you. ;)

Sure, that's how everybody solves the problem, but that sort of explanation lacks clarity and makes out the process to be some sort of magic voodoo.

Besides, you're really doing the same thing I explained, only you're calculating the algorithm's complexity via the route O(1) * O(n) + O(1) = O(n), substituting big O notation a bit earlier.

>Besides, you're really doing the same thing I explained
Kinda sorta. You're counting individual operations and I'm just looking for loops and deriving the complexity from a guestimate of how many times a loop iterates.

>but that sort of explanation lacks clarity
I think it adds clarity by removing the complicated intermediate junk that overwhelms so many people. Accuracy isn't always synonymous with clarity. Take the C++ standard. It's written with accuracy in mind, and the language used emphasizes that. But I don't know anyone who would say that it's immediately clear what's being said. ;)

>makes out the process to be some sort of magic voodoo.
Not really. It's all about the iteration counts. Once you have those, a simple ruleset will give you what you want. For example, n nested in n is O(n^2). n and n not nested is O(n). log n nested in n is O(n*log n). Once someone knows the notation and what each one means, it's very intuitive. The "right" way (in my experience) confuses the hell out of people. ;)
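
For example, "log n nested in n" looks something like this:

for ( i = 0; i < n; i++ ) {        /* outer loop: n iterations */
  for ( j = 1; j < n; j *= 2 )     /* inner loop: about log2(n) iterations */
    statement;
}

An O(log n) loop repeated n times gives O(n * log n), with no operation counting needed.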

That's not to say that the right way is unnecessary. I just think you're flip flopping the order when the simple way should be learned first, then once the concepts are understood, the right way can be introduced for a more thorough understanding.

The way I explained is the simple way; a complete answer takes only one sentence. "Count the number of fundamental operations that take place" is much simpler and less nerdy than any complicated description about iteration counts and nested loops.

I am looking at how to measure, in seconds or milliseconds, the runtime of an algorithm with real data, i.e. of a program.

What would be the best implementation of this? This would either be for C or C++.

However, what I am looking for, as an example, is something along the lines of how one would get a script execution time (render time) in a typical PHP program. You have all seen that calculation at the bottom of forums or other PHP-based web pages. Would this be one way to implement this?

How would I implement this without using too many system-dependent libraries (e.g. Windows), relying instead on either custom code or the standard libraries?

Any thoughts? At first I thought, before calling that algorithm function, say myTestFunction(), I would make a system call to request the time, then after the function has completed its run and returned to the original call position, make the system call again and do a simple subtraction on the two times, and that would be the result. Of course, I could just be dreaming this all up as I have yet to fully investigate it, but is this how it would be done?

Any other ways?

Thanks
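
A minimal sketch of that subtraction approach, using only the standard <chrono> library (C++11 and later); myTestFunction here is just a stand-in for whatever algorithm is being measured:

#include <chrono>
#include <iostream>

/* stand-in for the algorithm being measured */
void myTestFunction()
{
  volatile long sum = 0;
  for ( long i = 0; i < 10000000; i++ )
    sum += i;
}

int main()
{
  auto start = std::chrono::steady_clock::now();   /* time before the call */
  myTestFunction();
  auto stop = std::chrono::steady_clock::now();    /* time after the call */

  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>( stop - start );
  std::cout << "Run time: " << ms.count() << " ms\n";
}

In C, or in pre-C++11 code, clock() from <time.h> gives a similar (processor-time) measurement with the same subtract-and-convert pattern.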

You can calculate the time complexity of an algorithm using the following steps.
1. Recursive algorithm: form a recurrence relation for the algorithm and solve the relation.
2. Non-recursive algorithm: find the number of times the statements are executed.
For a detailed explanation, refer to Analysis and Design of Algorithms by Anany Levitin.
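
For example, step 1 for a routine that splits its input in half, recurses on both halves, and does linear work to combine the results (mergesort, say) gives the recurrence

T(n) = 2 * T(n/2) + c * n

and solving it (the input halves log2(n) times, with about c * n work done at each level) gives T(n) = O(n log n).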

I want to ask some questions; how can I do that?

Can you please tell me how you came to the conclusion that it is the average case? Is average case = best case?

Can somebody give me an example of exponential time complexity? In the previous replies I saw an example of everything except an exponential one!

Thanks for the good explanation! Good job. ;-)
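
A common example of exponential time complexity is the naive recursive Fibonacci function, which makes two recursive calls per step, so the amount of work roughly doubles every time n grows by one:

int fib ( int n )
{
  if ( n < 2 )
    return n;                            /* base case */
  return fib ( n - 1 ) + fib ( n - 2 );  /* two recursive calls per level */
}

The number of calls grows exponentially in n, so this is O(2^n), unlike the loop version of factorial shown earlier, which is O(n).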

"So to answer your question: A function or an algorithm's runtime can be depicted by different theta expressions. However, these are equal theta expressions. Remember that Theta(n^2) refers to the set of all functions that grow at the same pace as n^2. Well, Theta(2 * n^2) refers to exactly the same set! (If a function grows at the same pace as n^2, then it also grows at the same pace as 2*n^2.)"

My question is:

Is it only for Theta, or can it be applied to Oh and Omega also?

Find the big-Oh notation for f(n) = n + log n.

Hi! I read all the discussion about time complexity on this page. Please tell me, if there is a lengthy program, one written in C++ at that, how can I calculate its time complexity?

Is there some software which can do this job automatically?

Fahim

You find the time complexity for big programs the same way you find it for small programs.

There is no software that can do this automatically.
