I see that there are many questions here regarding complexity analysis, specifically regarding the Big-O notation of algorithms. I will attempt to shed some light on the subject. I will start with an introduction, then go over some common complexities, and finally we will solve a few simple examples for practice.

1. Introduction
When looking at an algorithm, we want to know its order of growth with respect to the input size given to it. Consider two simple algorithms that sort an array of [TEX]n[/TEX] cells. The first algorithm does [TEX]n[/TEX] operations in order to complete the sort, and the second does [TEX]n^2[/TEX] operations. If the array is of size [TEX]n=2[/TEX], then algorithm one will have done 2 operations while algorithm two will have done 4. When [TEX]n=10[/TEX], algorithm one will have done 10 operations, and algorithm two 100 – see the difference? When talking about large values of [TEX]n[/TEX] (as in [TEX]{n\to\infty}[/TEX]) those differences become much more significant – and we are talking just about the difference between [TEX]n[/TEX] and [TEX]n^2[/TEX], not to mention [TEX]n^3[/TEX], [TEX]n^4[/TEX], and non-polynomial ones like [TEX]2^n[/TEX]!
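If you want to see these growth rates with your own eyes, a throwaway snippet like the following (my own illustration, not one of the algorithms discussed here) prints how far apart [TEX]n[/TEX], [TEX]n^2[/TEX] and [TEX]2^n[/TEX] drift even for small [TEX]n[/TEX]:

public void compareGrowth()
{
   // Print n, n^2 and 2^n side by side for n = 1..20.
   for(int n = 1; n <= 20; ++n)
   {
      System.out.println(n + "\t" + ((long) n * n) + "\t" + (1L << n));
   }
}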
This is what the Big-O notation is for – expressing the worst-case growth order of an algorithm given input data of size [TEX]n[/TEX]. Knowing whether an algorithm behaves at worst like [TEX]n[/TEX] or like [TEX]n^2[/TEX] can make a big difference when choosing your algorithm or simply evaluating how good it is time-wise or space-wise.

This is the place to note that complexity can be evaluated as time complexity or space complexity. Time complexity asks how much time, or how many operations, the algorithm takes in proportion to [TEX]n[/TEX]. Space complexity asks how much space – how many variables, arrays, etc. – we have used in proportion to [TEX]n[/TEX] in order to complete our mission.
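As a quick illustration of the difference (a sketch of my own, with hypothetical method names): both methods below reverse an array in [TEX]O(n)[/TEX] time, but the first uses [TEX]O(n)[/TEX] extra space while the second uses only [TEX]O(1)[/TEX]:

public int[] reversedCopy(int[] arr)
{
   // Allocates a whole new array of size n – O(n) extra space.
   int[] result = new int[arr.length];
   for(int i = 0; i < arr.length; ++i)
   {
      result[i] = arr[arr.length - 1 - i];
   }
   return result;
}

public void reverseInPlace(int[] arr)
{
   // Swaps elements from both ends inward – one temporary only, O(1) extra space.
   for(int i = 0, j = arr.length - 1; i < j; ++i, --j)
   {
      int tmp = arr[i];
      arr[i] = arr[j];
      arr[j] = tmp;
   }
}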

Saying it more formally (don't be alarmed – it's a little bit of math, and if you don't get it right away, read on and it may become clearer later): we say that [TEX]f(x)[/TEX], our algorithm, is [TEX]O(g(x))[/TEX] if there exist constants [TEX]M[/TEX] and [TEX]x_0[/TEX] such that [TEX]|f(x)| \leq M*|g(x)|[/TEX] for all [TEX]x > x_0[/TEX]. It basically means that past a certain point, [TEX]|f(x)|[/TEX] is always at most a constant multiple of [TEX]|g(x)|[/TEX]. Using the examples from before, our first algorithm, the one that sorts an array of size [TEX]n[/TEX] in [TEX]n[/TEX] operations, is [TEX]O(n)[/TEX] – we can take [TEX]M=1[/TEX] and get [TEX]algorithm(n) \leq 1*n[/TEX] for every [TEX]n[/TEX].
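To make the definition concrete, here is a small worked example (my own illustration, not from the sources credited below): suppose an algorithm performs exactly [TEX]f(n)=3n+5[/TEX] operations. Then [TEX]f[/TEX] is [TEX]O(n)[/TEX], because for all [TEX]n > 5[/TEX] we have [TEX]3n+5 \leq 3n+n = 4n[/TEX], so the definition holds with [TEX]M=4[/TEX] and [TEX]x_0=5[/TEX].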

2. Common O's
In the programming world you will encounter some Big-O notations more than others. To give you some idea of what you're looking at when you see one, here are the most notorious ones. Note – I will describe them as time complexities, but space complexity works exactly the same way, just with different examples.

[TEX]O(1)[/TEX] – The friendliest of the bunch. [TEX]O(1)[/TEX] basically means constant time, with no regard to the given input size. Give it an array of size 20 or an array of size 20,000 and it will return after the same number of operations. An example of such an algorithm is a function which accepts an array and returns its first element:

public int returnFirstElement(int[] arr)
{
   return arr[0];
}
[TEX]O(n)[/TEX] – The algorithm is linear, that is, in direct proportion to the given input size. Give it input of size [TEX]n[/TEX] and it will return the result after at most [TEX]M*n[/TEX] operations. Remember, when talking about complexity analysis we are talking about really large values of [TEX]n[/TEX], so the size of [TEX]M[/TEX] doesn't matter because it is constant. An example of an [TEX]O(n)[/TEX] algorithm is one that takes an array and prints all of its elements:

public void printAllElements(int[] arr)
{
   for(int i = 0; i < arr.length; ++i)
   {
      System.out.println(arr[i]);
   }
}

Another example is the following algorithm, which checks whether a certain character appears in a string:

public boolean isCharInside(String str, char c)
{
   boolean flag = false;
   for(int i = 0; i < str.length(); ++i)
   {
      if(str.charAt(i) == c)
      {
         flag = true;
      }
   }
   return flag;
}

Notice that it doesn't matter whether flag becomes true after one loop iteration or never, since the loop will always finish all of its [TEX]n[/TEX] cycles before returning the flag value.

[TEX]O(n^2)[/TEX] – The algorithm is proportional to the square of the given input size (by proportional I mean that you may multiply it by that constant [TEX]M[/TEX] discussed earlier). Give it [TEX]n=1[/TEX] and we will have only one operation, give it 2 and it will jump to 4, give it 10 and it will jump to 100. The growth rate is higher than [TEX]O(n)[/TEX]. An example of such an algorithm is one which prints out a two-dimensional array:

public void print2DArray(int[][] arr2d)
{
   for(int i = 0; i < arr2d.length; ++i)
   {
      for(int j = 0; j < arr2d[i].length; ++j)
      {
         System.out.println(arr2d[i][j]);
      }
   }
}

We have [TEX]n*n=n^2[/TEX] cells in the 2D array and we iterate over all the cells once and print them, therefore our algorithm is [TEX]O(n^2)[/TEX].

[TEX]O(\log(n))[/TEX] – First of all, in the computer world [TEX]\log(n) = \log_2(n)[/TEX]. It is better than [TEX]O(n)[/TEX] and is the notation that is hardest to build a good intuition for. Let's think about the following algorithm: we have a pile of stones (if you want to stick to the computer world, imagine a pile of bits), and each turn we take half of the stones and toss them away. We continue to do that until we are left with one stone (for the sake of this algorithm, let's assume we start with a power of two). We start with [TEX]n[/TEX] stones. After the first round we have tossed half and are left with [TEX]\frac{n}{2}[/TEX] stones. After the second round we are left with [TEX]\frac{n}{2*2} = \frac{n}{2^2}[/TEX] stones, and after the third round with [TEX]\frac{n}{2^3}[/TEX]. After [TEX]k[/TEX] rounds, then, we are left with [TEX]\frac{n}{2^k}[/TEX] stones. Say the algorithm stops after [TEX]x[/TEX] rounds, when only one stone remains. To find how many rounds it took, we solve the equation [TEX]\frac{n}{2^x}=1 \Rightarrow n=2^x \Rightarrow x=\log_2(n)[/TEX]. Had we divided by 3 each round, the same calculation would have given [TEX]x=\log_3(n)[/TEX].

[TEX]O(2^n)[/TEX] – Exponential time. May God have mercy on our souls, because this thing is fast-growing: the number of operations doubles for each new element in the input! For [TEX]n=1[/TEX] we will have 2 operations, for [TEX]n=2[/TEX] 4 operations, for [TEX]n=3[/TEX] we have already jumped to 8 operations, and for [TEX]n=10[/TEX] it is already 1024 operations.
99% of the time, show your boss an exponential algorithm and you'll have to clear your desk :)
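To attach some code to this case – a minimal sketch of my own, not from the sources credited below – here is the classic naive recursive Fibonacci. Each call can spawn two further calls, so the call tree roughly doubles with every level; its running time is bounded by [TEX]O(2^n)[/TEX] (the tight bound is actually a little smaller, roughly [TEX]1.62^n[/TEX], but it is exponential either way):

public long fib(int n)
{
   // Base cases: fib(0) = 0, fib(1) = 1.
   if(n <= 1)
   {
      return n;
   }
   // Two recursive calls per invocation – this is what makes it exponential.
   return fib(n - 1) + fib(n - 2);
}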

3. Some practice
Now, enough talk, let's analyze! I will give you some brief examples that I've seen people struggle over in the forum.
A simple loop: Let's take this simple loop for our first example:

public void printMessage(int n)
{
   for(int i = 0; i < n; ++i)
   {
      print("woohoo! I am a loop!")
   }
}

For a given [TEX]n[/TEX], the algorithm will print the message "woohoo! I am a loop!" [TEX]n[/TEX] times. Give it [TEX]n[/TEX] – it will do [TEX]n[/TEX] operations, and by definition our algorithm has [TEX]O(n)[/TEX] complexity. Note that operations such as printing and simple arithmetic such as addition, multiplication, division and subtraction are always considered to be [TEX]O(1)[/TEX].
Two simple loops: Let's take a nested loop as our next example:

public void printMessage(int n)
{
   for(int i = 0; i < n; ++i)
   {
      print("woohoo! I am the outer loop!")
      for(int j = 0; j < n; ++j)
      {
         print("woohoo! I am the inner loop!")
      }
   }
}

The outer loop runs from 0 to n-1, a total of [TEX]n[/TEX] iterations. In each outer iteration, when we reach the inner loop, we enter another loop that also runs from 0 to n-1, a total of [TEX]n[/TEX] iterations as well. This means that for every outer iteration, the inner loop does [TEX]n[/TEX] iterations. With [TEX]n[/TEX] outer iterations, in each of which the inner loop does [TEX]n[/TEX] iterations, we have [TEX]n*n=n^2[/TEX] iterations for a given [TEX]n[/TEX], making the complexity of the algorithm [TEX]O(n^2)[/TEX].
Different increments: Let's see another loop:

public void printMessage(int n)
{
   for(int i = 0; i < n; i+=2)
   {
      print("woohoo! I am a loop!")
   }
}

This loop also prints the happy message, but each iteration i jumps by two instead of by one, meaning in the first iteration [TEX]i=0[/TEX], in the second [TEX]i=2[/TEX], in the third [TEX]i=4[/TEX], and so on until i reaches [TEX]n[/TEX]. i now gets to [TEX]n[/TEX] twice as fast – in [TEX]\frac{n}{2}[/TEX] steps instead of [TEX]n[/TEX]. So, what is the complexity? It is still [TEX]O(n)[/TEX], just like when the increment was 1! Let's remember what [TEX]O(n)[/TEX] means – it means there exists a constant [TEX]M[/TEX] such that [TEX]algorithm \leq M*n[/TEX]. In our case [TEX]M=1[/TEX] certainly works, since the algorithm does about [TEX]\frac{n}{2}[/TEX] operations, which is [TEX]\leq n[/TEX] – exactly what we want. It doesn't matter if the increment is 1, 2, 5 or 20 – a constant does not change the complexity!
Another different increment: So, we have seen that increments by 1, 2, or 200 do not change the complexity. What about this loop:

public void printMessage(int n)
{
   for(int i = 1; i < n; i*=2)
   {
      print("woohoo! I am a loop!")
   }
}

This time i doubles itself each iteration (note that i must start at 1, not 0 – zero doubled is still zero, and the loop would never end). This is not the same as constant increments: in the first iteration [TEX]i=1=2^0[/TEX], in the second iteration [TEX]i=2=2^1[/TEX], in the third iteration [TEX]i=2*2=2^2[/TEX], in the fourth iteration [TEX]i=2^3[/TEX]. After k iterations [TEX]i=2^{k-1}[/TEX]. Say the loop stops after [TEX]x[/TEX] iterations, when i reaches [TEX]n[/TEX]; then [TEX]2^{x-1}=n \Rightarrow x=\log_2(n) + 1[/TEX]. Since the "+1" is a constant we can throw it away, as it does not affect the overall complexity of the algorithm. We get that the complexity is [TEX]O(\log(n))[/TEX].
4. Credit goes where credit is due
In order to write this introduction I have used A Beginners' Guide to Big O Notation by Rob Bell and the Wikipedia entry on Big O notation. A lot of thanks goes to ~s.o.s~ for commenting and helping me make this intro better.

I hope that this little introduction made complexity analysis a little bit simpler. If you have any more questions or more helpful examples, please comment.

A few articles you might be interested in:

A few comments:

  • Your use of pseudo code is kind of confusing; I think using a language like Java, Python etc. might be more clear
  • Try using LaTeX for writing down equations, or at least write them in [icode][/icode] tags for clarity. The way it is right now, it seems really cluttered.

E.g. in LaTeX, i=2^x looks like [TEX]i=2^x[/TEX]

I think this can be very helpful to people who are confused about all-them-O-notations, if only they bother to use the search function :D

Excellent post. Since we're receiving a lot of questions about Big-O, I've decided to make this post a "sticky" in the CS forum.
So this means that all future replies to this thread must be on-topic. Do not post your support questions in this thread – start a new thread for them. All off-topic posts/hijacks will be deleted to ensure that this thread stays clean.

Basic info is basic. ;) Since this is now a sticky, I'd like to see the addition of a common class of examples that tend to cause confusion:

for (int i = 0; i < N; i++) {
    for (int j = i; j < N; i++)
        System.out.println("What be the complexity?");
}

It's O(N^2), but the reason isn't immediately obvious because those not comfortable with Big O focus on the variance of the inner loop rather than the upper bound. A thorough analysis would be an amusing aside, if you feel so inclined.

You probably meant to increment j in the second loop :) Let's look at the corrected code:

for (int i = 0; i < N; i++) {
    for (int j = i; j < N; i++)
        System.out.println("What be the complexity?");
}

The outer loop is going from 0 to N-1, meaning N iterations. In the first iteration i=0, and the inner loop iterates from j=i=0 up to N-1, meaning N iterations. In the second outer iteration i=1, and the inner loop iterates from j=i=1 up to N-1, meaning N-1 iterations. For outer iteration number k we get i=k-1, and the inner loop iterates from j=k-1 up to N-1, meaning N-(k-1) iterations. Let's sum all the iterations that the inner loop is doing: [TEX]N + (N-1) + (N-2) + \dots + 2 + 1[/TEX]. Using the sum of an arithmetic progression we can calculate this: [TEX]N + (N-1) + \dots + 2 + 1 = \frac{N(N+1)}{2} = \frac{N^2}{2} + \frac{N}{2}[/TEX]. Now, as we saw before, constant factors don't count, since we can absorb them into M, so we are looking at [TEX]O(N^2+N)[/TEX]; and since we care about the worst-case growth rate and [TEX]N^2[/TEX] grows much faster than [TEX]N[/TEX], we are left with [TEX]O(N^2)[/TEX].
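If you want to double-check the arithmetic empirically, here is a small counting sketch (a hypothetical helper of my own, not part of the analysis above):

public long countInnerIterations(int n)
{
   long count = 0;
   for(int i = 0; i < n; i++)
   {
      // The inner loop starts at i, exactly as in the example above.
      for(int j = i; j < n; j++)
      {
         ++count;
      }
   }
   return count;
}

For example, countInnerIterations(10) returns 55, which is exactly [TEX]\frac{10*11}{2}[/TEX].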

This analysis is dedicated to Narue the Code Princess.

What about O(n log n)? You did not talk about that. How good is O(n log n)? Please give an example!

The examples that are mostly used for [TEX]O(n\log n)[/TEX] are a little more complex, like Merge Sort. Those examples use things like the Master Theorem, which is outside the scope of this brief introduction. If you want a simple example of an algorithm that runs in [TEX]O(n\log n)[/TEX] complexity, let's take a look at the following nested loop:

for(int i = 0; i < n; ++i)
{
   System.out.println("woohoo! I am the outer loop!")
   for(int j = 0; j < n; j*=2)
   {
      System.out.println("woohoo! I am the inner loop!");
   } 
}

Following the previous examples, we know that the inner loop runs in [TEX]O(\log n)[/TEX] and the outer loop runs exactly n times, meaning [TEX]O(n)[/TEX]. Together, the complexity of the algorithm is [TEX]O(n\log n)[/TEX].

[TEX]O(n\log n)[/TEX] is an important complexity for sorting, since all comparison-based sorting algorithms need at least [TEX]O(n\log n)[/TEX] comparisons in order to complete the sort for most inputs.

Speaking of worthless contrived examples, here's mine:

An [TEX]O(n\log n)[/TEX] algorithm:

for (int i = 0, e = int(n * log(n)); i < e; ++i) {
    printf("ho ");
}

How illuminating!

all comparison-based sorting algorithms need at least [TEX]O(n\log n)[/TEX] comparisons in order to complete the sort for most inputs.

This is not true at all. Comparison-based sorting algorithms need at least [TEX]\Omega(n\log n)[/TEX] comparisons.

I wanted to add that MIT's OCW has a nice lecture series on algorithms. They can be viewed here. The first 2 lectures cover the notation pretty well. It may help a lot to see and hear professors explain it alongside a nice reading like the OP provided.

This is my second post in this topic, sir. May I know what is the difference between Big O and Theta ([TEX]\Theta[/TEX]) notation?

[TEX]f[/TEX] is in [TEX]\Theta(g)[/TEX] if and only if [TEX]f[/TEX] is in [TEX]O(g)[/TEX] and [TEX]g[/TEX] is in [TEX]O(f)[/TEX].
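For a quick worked example (my own, for illustration): [TEX]f(n)=n^2+n[/TEX] is [TEX]\Theta(n^2)[/TEX], since [TEX]n^2+n \leq 2n^2[/TEX] for [TEX]n \geq 1[/TEX] (so [TEX]f \in O(n^2)[/TEX]) and [TEX]n^2 \leq n^2+n[/TEX] (so [TEX]n^2 \in O(f)[/TEX]). By contrast, [TEX]n[/TEX] is [TEX]O(n^2)[/TEX] but not [TEX]\Theta(n^2)[/TEX], because [TEX]n^2[/TEX] is not [TEX]O(n)[/TEX].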

Hope that helps.
