Hello,

I'm trying to understand Big O notation, specifically to understand how to calculate it for a given program. I have my lecture notes but I don't exactly get what they're saying.

I understand that there is a solid definition of Big O. For example:

F(N) = O(G(N)) reads that F of N is Big O of G of N

Then my lecture notes bring me a confusing line:

F(N) = O(G(N)) if there exist two constants c and k such that F(N) <= c*G(N) for all N >= k

Can someone explain this line to me?

My professor went ahead and supplied a proof by example, which I absolutely do not understand. It says:

Suppose F(N) = 2N^2 + 4N + 10
F(N) = O(G(N)) where G(N) = N^2

PROOF:
F(N) = 2N^2 + 4N + 10
F(N) <= 2N^2 + 4N^2 + 10N^2 for N >= 1 (Why did he turn every term into a multiple of N^2? I know he didn't multiply the whole function by N^2, because then the first term would be N^4 and the second would be N^3.)
F(N) <= 16N^2
F(N) <= 16G(N), where c = 16 and k = 1
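
Rereading it while typing this up, maybe the step is done term by term? Since N >= 1, we have N <= N^2 and 1 <= N^2, so:

2N^2 <= 2N^2
4N <= 4N^2 (because N <= N^2 when N >= 1)
10 <= 10N^2 (because 1 <= N^2 when N >= 1)

Adding those up gives 2N^2 + 4N + 10 <= 16N^2. Is that all he did?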

I understand that there are different classes of Big O:
O(1) for constant complexity
O(N) for linear complexity
O(N^2) for quadratic complexity
I understand how to get those three for a given program.
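
For example, here are toy snippets (my own made-up examples, not from the notes) that I'm fairly confident I can classify:

#include <vector>
#include <cstddef>

// O(1): one step, no matter how big v is
int first_element(const std::vector<int>& v) {
    return v[0];
}

// O(N): one pass over the data
long long sum(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v) s += x;
    return s;
}

// O(N^2): a loop nested inside a loop over the same data
bool has_duplicate(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}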

But for these three:
O(2^N)
O(LOG N)
O(N LOG N)

I don't understand :confused:

Note that when I say "for a given program," I mean C++ code. My professor went ahead and gave me examples of Big O based on F(N), which I also don't get ><

F1(N) = 10N + 25N^2          → O(N^2)
F2(N) = 20N LOG N + 5N       → O(N LOG N)
F3(N) = 12N LOG N + 0.05N^2  → O(N^2)
F4(N) = N^(1/2) + 3N LOG N   → O(N LOG N)
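
My best guess is that you keep whichever term grows fastest and throw away the constant factors, e.g. in F3 the 0.05N^2 term eventually outgrows 12N LOG N no matter how small 0.05 is, and in F4 the square root grows slower than N LOG N. Is that really all there is to it?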

I'm basically copying my lecture notes here in hopes of understanding what they mean. Please help :'(


> F(N) = O(G(N)) reads that F of N is Big O of G of N

That's not a "solid definition", that's a description of how you read the notation out loud.

> F(N) = O(G(N)) if there exist two constants c and k such that F(N) <= c*G(N) for all N >= k

Now that's a solid definition. It means what it says: there are two numbers c and k where it's true that if N >= k, then F(N) <= c*G(N).
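
Just to make those quantifiers concrete, here's a throwaway C++ check using the F and G from your notes, with the claimed c = 16 and k = 1. (Spot-checking a range proves nothing by itself, of course; the point is only to see what the definition asserts.)

#include <iostream>

int main() {
    // F(n) = 2n^2 + 4n + 10, G(n) = n^2, claimed witnesses c = 16, k = 1
    const long long c = 16, k = 1;
    for (long long n = k; n <= 1000000; ++n) {
        long long f = 2 * n * n + 4 * n + 10;
        long long g = n * n;
        if (f > c * g) {
            std::cout << "F(n) > c*G(n) at n = " << n << "\n";
            return 1;
        }
    }
    std::cout << "F(n) <= 16*G(n) held for every n from 1 to 1000000\n";
    return 0;
}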

Let's see how we might come up with this definition.

Suppose we want to have a few rules to follow whereby we can say that the function "g" is no bigger than the function "f". We could go with the following:

> "g <= f provided that for all real numbers x, |g(x)| <= |f(x)|"

Now, the trouble with that definition, practically speaking, is that most typical functions are then incomparable. There are places where |x^2| is greater than |x| and places where |x^2| is less than |x| (for instance, |x^2| < |x| when 0 < x < 1, but |x^2| > |x| when x > 1). Also, it isn't useful at all to say that the function x -> 3x is any bigger than the function x -> 2x. These two functions are separated by a constant multiple, and that could be an artifact of how the code is written and what our definition of a "single instruction" is -- a constant factor can be corrected by getting hardware that's slightly faster, or a compiler that's slightly better. The important part is that the constant is not affected by the input -- it's a constant multiple no matter whether the input is small or large.
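
To make the constant-factor point concrete, here's a hypothetical pair of functions (my own example): both are linear, but one does roughly twice the work of the other, and we don't want our definition to treat them as different.

#include <vector>

// Walks the data once: roughly one unit of work per element.
long long sum_once(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

// Walks the data twice: roughly two units of work per element,
// a constant multiple of sum_once no matter how big v gets.
long long sum_twice(const std::vector<int>& v) {
    long long a = 0, b = 0;
    for (int x : v) a += x;
    for (int x : v) b += x;
    return (a + b) / 2;
}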

So we could have a new definition: two functions are considered "equal" if they're a constant factor apart. We could say that g "equals" f if there's some value c where g(x)/f(x) = c for all x. Well, that definition sucks, because then x and x+1 are considered to be unequal: their ratio (x+1)/x = 1 + 1/x never settles on any single constant! Ugh. So let's put that definition aside, for now, and try to make a definition for "g <= f".

> "g <= f provided that there's number c where, for all real numbers x, |g(x)/f(x)| <= c."

That's pretty good. Our previous definition, "g <= f provided that for all real numbers x, |g(x)| <= |f(x)|", is the same as our new definition, except that the previous definition insists that the value c must be 1.

Now, this definition still has a problem: it only works if the function f never hits zero! If f(x) = x, we can't even say that f <= f, because f(0)/f(0) is not defined. And worse, if g(x) = x^2, we can't even hope to say that f <= g, because f(x)/g(x) = x/x^2 = 1/x grows arbitrarily large when you're close to zero.

But it's an otherwise worthwhile start, because it says that there's a horizontal line which the graph of |g(x)/f(x)| cannot cross.

Since we're only interested in the behavior of functions as their inputs grow arbitrarily large, we can forgive their indiscretions for small values. So we put a limit on our real numbers by saying that x must be greater than 10.

> "g <= f provided that there's number c where, for all real numbers x where x is greater than 10, |g(x)/f(x)| <= c."

Now this is a pretty effective definition. If f(x) = x^p and g(x) = x^q, we can say that f <= g if p <= q. But why pick 10? That's such an arbitrary number, and we could think of functions (like f(x) = (x-11)^2 versus g(x) = x) where the definition breaks down: f hits zero at x = 11, inside our range, so the ratio g(x)/f(x) isn't even defined there. So instead of using 10, or 11, or some other number, we say "some number exists":

> "g <= f provided that there's number c, and also some other number k, where, for all real numbers x where x is greater than k, |g(x)/f(x)| <= c."

And we don't really use the notation "g <= f". We say that "g is O(f)". I'm not a fan of the notation "g(x) = O(f(x))", but you see it a lot.
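
And to tie this back to your notes: take g(N) = 2N^2 + 4N + 10 and f(N) = N^2. The witnesses c = 16 and k = 1 satisfy the definition, which is exactly what your professor's proof produced. So g is O(f), i.e., 2N^2 + 4N + 10 = O(N^2).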
