Hello,

I'm trying to understand Big O notation, specifically to understand how to calculate it for a given program. I have my lecture notes but I don't exactly get what they're saying.

I understand that there is a formal definition of Big O. For example:

`F(N) = O(G(N))` is read as "F of N is Big O of G of N"

Then my lecture notes bring me a confusing line:

F(N) = O(G(N)) if there exist two constants c and k such that F(N) <= cG(N) for all N >= k

Can someone explain this line to me?

My professor went ahead and supplied a proof by example, which I absolutely do not understand. It says:

Suppose F(N) = 2N^2 + 4N + 10

F(N) = O(G(N)) where G(N) = N^2

PROOF:

F(N) = 2N^2 + 4N + 10

F(N) <= 2N^2 + 4N^2 + 10N^2 for N >= 1 (Why did he turn every term into an N^2 term? I know he didn't multiply by N^2, because then the first term would be 2N^4 and the second would be 4N^3)

F(N) <= 16N^2

F(N) <= 16G(N), where c = 16 and k = 1
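To convince myself that this inequality actually holds, I wrote a little C++ check (my own code, not from the notes; the values c = 16 and k = 1 come from the proof above):

```cpp
#include <cassert>

// F(N) = 2N^2 + 4N + 10, the function from the proof
long long F(long long n) { return 2 * n * n + 4 * n + 10; }

// G(N) = N^2
long long G(long long n) { return n * n; }

// Returns true if F(N) <= c * G(N) for every N from k up to limit
bool boundHolds(long long c, long long k, long long limit) {
    for (long long n = k; n <= limit; ++n)
        if (F(n) > c * G(n)) return false;
    return true;
}
```

Running `boundHolds(16, 1, 100000)` comes back true, so at least numerically the claim F(N) <= 16G(N) for N >= 1 checks out. (Of course the proof covers all N, not just the range I tested.)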

I understand that there are different examples of Big Oh's:

O(1) for constant complexity

O(N) for linear complexity

O(N^2) for quadratic complexity

I understand how to get those three for a given program.
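For example, these are the kinds of C++ snippets I can already classify on my own (my examples, not my professor's):

```cpp
#include <cstddef>
#include <vector>

// O(1): a constant number of operations, regardless of input size
int firstElement(const std::vector<int>& v) {
    return v.empty() ? 0 : v[0];
}

// O(N): one pass over the input -- the loop body runs N times
long long sum(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v) s += x;
    return s;
}

// O(N^2): a loop nested inside a loop over the same input --
// the inner body runs N(N-1)/2 times, which is quadratic in N
long long pairCount(const std::vector<int>& v) {
    long long count = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            ++count;
    return count;
}
```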

But for these three:

O(2^N)

O(LOG N)

O(N LOG N)

I don't understand :confused:
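The closest I've come for O(LOG N) is binary search: the search range halves every iteration, so the loop body runs about log2(N) times. Here's my own attempt at writing it (not from my notes), though I still can't connect it back to the c and k definition:

```cpp
#include <cstddef>
#include <vector>

// Binary search over a sorted vector.
// The range [lo, hi) halves each iteration, so the loop runs
// roughly log2(N) times -> O(LOG N)
bool contains(const std::vector<int>& sorted, int target) {
    std::size_t lo = 0, hi = sorted.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) return true;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return false;
}
```

I've also read that sorting algorithms like merge sort are O(N LOG N) (doing a log-depth process N times?), but I can't derive that myself either.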

Note that when I say "for a given program", I mean C++ code. My professor also gave examples of Big O based on F(N), which I don't get either ><

F1(N) = 10N + 25N^2, which is O(N^2)

F2(N) = 20N LOG N + 5N, which is O(N LOG N)

F3(N) = 12N LOG N + 0.05N^2, which is O(N^2)

F4(N) = N^1/2 + 3N LOG N, which is O(N LOG N)
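The only way I've found to make sense of the F1 example is numerically: if F1(N) = O(N^2), then the ratio F1(N)/N^2 should stay below some constant c for all N past some k. This is my own sanity check, not anything from the notes:

```cpp
#include <algorithm>
#include <cassert>

// F1(N) = 10N + 25N^2, which my notes claim is O(N^2)
double F1(double n) { return 10 * n + 25 * n * n; }

// Largest value of F1(N)/N^2 seen for N = 1, 10, 100, ..., limit.
// If F1 really is O(N^2), this stays bounded as limit grows.
double maxRatio(double limit) {
    double worst = 0.0;
    for (double n = 1; n <= limit; n *= 10)
        worst = std::max(worst, F1(n) / (n * n));
    return worst;
}
```

The ratio starts at 35 for N = 1 and shrinks toward 25 as N grows, so any c >= 35 with k = 1 seems to work, which matches the pattern of the professor's proof (the highest-power term dominates).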

I'm basically copying my lecture notes here in hopes of understanding what they mean. Please help :'(