Doing some revision for a university exam I'm sitting at the end of the week. I've got my head around most of the Big O material, I think, apart from working out the Big O of equations like the ones below.

T(n) = 10000^40
T(n) = 45n^2 + 3n/1000 + 18(log n)

Could anyone explain the logic behind how you would come to an answer for these?


Start by looking at the definition of the order notation you're using (say, "Big O"). Then see if you can satisfy the definition with the given equation. There's not much else to say about it.

For example:
f(n) ∈ O(g(n)) iff there exist positive real numbers k and n0 such that f(n) ≤ k|g(n)| for all n ≥ n0.

So, if f(n) = 1234, we can see that 1234 ≤ 1235|1| for all n ≥ 1, and thus f(n) ∈ O(1).
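To see the definition in action, here's a quick Java sketch (entirely my own illustration, not from any text) that spot-checks f(n) ≤ k|g(n)| for the constants chosen above. A loop can't prove the "for all n ≥ n0" part, but it will catch a bad guess for k:

```java
// Spot-check of the Big O definition for f(n) = 1234 against g(n) = 1,
// using the witnesses k = 1235 and n0 = 1 from the example above.
public class BigODefinitionCheck {
    static double f(long n) { return 1234; } // the constant function
    static double g(long n) { return 1; }    // the candidate bound

    public static void main(String[] args) {
        double k = 1235;
        long n0 = 1;
        boolean holds = true;
        for (long n = n0; n <= 1_000_000; n *= 10) {
            if (f(n) > k * Math.abs(g(n))) { holds = false; break; }
        }
        System.out.println("f(n) <= k*|g(n)| held for all sampled n: " + holds);
    }
}
```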

The example made no sense to me :(

That example doesn't make much sense to me either, but perhaps this will give you more insight into Big O notation. It's from the book "Data Structures and Algorithms in Java":

You might think that in comparing algorithms you would say things like “Algorithm A is twice as fast as algorithm B,” but in fact this sort of statement isn’t too meaningful. Why not? Because the proportion can change radically as the number of items changes. Perhaps you increase the number of items by 50%, and now A is three times as fast as B. Or you have half as many items, and A and B are now equal. What you need is a comparison that tells how an algorithm’s speed is related to the number of items.

Insertion in an Unordered Array: Constant

Insertion into an unordered array is the only algorithm we’ve seen that doesn’t depend on how many items are in the array. The new item is always placed in the next available position, at a[nElems], and nElems is then incremented. Insertion requires the same amount of time no matter how big N—the number of items in the array—is. We can say that the time, T, to insert an item into an unsorted array is a constant K:

T = K

In a real situation, the actual time (in microseconds or whatever) required by the insertion is related to the speed of the microprocessor, how efficiently the compiler has generated the program code, and other factors. The constant K in the preceding equation is used to account for all such factors. To find out what K is in a real situation, you need to measure how long an insertion took. (Software exists for this very purpose.) K would then be equal to that time.
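For reference, the insertion being described looks roughly like this in Java (a minimal sketch of my own; only the names a and nElems come from the book's wording):

```java
// Unordered-array insertion: O(1), because the two steps below never
// depend on how many items the array already holds.
public class UnorderedArray {
    private long[] a;   // storage
    private int nElems; // number of items currently held

    public UnorderedArray(int max) {
        a = new long[max];
        nElems = 0;
    }

    public void insert(long value) {
        a[nElems] = value; // place the new item in the next free slot
        nElems++;          // bump the count
    }
}
```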

Linear Search: Proportional to N

We’ve seen that, in a linear search of items in an array, the number of comparisons that must be made to find a specified item is, on the average, half of the total number of items. Thus, if N is the total number of items, the search time T is proportional to half of N:

T = K * N / 2
As with insertions, discovering the value of K in this equation would require timing a search for some (probably large) value of N and then using the resulting value of T to calculate K. When you know K, you can calculate T for any other value of N.

For a handier formula, we could lump the 2 into the K. Our new K is equal to the old K divided by 2. Now we have

T = K * N

This equation says that average linear search times are proportional to the size of the array. If an array is twice as big, searching it will take twice as long.
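A matching sketch of the linear search (again my own code, not the book's), where on average the loop runs about N/2 times before hitting the key:

```java
// Linear search: T = K * N on average, once the factor of 1/2 is
// folded into the constant K.
public class LinearSearchDemo {
    static int linearSearch(long[] a, long key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i; // found
        }
        return -1; // not found: all N items were examined
    }

    public static void main(String[] args) {
        long[] a = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(a, 9)); // prints 2
    }
}
```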

Binary Search: Proportional to log(N)

Similarly, we can concoct a formula relating T and N for a binary search:

T = K * log2(N)

As we saw earlier, the time is proportional to the base 2 logarithm of N. Actually, because any logarithm is related to any other logarithm by a constant (3.322 to go from base 2 to base 10), we can lump this constant into K as well. Then we don’t need to specify the base:

T = K * log(N)
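And a sketch of the binary search (my own code; the array must already be sorted), where each pass halves the remaining range, giving at most about log2(N) comparisons:

```java
// Binary search: T = K * log(N), since the search range shrinks by
// half on every iteration of the loop.
public class BinarySearchDemo {
    static int binarySearch(long[] a, long key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;      // midpoint without int overflow
            if (a[mid] == key) return mid;  // found
            if (a[mid] < key) lo = mid + 1; // key must be in the upper half
            else hi = mid - 1;              // key must be in the lower half
        }
        return -1; // range is empty: key absent
    }

    public static void main(String[] args) {
        long[] a = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(a, 7)); // prints 3
    }
}
```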

Don’t Need the Constant

Big O notation looks like the formulas just described, but it dispenses with the constant K. When comparing algorithms, you don’t really care about the particular microprocessor chip or compiler; all you want to compare is how T changes for different values of N, not what the actual numbers are. Therefore, the constant isn’t needed.

Big O notation uses the uppercase letter O, which you can think of as meaning “order of.” In Big O notation, we would say that a linear search takes O(N) time, and a binary search takes O(log N) time. Insertion into an unordered array takes O(1), or constant time. (That’s the numeral 1 in the parentheses.)

Figure 2.9 graphs some Big O relationships between time and number of items. Based on this graph, we might rate the various Big O values (very subjectively) like this: O(1) is excellent, O(log N) is good, O(N) is fair, and O(N^2) is poor. O(N^2) occurs in the bubble sort and also in certain graph algorithms that we’ll look at later in this book.

The idea in Big O notation isn’t to give actual figures for running times but to convey how the running times are affected by the number of items. This is the most meaningful way to compare algorithms, except perhaps actually measuring running times in a real installation.

I hope this helps. The book is really good, by the way; for those planning to buy it, don't hesitate. (My opinion, of course.)

Be careful about the quote, Slavi: order notation doesn't necessarily have to work with algorithms. For example, it's often seen representing "some" error in the sums that appear in calculus, or in problems like the one presented. Order notation's involvement with algorithms is the same as +'s involvement in chemistry.

If you're a programmer, then sure, you can memorise the complexity of algorithms and that will usually suffice. But with problems in CS, or problems like the one the OP asked, you'll need to use the definitions of order notation.

The example made no sense to me

Well, which part didn't make sense?

You have the definition: "f(n) ∈ O(g(n)) iff there exist positive real numbers k and n0 such that f(n) ≤ k|g(n)| for all n ≥ n0." And you have the proof that the definition is satisfied.

Here are some notes one of my classmates wrote, if you want a full explanation. Chapter 2 begins to talk about order notation.

It probably didn't make sense because I haven't covered such things in lectures.

The exam question wanted the time complexity of the above formulas. I'm guessing it's a lot simpler than proving they're right, and it's just a matter of identifying that they could be, say, linear time?

At which point I always thought you just found the worst bit of the algorithm, as in the case of:

T(n) = 10000^40 (??)
T(n) = 45n^2 + 3n/1000 + 18(log n) (Quadratic?)

You can't find the time complexity of formulas. As I've said, order notation doesn't imply algorithms. Unless, of course, the formula is an algorithm, in which case the worst-case time complexity is always O(1) (worst case Θ(1), for that matter). But that's not the case here.

However, you can take a formula and write its order notation, which is what the question is almost definitely asking.

I'm guessing it's a lot simpler than proving they're right, and it's just a matter of identifying that they could be, say, linear time?

In this case you can figure it out at a glance (don't "guess", though). However, you will only be able to do this with very simple T(n). For any non-trivial T(n), you will need to use the definition. Also, knowing the definition makes it very easy to figure it out at a glance.

Your school might be doing something strange and only asking you to handle trivial T(n) without understanding what order notation actually is. In that case they shouldn't be asking questions like this, since trivial T(n) will never be helpful to you.

In any case, here's how you can solve the questions without understanding what you're doing (if your school really insists):

(1) Find a function g(n) that should "eventually" become larger than T(n).

(2) You can take off any constants, and you only need to worry about the largest term in your function. What you're left with is the answer.

For example:

1) T(n) = 5: then we choose g(n) = 6. This "eventually" becomes bigger than T(n) when n is large enough (all n is large enough in this case). We multiply off the constant (6 in this case), and we get the final answer: (T(n) = 5) ∈ O(1), or "constant time" if T was the number of operations.

2) T(n) = 3n + 5: then we choose g(n) = 4n. g(n) becomes bigger when n > 5. We multiply off the constant, and we get the answer: (T(n) = 3n + 5) ∈ O(n), or "linear time" if T was the number of operations.

T(n) = 10000^40 (??)

I've done 2 examples of this already. One in my first post, and one in this post.

T(n) = 45n^2 + 3n/1000 + 18(log n) (Quadratic?)

As you can see, if you choose g(n) = 100n^2, g(n) quickly beats T(n). Multiply off the constant, and what are you left with?
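If you want to convince yourself numerically, here's a throwaway Java sketch (entirely my own; you wouldn't need this in an exam) that tabulates T(n) against g(n) = 100n^2:

```java
// Compares T(n) = 45n^2 + 3n/1000 + 18*log(n) with g(n) = 100n^2
// for a few sample values of n. The base of the log only shifts the
// constant, so natural log is fine here.
public class DominanceCheck {
    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n *= 10) {
            double t = 45.0 * n * n + 3.0 * n / 1000 + 18 * Math.log(n);
            double g = 100.0 * n * n;
            System.out.printf("n=%d  T(n)=%.3f  g(n)=%.1f  g>=T: %b%n", n, t, g, g >= t);
        }
    }
}
```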

Asked a colleague and fellow student yesterday what the solution to such questions would be; apparently we are just expected to identify the least efficient part of the given formula, based on the ordering of Big O.

For example:

T(n) = 45n^2 + 3n/1000 + 18(log n) would be O(N^2)

It seems quite simple compared to the solutions offered above; I'm very grateful that's the case!

It seems quite simple compared to the solutions offered above

Yes. The cost is that you're limited in the T(n) you'll be able to solve (for example, what if no part of the formula appears in your Big O ordering?), and you lose understanding of order notation. Whatever gets you marks, I suppose.
