Sorry if this sounds silly, but I am having a hard time understanding what "n" is when talking about Big-O. I know "n" is the data size, but what does that really mean? For example, in:

```java
public static boolean isArrayOver100(String[] args) {
    if (args.length > 100)
        return true;
    return false;
}
```

What is "n" there??

3 Contributors · 2 Replies · 3 Views · 8 Years Discussion Span · Last Post by BestJewSinceJC

It's funny -- I had the same confusion a little while ago. I am fairly sure this is how it works --> algorithms have different runtimes based on their designs, and each of those runtimes can be roughly estimated depending on the algorithm type. Each of these estimations can be denoted by "Big Oh" notation:

O(1) means that the algorithm is constant. This means that the size of the data it is dealing with does not affect its speed. For instance, if you need to determine whether a string's first character is an 'A', the speed will be the same for any string size -- it doesn't matter whether the string is 2 characters or 5,000 characters long -- it still will be able to execute this command at the same speed.
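To make that concrete, here's a little sketch (the class and method names are just my own, for illustration):

```java
public class ConstantTime {
    // O(1): looks only at the first character, no matter how long s is
    public static boolean startsWithA(String s) {
        return !s.isEmpty() && s.charAt(0) == 'A';
    }

    public static void main(String[] args) {
        System.out.println(startsWithA("Apple"));  // true
        System.out.println(startsWithA("banana")); // false
    }
}
```

Here "n" would be the string's length -- and notice it never appears in the method's work at all.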

O(n) means that the algorithm is linear. This means that the speed will vary directly according to the size of the data. For instance, if you are using a for loop to run through a data set, you are going to have a linear speed, because the set size directly affects the speed (there will be a difference between the case of a 2-element array and a 5000-element array --> the latter will involve 4998 more iterations).
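A quick sketch of that (again, the names are just made up for the example):

```java
public class LinearTime {
    // O(n): the loop visits every element exactly once,
    // so the work grows directly with the array's length (n)
    public static int countMatches(int[] data, int target) {
        int count = 0;
        for (int value : data) {
            if (value == target) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 3, 7, 3};
        System.out.println(countMatches(data, 3)); // prints 3
    }
}
```

Doubling the array's length doubles the number of loop iterations -- that's what "linear" means here.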

O(n^2) means that the algorithm is quadratic. Usually this means that the algorithm iterates through the data using an outer and an inner loop to accomplish a goal. In this case, the amount of work will be n x n, or n^2. For instance, suppose you wanted to print a box of asterisks on the screen based on a user input -- if the user enters an 8, then an 8x8 box needs to be printed. This involves 2 nested for-loops that each run from 1-8. If the speed of one for loop is O(n), then nested together they will be O(n^2). Alternatively, it can also mean that some quadratic operation other than nested loops is occurring, which still causes the speed to depend on the size of the data twice over, or n x n (n^2).
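Here's that asterisk-box idea as actual code (my own sketch of it):

```java
public class QuadraticTime {
    // O(n^2): the outer loop runs n times, and for each of those
    // the inner loop runs n times -> n * n appends in total
    public static String asteriskBox(int n) {
        StringBuilder sb = new StringBuilder();
        for (int row = 0; row < n; row++) {
            for (int col = 0; col < n; col++) {
                sb.append('*');
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(asteriskBox(3)); // prints a 3x3 box of asterisks
    }
}
```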

O(n^3) means that the algorithm is cubic. This follows the same idea as O(n^2), but in this case there are likely 3 nested loops, or some cubic operation is occurring that causes a speed of n x n x n.
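A textbook cubic example (not from the post above, just my own illustration) is naive multiplication of two n x n matrices -- three nested loops, each running n times:

```java
public class CubicTime {
    // O(n^3): three nested loops over n give n * n * n multiplications
    public static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int[][] identity = {{1, 0}, {0, 1}};
        int[][] m = {{2, 3}, {4, 5}};
        int[][] result = multiply(identity, m); // result equals m
        System.out.println(result[0][0] + " " + result[1][1]); // prints 2 5
    }
}
```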

O(log n) means that the algorithm is logarithmic. Logarithmic algorithms are slower than constant ones, but they are more efficient than O(n) ones. These algorithms tend to split the data set down as they work. Take a binary search, for instance. If you have the numbers 1-100, it guesses 50 first, then checks high / low, and guesses 25 or 75 accordingly. It continues to halve the data until the search is concluded. In general, algorithms that are able to break down a set so that the goal can be achieved without having to hunt through one element at a time are O(log n) speed.
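The binary search described above looks roughly like this in code (my own sketch):

```java
public class LogTime {
    // O(log n): every comparison throws away half of the remaining
    // range, so a sorted array of 100 elements needs at most ~7 checks.
    // Requires the array to be sorted in ascending order.
    public static int binarySearch(int[] sorted, int target) {
        int lo = 0;
        int hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // midpoint of the current range
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // guessed too low
            else hi = mid - 1;                      // guessed too high
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(binarySearch(data, 23)); // prints 5
        System.out.println(binarySearch(data, 4));  // prints -1
    }
}
```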

O(n log n) is another notation, for algorithms that do touch every element but only do a logarithmic amount of splitting-type work per element (put another way: a linear amount of work at each of log n levels). O(n log n) algorithms are thus slower than O(n) ones but faster than O(n^2) ones. They come up more in advanced Java topics, like heapsort and quicksort.
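Besides heapsort and quicksort, merge sort is another O(n log n) sort, and it shows the idea most clearly: the array is halved log n times, and each level of halving does O(n) merging work. A sketch (my own, not from the post):

```java
import java.util.Arrays;

public class NLogNTime {
    // O(n log n): log n levels of splitting, O(n) merge work per level
    public static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a; // base case: already sorted
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        // merge the two sorted halves in one linear pass
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 8, 1, 9})));
        // prints [1, 2, 5, 8, 9]
    }
}
```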

So now, back to your question:

```java
public static boolean isArrayOver100(String[] args) {
    if (args.length > 100)
        return true;
    return false;
}
```

This code is constant-time -- you never traverse the elements of the array. Regardless of the array's size, your code does the same amount of work. Thus, it is O(1).

Any Questions?


"O(n) means that the algorithm is linear. ... O(n^2) means that the algorithm is quadratic. ... O(n^3) means that the algorithm is cubic. ..."

To follow up on that, the essential characteristic to remember is that if an algorithm is "linear", then if it takes 500 steps for an input of size 25, for an input of size 50 the same algorithm is going to take on the order of 1000 steps. I.e. the algorithm's work increases linearly with the input size. The same concept applies to the other things kvass described.
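You can even see that scaling by instrumenting a loop with a step counter (a toy sketch of my own -- here each element costs exactly one "step"):

```java
public class LinearScaling {
    // counts the "steps" a simple linear pass takes for input size n
    public static int stepsFor(int n) {
        int steps = 0;
        for (int i = 0; i < n; i++) {
            steps++; // one unit of work per element
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(stepsFor(25)); // prints 25
        System.out.println(stepsFor(50)); // prints 50 -- doubling n doubles the steps
    }
}
```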

Also, I have qualms with the way kvass described O(n log n) algorithms, but I don't think he was necessarily wrong; you just need to research them yourself in textbooks to gain a sufficient understanding. And you need to look at recursion trees.

