Insertion into a set takes O(log n) time. What does that mean?

I have heard about big O notation, which describes how memory usage changes when the input changes.

But log n? n is probably the number of input values, but I don't get it.

Let's say I have 25 inputs; what is log 25, and with what base? Probably base 10? Could someone please give me a simple example and explain how this works?

thanks

Checking the entire vector will take O(n) time.

What does that mean, guys?

>Checking the entire vector will take O(n) time.
>What does that mean, guys?

The base is usually 2 when you refer to log n or n log n. Thus if n = 16, log n = 4, since 2^4 = 16. Searching a balanced binary tree often takes O(log n) time, since the depth of a tree containing n elements will be about log n.

If n = 16 and the algorithm's time is O(n log n), that would be roughly 16 * log 16 = 16 * 4 = 64 steps. Many good sorts take O(n log n) time. Traversing a linked list or vector will often take O(n) time, since you have to go through all n elements, where n is the length of the linked list or vector.
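
To make this concrete, here is a small sketch (my own illustration, not from the posts above) that counts the comparisons std::sort makes on a vector of n elements and the elements a plain linear scan touches. The counting comparator and the exact numbers printed are only illustrative; the point is that the sort's count grows like n log n while the scan's count is exactly n.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t n = 16;

    // Fill a vector with n values in reverse order.
    std::vector<int> v(n);
    for (std::size_t i = 0; i < n; ++i)
        v[i] = static_cast<int>(n - i);

    // Roughly O(n log n): count the comparisons std::sort actually makes.
    long long comparisons = 0;
    std::sort(v.begin(), v.end(),
              [&comparisons](int a, int b) { ++comparisons; return a < b; });

    // O(n): a linear scan touches each of the n elements exactly once.
    long long touched = 0;
    for (int x : v)
    {
        (void)x;       // the value itself doesn't matter here
        ++touched;
    }

    std::cout << "n = " << n
              << ", n*log2(n) = " << n * std::log2(static_cast<double>(n))
              << ", sort comparisons = " << comparisons
              << ", elements scanned = " << touched << '\n';
}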

Order notation is just a simplified mathematical description. O(n), or order-n, says that the runtime is "linear", i.e. roughly proportional to n (which could be the length of a list, the depth of a tree, the number of characters in an input, etc.).

We use order notation to give an idea of how efficient we are being with our code. If we wanted to be very precise, O(n) would come from a mathematical analysis of your program. For example, if you had the program:

#define n 100   // <whatever number>

int ctr = 0;
while (ctr < n)   // no stray ';' here: that would make an empty, infinite loop
{
     ctr++;
}

we could analyze the code as follows:

We do some constant amount of work to declare the counter, say k (this is just some constant representing the amount of work in the preprocessing). Each pass through the loop, we do another constant amount of work, a (i.e. incrementing ctr). Clearly, we loop n times, and we do a work each time, so the loop gives us a*n work. Thus the program does a total amount of work equal to "a*n + k". We see that this resembles a linear function of one variable. As such, we can bound the running time of the program above by some other linear function, say m*n (especially for large n). Thus, we say that the program is order-n, or O(n), because the amount of "work" done is bounded above by some linear function of n.

You can analyze programs very precisely (mathematically); however, the point of order notation is to give an upper bound without all the math (which is harder to interpret, and less useful for most purposes). Thus, you will likely be asked to do informal analysis (as I did for the above program), rather than derive actual mathematical formulas (using induction and such)...

A trick for informal order-notation analysis: when you have nested loops or a recursive definition, always start from the inside and work your way out!
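
For example, here is a sketch of that inside-out trick on two nested loops (my own fragment, written in the same style as the snippet above). The inner loop does some constant amount of work a on each of its n passes, and the outer loop runs it n times, so the total work is roughly a*n^2 plus constants, i.e. the fragment is O(n^2).

#define n 100   // <whatever number>, as in the earlier snippet

int nestedLoops()
{
    int work = 0;
    for (int i = 0; i < n; ++i)        // outer loop: n passes
    {
        for (int j = 0; j < n; ++j)    // inner loop: n passes per outer pass
        {
            ++work;                    // constant work, done n * n times in total
        }
    }
    return work;                       // n * n, hence O(n^2)
}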

Hey, can someone explain O(log n), not O(n)?

>Hey, can someone explain O(log n), not O(n)?
Clicky.

In general, saying that an algorithm runs in O(f(n)) time means that there exists a constant k such that, if you make n large enough, the run time for input of length n will never exceed k*f(n). The time unit in which the run time is expressed doesn't matter, as you can always compensate for a change of unit by changing the value of k. So, for example, if you change the unit from minutes to seconds, multiplying k by 60 will compensate for that change.

To understand the meaning of O(log n), you need only substitute log n for f(n). The base of the log doesn't matter, because changing the base only multiplies log n by a constant (log_a n = log_b n / log_b a), and you can compensate for the choice of base by scaling k appropriately.

In both examples (O(f(n)) and O(log n)), the value of k is not important; what is important is that it exists.

So, when I say that inserting a new element into an n-element data structure takes O(log n) time, I am saying the following:

1) Pick a base for the log.
2) Pick a time unit.
3) Once you have done that, I can find a value of k such that, for sufficiently large n, inserting an element into an n-element data structure takes no longer than k*log n, where time is measured in the unit chosen in (2) and the log is taken in the base chosen in (1).
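
As a rough illustration of that definition (my own sketch; it assumes, as is true of most standard library implementations, that std::set is a balanced binary search tree), we can count the comparisons a single insertion performs and check that it stays within a small constant multiple of log2 n as n grows:

#include <cmath>
#include <iostream>
#include <set>

// Comparator that counts how many comparisons the set performs.
struct CountingLess
{
    static long long count;
    bool operator()(int a, int b) const { ++count; return a < b; }
};
long long CountingLess::count = 0;

int main()
{
    for (int n : {1 << 10, 1 << 14, 1 << 18})        // n = 1024, 16384, 262144
    {
        std::set<int, CountingLess> s;
        for (int i = 0; i < n; ++i)
            s.insert(i);

        CountingLess::count = 0;
        s.insert(n + 1);                             // one insertion into an n-element set

        std::cout << "n = " << n
                  << ", log2(n) = " << std::log2(static_cast<double>(n))
                  << ", comparisons for one insert = " << CountingLess::count << '\n';
    }
}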

See, O(log n) describes an algorithm in which, as n (the input size) grows, the work needed grows only like log n.
The base of the log is 2, not 10.
Such algorithms are ones in which the problem is divided into two halves and only one of the two is pursued. Example: binary search (see the sketch below).
You can find more details on Google, Wikipedia, or Encarta.
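
Here is a minimal binary search sketch (my own example, not taken from the post above) showing the halving that makes it O(log n): each iteration discards half of the remaining range, so about log2 n iterations are enough.

#include <iostream>
#include <vector>

// Returns the index of 'target' in the sorted vector 'v', or -1 if it is absent.
// Each iteration halves the search range, so the loop runs O(log n) times.
int binarySearch(const std::vector<int>& v, int target)
{
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // midpoint of the current range
        if (v[mid] == target)
            return mid;
        else if (v[mid] < target)
            lo = mid + 1;               // keep only the upper half
        else
            hi = mid - 1;               // keep only the lower half
    }
    return -1;
}

int main()
{
    std::vector<int> v = {1, 3, 5, 7, 9, 11, 13, 15};   // 8 sorted elements
    std::cout << binarySearch(v, 11) << '\n';           // at most ~log2(8) = 3 halvings
}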

>See, O(log n) describes an algorithm in which, as n (the input size) grows, the work needed grows only like log n.
>The base of the log is 2, not 10.

If you think the base of the logarithm matters in O(log n), you do not understand big-O notation.
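
For what it's worth, here is a tiny illustration of why the base cannot matter (my own example): log2 n and log10 n differ only by the constant factor 1/log10 2 (about 3.32), and constant factors are exactly what the k in big-O absorbs.

#include <cmath>
#include <iostream>

int main()
{
    // log2(n) equals log10(n) divided by log10(2), i.e. a fixed constant multiple.
    for (double n : {16.0, 1024.0, 1e6})
        std::cout << "n = " << n
                  << ", log2(n) = " << std::log2(n)
                  << ", log10(n) / log10(2) = " << std::log10(n) / std::log10(2.0) << '\n';
}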
