Thanks for the explanation of quicksort.
Can you explain bubble sort and merge sort?

Bubble sort takes O(n^2) time and merge sort takes O(n log n) time.

Actually, standard merge sort takes O(n log n) time; it is only certain in-place variants (which merge without an auxiliary array) that take O(n (log n)^2).
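To make the O(n^2) figure for bubble sort concrete, here is a minimal Java sketch (the method name bubbleSort and the in-place int[] handling are just illustrative choices, not something from this thread):

    // Repeatedly compare adjacent elements and swap them if they are out of
    // order; after the i-th pass the i largest elements sit in their final
    // positions at the end of the array.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];      // swap a[j] and a[j + 1]
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

In the worst case the nested loops perform roughly n(n - 1)/2 comparisons, which is where the O(n^2) comes from. Merge sort instead splits the array in half, sorts each half recursively, and merges the two sorted halves, giving the O(n log n) bound mentioned above.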

How would I write a simple method for encryption/decryption software in Java (JDK 6.0)?

sum = 0;
for (i = 0; i < n; i++)
    for (j = 0; j < i; j++)
        sum++;

What is the actual running time of this code?

Not positive, but it's probably Theta(n^2).
The reason I'd say that is that the outer loop runs n times, and the inner loop runs i times on the i-th iteration of the outer loop, so the total count is 0 + 1 + 2 + ... + (n - 1) = n(n - 1)/2, which is Theta(n^2).
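If you want to sanity-check that count, here is a tiny Java program (purely illustrative; the class name LoopCount and n = 10 are arbitrary) that runs the loops and compares the result to n(n - 1)/2:

    public class LoopCount {
        public static void main(String[] args) {
            int n = 10;  // arbitrary sample size
            int sum = 0;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < i; j++) {
                    sum++;
                }
            }
            // The inner loop body runs 0 + 1 + ... + (n - 1) times in total,
            // so this prints true: 45 == 10 * 9 / 2.
            System.out.println(sum == n * (n - 1) / 2);
        }
    }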

Haha.

Wow, this thread was awesome. Better than the instruction I've received in my classes.

Anyway, I was wondering if someone could check whether I did this correctly. For the method:

private void expandStack() {
    int[] tempArray = new int[StackContents.length * 2]; // count as 1 instruction
    for (int i = 0; i < StackContents.length; i++) {
        tempArray[i] = StackContents[i]; // count as 1 instruction
    }
    StackContents = tempArray; // count as 1 instruction
}

So the cost of the algorithm can be expressed as Theta(2 + n): since the loop is not nested, it contributes one instruction per element of the stack, plus the two instructions outside it. Since we ignore constants, I can just say the worst-case cost is:

(2 + n), which is Theta(n)

And to prove this, I can pick a lower bound of 1 (cL) and an upper bound of 5 (cU), where n is, let's say, 3.

1(2 + 3) <= 2 + 3 <= 5(2 + 3)

If this is right, then I'm starting to understand this notation.

Your answer is right, but your "proof" is quite nonsensical. You're just saying that 1*5 <= 5 <= 5*5? What does that even say? You have to show that the inequality holds for _all_ n greater than some base value, not for some particular value of n. And your inequality is nonsense: you just plugged n = 3 into 1*(2+n) <= (2+n) <= 5*(2+n), which is a true but irrelevant fact. Maybe you meant to plug n = 3 into 1*n <= (2+n) <= 5*n. Alas, you'd still need to consider the case where n = 4, and the case where n = 5, and so on, but you can't do that, because you'd get bored after checking the case n = 362 and you'd still have infinitely many cases to check. So you might say something like this instead:

"And to prove that n + 2 is in Theta(n), we note that 1n <= n + 2 <= 2n, when n >= 3."

Maybe it's necessary to prove the inequality. I'd say it's obviously true, but you might want to state that the inequality is satisfied when n = 3 and then that the slopes of each part (1, 1, and 2) also satisfy a similar inequality (1 <= 1 <= 2), which means the inequality n <= n + 2 <= 2n will remain satisfied for greater values of n. Another way to prove the inequality would be to say that n <= n + 2 is obviously true (subtract n from both sides) and n + 2 <= 2n is true (subtract n from both sides to get 2 <= n, which is satisfied because we're only considering values of n greater than or equal to 3).

And of course maybe your teacher wants you to be more explicit about the definition of theta notation by saying that you're picking constants 1 and 2 for the lower and upper bounds and that we'll call them cL and cU, but that's just because you're in an introductory class. (If this weren't an introductory class, after all, we'd say that n + 2 is "obviously" in Theta(n) and that's that; because "everybody" knows that a polynomial of degree k is in Theta(n^k).)

Okay, let me try again with this problem:

Prove that n^2 - 2n + 1 is Theta(n^2)

First, what theta notation says is that there exist constants c1 (C subscript L), c2 (C subscript U), and n0 (n subscript 0) such that:

0 <= c1*g(n) <= f(n) <= c2*g(n), for all n > n0

With the definition, I can now apply it and say:

c1(n^2) <= n^2 - 2n + 1 <= c2(n^2)

Divide by n^2, which leaves me with:

c1 <= 1 - 2/n + 1/n^2 <= c2

Now that I have that, since I know there exist n0, c1, and c2, I can test with numbers. I can choose:

n = 4, which makes the middle expression 1 - 2/n + 1/n^2 equal to 9/16.

Now, I can choose a lower bound of 1/2 and an upper bound of 3/4.

Which means:

1/2 <= 1 - 2/n + 1/n^2 <= 3/4, when n = 4, n0 = 4

Is this better?

Also, one thing my classmates and I don't understand is what n0 (pronounced "n not") really represents. Does it really have to be noted that it exists?

No, you're doing the proof backwards! You don't "know" that c1, c2, and n0 exist such that c1 <= 1 - 2/n + 1/n^2 <= c2 for all n >= n0 unless you prove it! You have to _show_ that values exist that could be used for c1, c2, and n0, such that that statement is true. And this is easy to do: pick values for c1, c2, and n0, and then show that the inequality is true whenever n > n0.

The fact that the inequality is true when n = 4 is completely irrelevant, because it does not mean that the inequality is true when n = 5, or n = 6, and so on. You have to show the inequality is correct with algebraic manipulation (or perhaps some other line of reasoning).
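For example, here's one way the finished proof could go (the particular constants c1 = 1/4, c2 = 1, and n0 = 2 are just one workable choice, not the only one). Note that n^2 - 2n + 1 = (n - 1)^2, and:

(1/4)n^2 <= (n - 1)^2 <= n^2, for all n >= 2

The upper half holds because (n - 1)^2 <= n^2 whenever n >= 1/2, and the lower half holds because n - 1 >= n/2 whenever n >= 2, so (n - 1)^2 >= (n/2)^2 = n^2/4. Since both halves hold for every n >= 2, this single algebraic argument covers all the infinitely many cases at once, which is exactly what plugging in n = 4 cannot do.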

When using big O notation and its relatives, we're only interested in the relationship between functions as their inputs grow larger and larger. We don't care about the behavior of the functions when the inputs are very small. So, when using theta notation, we'd like to say that the functions remain within constant multiples of one another. But suppose we were comparing the functions n and n - 2. Is n - 2 in Theta(n)? Well, you'd note that when n = 2, the ratio |n - 2|/n is zero. So you might say, "Well gee, the ratio of these functions isn't bounded between two positive constants." But we don't care -- we care about the behavior as n grows larger and larger. So we can say, "We don't care about low values."

You might then make the definition as follows: f is in Theta(g) if we can pick some boundaries c1 and c2 such that the ratio f(n)/g(n) is between c1 and c2 for any value of n greater than 10. That would let us say that the function f(n) = n - 2 is in Theta(n). And it would let us say the function f(n) = n - 9.9 is in Theta(n). But n - 10.1 would not be! Why should we care about the values of the functions where n is between 10 and 11? Or between 11 and 12, or between 100000 and 100005? We don't -- we only care about the behavior in the long run, when n gets larger and larger.

So we say, "You can choose the lower bound which you care about." If you want it to be 10, fine. If you want it to be 10000000, fine. It doesn't matter whether you're looking at the ratios of the functions on the interval [10, infinity) or [500000, infinity) if you're interested in the behavior of the functions as they grow to infinity.

And so you get to pick three constants: c1, c2, and n0, and show that the ratio of your two functions is between c1 and c2 when their inputs are greater than n0.

By the way, it's spelled "n naught". Or just "n zero".

What is the meaning of estimating a computer program?

I quite agree. Very strongly worded.

How do you estimate the efficiency of a computer program?

Read this entire thread, since that is obviously the topic. If you want examples of algorithms and their analysis, there are plenty on the internet. Google it.

How would you express linear and quadratic running times with big O notation?

Seriously? I mean, I'm pretty bad at algorithm analysis, I'm not going to lie... but read some stuff: http://en.wikipedia.org/wiki/Quadratic_function

I'm assuming you know what "linear" means. There really isn't a way to answer your question without just giving you the answer, so that's where I'll leave it.

How do you calculate time complexity?

How do you calculate the time complexity of any algorithm or program?
I need help with this.

Which tool is used to calculate time complexity?

Which type of tool is used to calculate time complexity?
Where do I find it? Help me. And what is the difference between matlab and mathlab?

Your brain.

Thanks for explaining it in such a simple way... I understood it in 5 minutes.

Hi friends,

I am doing an MSc-IT from KSOU.
I have exams on the 28th, and I am not understanding the complexity concept and Big O notation (best, average, and worst case). I need help from you all.

Manju

oops.............

@Rashakil Fol and @Narue: you both clearly have super strong fundamentals! Thanks a lot for giving such a beautiful explanation of complexity that I could never have found in any algorithms book. Looking forward to more brainstorming sessions! Thanks a lot, and best wishes!

Is O(1) run-time complexity low? And what exactly does it mean?

I have a big problem with algorithms. Can someone help me?

Question: find the Theta notation for the following.

b) t(n) = (n + 1)(n + 3)/(n + 2)

Theta means a "tight" bound in the sense that you bound something above and below by a growth order.

Basically, if you can find a growth order such that t(n) is in big-Oh of the order, as well as in big-Omega of the order, then t(n) is by definition in Theta of that order.

Hint: Let n -> infinity, as though you were taking a limit. What does t(n) look like in the limit of large n?

Hint 2: If you want to go beyond that, take your answer from the first hint and prove it is correct by finding constants a, b, and m such that a*f(n) <= t(n) <= b*f(n) for all n >= m. You don't have to find tight a and b... just make them positive and you're good to go.
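For instance, carrying out Hint 2 with one possible choice of constants (the values a = 1, b = 2, and m = 2 below are just an example, not the only valid pick): expanding gives t(n) = (n^2 + 4n + 3)/(n + 2), and

n <= t(n) <= 2n, for all n >= 2

because t(n) - n = (2n + 3)/(n + 2) > 0 gives the lower half, and 2n - t(n) = (n^2 - 3)/(n + 2) >= 0 once n^2 >= 3 gives the upper half. So t(n) is in Theta(n), which matches what the limit in Hint 1 suggests.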

a := 0
for i := 4 to n do
{
    a := i * i;
}
return(a)

What is the frequency count of this algorithm? Please explain.

Can you describe an algorithm that is O(nm)?

I'm guessing it would be something like traversing a two-dimensional array. The two loops needed to do that are throwing me off, however. They make me think that it could be O(n^2).
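For what it's worth, traversing an n-by-m 2D array is indeed the standard example. A rough Java sketch (the names grid and sumGrid are just for illustration) shows why the count is n*m rather than n^2:

    // grid has n rows and m columns; the loop body runs exactly n*m times.
    static long sumGrid(int[][] grid) {
        long total = 0;
        for (int i = 0; i < grid.length; i++) {          // n iterations
            for (int j = 0; j < grid[i].length; j++) {    // m iterations each
                total += grid[i][j];
            }
        }
        return total;
    }

The two nested loops only collapse to O(n^2) in the special case where m happens to equal n; in general the inner loop's bound m is independent of the outer loop's bound n, so the iteration counts multiply to give O(nm).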
