Thank you for the quick reply. I have a long program with me; I wonder if it will take many days to calculate its complexity.

Any tutorial or website to help me? I want to get the time complexity of this algo using summation and big O. I really find it difficult to understand. Can you please help me solve it step by step?

add = 0;
for a = 1 to n do
    for b = 1 to a*a do
        for c = 1 to b do
            add = add + c;

anyone please help

Yes, there are tutorials and websites to help you. Try reading the rest of this thread...

I think in this case, the addition is driven by the for loops,
and there are 3 nested loops in your given algo.
An example of three nested loops is also present in this thread, or in a similar thread.

I suggest you read them first and you will be able to find a solution by yourself.

Yes, there are tutorials and websites to help you. Try reading the rest of this thread...

I can't find any website. Can you please direct me to one?

I think in this case, the addition is driven by the for loops,
and there are 3 nested loops in your given algo.
An example of three nested loops is also present in this thread, or in a similar thread.

I suggest you read them first and you will be able to find a solution by yourself.

Hello, yes I read it, but it's quite vague. Can you show me how to do it step by step? I've been searching the web but can't find a site that will help a dummy like me.

Any tutorial or website to help me? I want to get the time complexity of this algo using summation and big O. I really find it difficult to understand. Can you please help me solve it step by step?

add = 0;
for a = 1 to n do
    for b = 1 to a*a do
        for c = 1 to b do
            add = add + c;

anyone please help

Looks like n^5 to me. The outermost loop executes n times; the next one in executes on the order of a^2 times, which in the worst case is n^2; and the innermost one runs up to that same n^2 worst case.

Multiply these out and the total number of executions is on the order of n^5.
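
If you want the summation version: the add = add + c line runs

Sum from a=1 to n of ( Sum from b=1 to a^2 of b ) = Sum from a=1 to n of a^2(a^2 + 1)/2

times. The dominant term of that sum grows like n^5 (roughly n^5 / 10), which is Theta(n^5).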

Wondering if anyone could give me a hand solving this exercise:

Instructions:
Find the Big Oh for the number of times X := X + 1 is executed. You should justify your answer:

I := N
WHILE I >= 1 DO
BEGIN
    FOR J := 1 TO N DO
        X := X + 1
    I := [I/2]
END

The outer loop executes about log n times. The inner one runs n times for each outer-loop iteration. Multiplying gives the total # of iterations, n log n.

Yeah, I think the solution given for the program is correct. The outer loop divides its counter by 2 each time, hence it is log N, and the inner loop is straightforward: it's N.
So the solution is O(N log N).
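
If it helps to see it concretely, here is a rough sketch in Java (the class and variable names are just made up for illustration) that counts how many times the X := X + 1 line runs and compares the count with n * log2(n):

public class CountSteps {
    public static void main(String[] args) {
        for (int n = 10; n <= 1_000_000; n *= 10) {
            long count = 0;
            int i = n;
            while (i >= 1) {
                for (int j = 1; j <= n; j++) {
                    count++;                 // stands in for X := X + 1
                }
                i = i / 2;                   // the I := [I/2] step
            }
            System.out.println(n + ": " + count + " executions, n*log2(n) = "
                    + Math.round(n * Math.log(n) / Math.log(2)));
        }
    }
}

The count comes out to n * (floor(log2 n) + 1), which is the same order as n log n.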

Can someone help on this one please?

If I have this:

public static boolean find(int[] a, int k) {
  for (int i = 0; i < a.length; i++) {
    for (int j = 0; j < a.length; j++) {
      if ((a[i] + a[j]) == k) {
        return true;
      }
    }
  }
  return false;
}

Would the time complexity of this be 2n^2 therefore O(n^2)?

Am I counting it correctly? The if-statement condition inside the second FOR loop runs n times, and the second FOR loop's own test runs n times, so that's 2n; then the first FOR loop runs n times, giving (2n * n) operations?

Please help me:
how do I know where to begin with this subject? Please help!

Books, websites, tutorials?

Hey
Thanks for the clear explanation. I have a question here whose answer will clarify many things to me.

You say the complexity of this program is O(n). Can I say that it is Theta(n)? I don't see the clear distinction of when I need to use O, Omega or Theta.

I understand that given two functions O represents one function (actually O is a set with many functions) less than or equal to, while Omega represents greater than or equal to and Theta represents equal. I also have the information that worst case is denoted by O and best case by Omega. I hope you understand my question.

To reiterate: when do I use O, Omega and Theta when it comes to calculating complexity? I know how to come up with the "expression"; I need to know which symbol to prefix it with.

Regards
Dheeraj

Count the total number of fundamental operations the algorithm performs, relative to some value N. For example, take this function...

int factorial(int n) {
    int ret = 1;
    while (n != 0) {
        ret *= n;
        -- n;
    }
    return ret;
}

The function sets the value of ret to 1. This is one operation.

Then this function tests the condition of the while loop exactly 1 + n times. It runs the body of the while loop exactly n times. The body of the while loop performs exactly 2 fundamental operations: 'multiply ret by n', and 'decrement n'. The condition has one fundamental operation, comparison. Calling the function counts as a fundamental operation, too.

So we have the total number of fundamental operations being
1 + 1 + (1 + n) + 2 * n. (The sum is composed of the operations for calling the function, setting ret = 1, performing a total of 1 + n tests in the condition of the while loop, and a total of 2 * n operations in the body of the while loop.)

This simplifies to 3 + 3 * n fundamental operations, which is O(n).

(What do I mean by 'fundamental operation'? Any operation that takes an amount of time that is bounded by a constant.)
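
The same counting style works for nested loops. For example (just a sketch for illustration, not code from earlier in the thread):

public static int pairs(int n) {
    int count = 0;                      // 1 operation
    for (int i = 0; i < n; i++) {       // 1 init, n + 1 tests, n increments
        for (int j = 0; j < n; j++) {   // per outer iteration: 1 init, n + 1 tests, n increments
            count++;                    // runs n * n times in total
        }
    }
    return count;                       // 1 operation
}

Adding those up gives 3n^2 plus some lower-order terms; the exact constants don't matter, and the result is O(n^2).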

>You say the complexity of this program is O(n). Can I say that it is Theta(n).
Yes. If you calculate the O and Omega and they turn out to be the same, you have a tight bound and can use Theta.

>I dont see the clear distinction when I need to use O, Omega or Theta.
You use O when you want to say "The complexity won't be any greater". You use Omega when you want to say "The complexity won't be any less". And you use Theta when you want to say "The complexity will always be the same".
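
A concrete example might help (this is just a generic linear search sketch, not code from this thread):

public static int linearSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key) {
            return i;    // best case: the key is at index 0 and we stop after one comparison
        }
    }
    return -1;           // worst case: the key is absent and we make n comparisons
}

Its running time is O(n) (never more than about n comparisons) and Omega(1) (sometimes as little as one comparison). Those two bounds don't match, so you can't give a single Theta for the algorithm as a whole. The worst case on its own, though, is both O(n) and Omega(n), so the worst case is Theta(n).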

Hi
Thanks for the reply. So when you say calculate Theta and O, do you mean that, based on the definitions of Theta and O, we need to find constants which satisfy the conditions?

But for an algorithm, how is this done? I think we can only calculate the running time, right? How can we calculate two different functions for one algorithm? I am not clear on that.
Say, for example, that an algorithm runs in 3*n*n + 1 steps. From this, how do you conclude whether it is O or Theta?

This is my confusion.

Regards
Dheeraj

>How can we calculate two different functions for an algorithm. I am not clear on that.
Typically the best, average, and worst cases for an algorithm are calculated differently. So you could have three different functions based on those.
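
To take the 3*n*n + 1 example from the question: if an algorithm takes exactly 3n^2 + 1 steps on every input of size n, then for n >= 1 we have 3n^2 <= 3n^2 + 1 <= 4n^2, so its running time is both O(n^2) and Omega(n^2), and therefore Theta(n^2). The cases only matter when different inputs of the same size lead to different step counts.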

What about this:

MyTable1.First;
for i := 1 to MyTable.RecordCount do
begin
  ...
end;

Is it O(n) or O(n^2)? Because MyTable.RecordCount also has to count.

Dear All,
Kindly tell me, what is the relation between UML and OOAD, and what is the difference between them?
Thanks and Regards,
brijendra Mishra

UML has a time complexity of O(n^n) while OOAD has a time complexity of O(n!).

How could you say that UML has a time complexity of O(n^n) and OOAD O(n!)? What is the relation? I don't know much about this topic, but what kind of answer is this?
If you are not in the mood to give a correct answer, then why bother answering at all?
I am not as knowledgeable as you are, but that is no reason to make fun of someone's query.

Sorry, I assumed that since you were posting a software design question in a thread about time complexity, it was some kind of joke, so I played along.

Hi, I'm trying to find the dominant behaviour, which will determine the total run time, but I'm not sure if I'm on the right track. Here's what I have:

Function mystery(n)
    r = 0
    for i = 1 to n - 1 do
        for j = i + 1 to n do
            for k = 1 to j do
                r = r + 1
    return r
END

First thought:
The function sets the value of r to 0 (one operation). Then the first for loop runs from 1 to n - 1, so it runs its body exactly n - 1 times. The body contains two other for loops: 'j, which goes from i + 1 to n' and 'k, which runs from 1 to j'.

Other thought:
The outermost loop executes n - 1 times (worst case?). The next one in executes from i + 1 to n, i.e. n - i times, which is at most n - 1 (it depends on the first loop's counter?).
The addition is driven by the two inner for loops...


Any suggestions? I'd appreciate it...

UML has a time complexity of O(n^n) while OOAD has a time complexity of O(n!).

Couldn't have said it better...

I need an explanation of finding the order of growth of a recursive function. For example, what is the order of growth given that T(1) = 1 and T(n) = 2T(n/4) + 1 for n > 0?
What are the steps to find the ORDER OF GROWTH of ANY function?

What are the steps or methods for finding the order of growth of a function?

For any function?

Here's a tip: For any function f, f(n) = O(f(n)). It's true! He he he.

Now, given T(1) = 1 and T(n) = 2T(n/4) + 1, you can find the order of growth using the Master theorem.

One way, though, to think about the above problem intuitively, is to ask what the statement "T(n) = 2T(n/4)+1" really means. Another way of writing it is to say, "T(4n) = 2T(n) + 1". That is, every time you multiply n by four, the value of T(n) multiplies by 2. (For large values of n, adding 1 has a negligible effect.)

What function has that property, where multiplying the input by four results in the output being multiplied by 2? We know the function is sublinear, because if it were linear, we would find that T(4n) is approximately 4T(n), for large n. But it's only 2T(n).

We know it's bigger than logarithmic, because we know that for logarithmic functions, multiplying the input by a constant results in a mere constant being added to the output.

So, what function has the property that T(4n) is 2T(n)? Make some guesses. And this will be the function that conveniently represents your order of growth.
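
In case you want to check your guess afterwards: T(n) = sqrt(n) has exactly that property, since T(4n) = sqrt(4n) = 2*sqrt(n) = 2*T(n). The Master theorem gives the same answer: with a = 2, b = 4 and f(n) = 1, we compare f(n) against n^(log_4 2) = n^(1/2); f grows more slowly, so T(n) = Theta(sqrt(n)).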

Someone help me please...
How will I compute the time complexity of this algo?


do {
    h = h - 1;
} while (h != n);

Very carefully, I hope! Why don't you start by learning how to compute time complexity for algorithms, in general.

Hi Nub
Time complexity has cases:
1: Best case
2: Average case
3: Worst case

Let's take Quick Sort as an example.

It depends upon the nature of the input you are giving to the algorithm...

If you give it an already sorted list [1 2 3 4 5 6 7], it will take O(n^2) comparisons (with the usual textbook choice of the first or last element as the pivot).

If you provide a random list it will be n log(n), and we know n^2 is worse than n log n,
so the sorted input becomes the worst case.

To compute the average case, use the same simple approach as finding the average of
n numbers: sum the running times over all possible inputs and divide by the number of inputs.

Please tell me if this helps, or if I should explain it in a better way.
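
To make the sorted-input case concrete, here is a rough sketch of a quicksort that always takes the last element as the pivot (just an illustration, not code from this thread). On an already sorted array every partition peels off only one element, so you get about n levels of roughly n comparisons each, i.e. O(n^2); on typical random input the splits are reasonably balanced and the total is O(n log n).

public class QuickSortSketch {
    // Sort a[lo..hi] using the last element as the pivot.
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);   // on sorted input this side holds everything but the pivot
        quickSort(a, p + 1, hi);   // ...and this side is empty, so the recursion goes ~n levels deep
    }

    // Move everything <= pivot to the left and return the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {   // about (hi - lo) comparisons per call
            if (a[j] <= pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}

Picking a random pivot (or the median of three) is the usual way to avoid hitting that sorted-input worst case in practice.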

Sum()
    Integer x, y, z
    Read x, y
    z = x + y
    print "the sum of x & y is": z
end Sum()

commented: This was just as meaningless as the same thing you bumped on another thread -7

Thanks for the explanation of quick sort.
Can you explain bubble sort and merge sort?
