I'm confused about how to calculate the running time of an algorithm. I've searched online, but I can't understand any of the explanations, hence my question: how do you calculate the running time of any algorithm? I'd love an explanation with any example, or you can use the one given below. Thank you; any help at all is very much appreciated.

The Max Average problem is defined as follows:
Input: array A[1..n] containing signed integers A[1]...A[n]
Output: max Aver(i,j) satisfying 1 ≤ i ≤ j ≤ n, where Aver(i,j) = sum(i,j)/(j-i+1) is an average and sum(i,j) is the sum of all values from index i to index j: A[i] + ... + A[j].

For example, if A[1..8] = {1,-2,5,-1,-3,3,-2,7}, then the max average is 9/6 for i=3 and j=8. That is, there do not exist two other indices i, j such that 1 ≤ i ≤ j ≤ 8 and Aver(i,j) > 9/6.
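The pseudocode below isn't directly runnable, so here's a quick Python check of the arithmetic for the given pair i=3, j=8 (the helper `aver` is mine, just mirroring the definition above):

```python
# Check the arithmetic for the example above (1-based indices i=3, j=8).
A = [1, -2, 5, -1, -3, 3, -2, 7]  # A[1..8] from the example

def aver(i, j):
    """Aver(i, j) = sum(i, j) / (j - i + 1), using 1-based indices."""
    s = sum(A[i - 1:j])           # A[i] + ... + A[j]
    return s / (j - i + 1)

print(aver(3, 8))  # sum(3, 8) = 9, so 9/6 = 1.5
```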

```
// sum[j] holds the max of all sums ending in A[j]
maxsum := A[1]
maxAver := A[1]
i := 1                              // start index of the current run (needed for the average)
create array sum[1..n] of size n
sum[1] := A[1]                      // set first maxsum
for (j := 2; j ≤ n; j++)            // iterate over j
    if (sum[j-1] + A[j] ≥ A[j])     // calculate max sum(i,j) for j and all i such that i ≤ j
        sum[j] := sum[j-1] + A[j]
    else
        sum[j] := A[j]
        i := j                      // the run restarts at j
    if (sum[j] > maxsum)            // remember max
        maxsum := sum[j]
        maxAver := maxsum / (j - i + 1)
return maxAver
```

Thank you.

Last Post by JeffGrigg

I'm confused about how to calculate the running time of an algorithm. I've searched online, but I can't understand any of the explanations, hence my question: how do you calculate the running time of any algorithm?

You count how many primitive operations it takes (as a function of the size of the input). Then you convert that formula to O notation, since the actual formula depends on your notion of what a "primitive operation" is.
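For instance, here's a Python sketch that counts primitive operations while summing a list (what counts as a "primitive operation" is a choice; here I'm counting assignments and additions):

```python
# Summing an n-element list: count the primitive operations as a function of n.
def total_with_count(xs):
    ops = 0
    total = 0          # 1 assignment
    ops += 1
    for x in xs:       # the loop body runs n times
        total += x     # 1 addition + 1 assignment per iteration
        ops += 2
    return total, ops  # ops = 2n + 1, which is O(n)

print(total_with_count([4, 8, 15, 16, 23]))  # (66, 11): 2*5 + 1 operations
```

A different notion of "primitive operation" would change the formula (say, to 3n + 2), but not the O(n) classification, which is exactly why O notation is used.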

If the number of primitive operations depends on the actual value of the input, not just its size, then you calculate some upper bound on the number of primitive operations your function could take. Then you convert that formula to O notation.
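Linear search is the classic example of this: here's a Python sketch where the actual operation count depends on the values, but the upper bound is n, so it's O(n):

```python
# Linear search: the number of comparisons depends on the values, not just n.
def find_with_count(xs, target):
    comparisons = 0
    for i, x in enumerate(xs):
        comparisons += 1
        if x == target:
            return i, comparisons   # best case: found immediately, 1 comparison
    return -1, comparisons          # worst case: n comparisons, hence O(n)

print(find_with_count([3, 1, 4, 1, 5], 3))   # (0, 1)  -- fast for this input
print(find_with_count([3, 1, 4, 1, 5], 9))   # (-1, 5) -- the upper bound, n
```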

Edit: A "primitive operation" has to be one that takes a bounded amount of time. That is, there is some value C for which every "primitive operation" in the programming language takes less than C seconds. That way your computation of running time corresponds to actual time. In certain circumstances, you might make different assumptions that don't correspond to actual clock-time, depending on your needs.

Edited by Rashakil Fol: n/a

For example...

```
// (sum[j] holds max of all sums ending in A[j])
maxsum := A[1]                  // This line is a quick constant-time operation.  It's 'O(1)'
maxAver := A[1]                 // again 'O(1)'
i := 1                          // 'O(1)' (start index of the current run)
create array sum[1..n]          // This line could be "slow" but can be considered 'O(1)' or 'O(C)' (where 'C' is some constant)
sum[1] := A[1]                  // again 'O(1)'

for (j := 2; j ≤ n; j++)        // Here's where things get interesting: this loop executes 'n-1' times.
    if (sum[j-1] + A[j] ≥ A[j]) // These are all constant-time operations.
        sum[j] := sum[j-1] + A[j]
    else
        sum[j] := A[j]; i := j  // constant time; the run restarts at j

    if (sum[j] > maxsum)        // again; constant time
        maxsum := sum[j]
        maxAver := maxsum / (j - i + 1)  // more constant-time operations

return maxAver                  // one trivial operation
```

So the time this code takes to execute is '1 + C1 + (n-1)*(C2+2+C3) + 1' or so. If 'n' is a large number, '(n-1) times some constant' is all that matters. Constants and small numbers don't matter. Neither does "plus or minus one" (or +/- any constant).

So the code is 'O(N)'.

This means that as "N" becomes large, the execution time of this code will grow essentially linearly. (Generally, that's a good thing. ;-)
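To see that linearity concretely, here's a Python translation of the pseudocode above with an iteration counter (`start` stands in for i, the start index of the running sum; this translation is mine, not the original code):

```python
# A Python sketch of the pseudocode above, instrumented to count loop iterations.
def max_aver(A):
    maxsum = A[0]                   # O(1) setup
    best_aver = A[0]
    start = 0                       # start index of the sum ending at the current j
    cur = A[0]                      # max of all sums ending at the current j
    iterations = 0
    for j in range(1, len(A)):      # the loop body runs n - 1 times
        iterations += 1
        if cur + A[j] >= A[j]:
            cur = cur + A[j]
        else:
            cur = A[j]
            start = j               # the run restarts at j
        if cur > maxsum:            # remember max
            maxsum = cur
            best_aver = maxsum / (j - start + 1)
    return best_aver, iterations

print(max_aver([1, -2, 5, -1, -3, 3, -2, 7]))  # (1.5, 7): 9/6, after n-1 = 7 iterations
```

Doubling the input length doubles the iteration count, and each iteration does a constant amount of work, which is what O(N) growth means in practice.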

(Note that for small values of N, "order" notation can be misleading. A large constant setup time doesn't matter if you're processing millions of records, but can make a big difference if you're only doing ten.)
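A toy illustration of that point in Python, with invented cost formulas (the constants here are made up for the example):

```python
# Two hypothetical running-time formulas, both O(n):
f = lambda n: 1000 + 2 * n    # large constant setup cost
g = lambda n: 5 * n           # no setup cost, but a bigger per-item cost

print(f(10), g(10))           # 1020 vs 50      -- the setup cost dominates for small n
print(f(10**6), g(10**6))     # 2001000 vs 5000000 -- f is cheaper for large n
```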
