How would you compute and denote the time complexity of two nested loops that depend on different variables?
I understand how to compute the time complexity of a single loop.

Pseudocode

X = 0
while X < N
{
    Y = 0
    while Y < M
    {
        Z++
        Y++
    }
    X++
}

The outer loop runs N times, so its time complexity is O(N).
The inner loop runs M times, so its time complexity is O(M).
M happens to be an unknown function of N.

Would the time complexity be O(MN)?
But in big-O notation you are supposed to drop all excess variables,
in which case the time complexity would be O(N)?

Think about it logically. You have N executions of the outer loop, and for each of those, executions of the inner M loop, so I would think the complexity is O((N^2)M). Of course, I don't really know much about big-O notation (I'm entirely self-taught). Does anybody know if O((N^2)M) is right, or have any good resources for learning big-O notation?

That means M = f(N), and you can't assume anything about f(N), so the runtime would be O(N * f(N)).
That could be linear, polynomial, quadratic, factorial, or whatever, all depending on f(N).
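To see this concretely, here is a small Python sketch that counts how many times the inner loop body runs for a given N and a given (hypothetical, illustrative) choice of f:

```python
def count_iterations(N, f):
    """Count inner-loop body executions for the nested loops above."""
    count = 0
    for x in range(N):           # outer loop: N passes
        for y in range(f(N)):    # inner loop: f(N) passes per outer pass
            count += 1
    return count

# If f(N) = N, the total work is N * N = N^2:
print(count_iterations(10, lambda n: n))      # 100
# If f(N) = 2N, the total work is N * 2N = 2N^2:
print(count_iterations(5, lambda n: 2 * n))   # 50
```

The total is always N * f(N), whatever f turns out to be, which is why the answer stays O(N * f(N)) until you know more about f.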


Thank you! I don't have any formal training in time complexity, and all the examples online only show how to denote a single variable. I wrote a whole program and was trying to figure out its time complexity, but it had so many variables that I didn't know how to express it. This should help me finish.

My only question now is: is there any real point to time complexity? Granted, it helps show how fast or slow an algorithm is in terms of iterations, but it doesn't fully represent the actual running time. Take insertion sort: its average time complexity is O(N^2), but if you are dealing with mostly sorted data, which most programs are, it would be O(N*D), D being the number of elements that need to be inserted. So is there really a point?
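You can check that claim about mostly sorted data yourself. A quick Python sketch that counts the element shifts insertion sort performs (the shift count here is just an illustrative proxy for the work done):

```python
def insertion_sort_shifts(data):
    """Insertion sort that counts how many element shifts it performs."""
    a = list(data)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift one element to the right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return shifts

print(insertion_sort_shifts(range(100)))         # already sorted: 0
print(insertion_sort_shifts(range(100, 0, -1)))  # reversed: 4950 (~N^2/2)
```

On already-sorted input the inner while loop never fires, so the work is linear; on reversed input every element shifts past every earlier one, giving the quadratic worst case.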

I believe the point really lies in the comparison between different big-O values. For example, take the problem of sorting a dictionary. Looking at two common algorithms, insertion sort and merge sort, we want to decide which one to use. Insertion sort's best case is O(n), as opposed to O(n log n) for merge sort, so for large arrays insertion sort will be faster in the best case. But looking at the worst case, insertion sort is O(n^2) and merge sort is O(n log n), so for large arrays merge sort will be better in the worst case.

The question is which to use. If, for example, the user is going to enter each element individually, insertion sort will be better, because at any step the array will already be mostly sorted. But if we load the dictionary from a file, we get all the data at once, and since the data could be completely unsorted, we would want merge sort. Basically, big-O notation not only lets you benchmark an algorithm for bragging rights, it also tells you when each algorithm is most appropriate.

Big-O allows you to compare different algorithms. That is the point you should take from this. How else would you cleanly compare bubble sort against merge sort?
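For a concrete version of that comparison, here is a sketch that counts comparisons (not wall-clock time) for bubble sort versus merge sort on the same random input; the function names are just for this illustration:

```python
import random

def bubble_sort_comparisons(data):
    """Bubble sort, returning the number of comparisons it makes."""
    a = list(data)
    comps = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comps

def merge_sort_comparisons(data):
    """Merge sort, returning the number of comparisons it makes."""
    comps = 0
    def sort(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = sort(a[:mid]), sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]
    sort(list(data))
    return comps

data = [random.randrange(1000) for _ in range(512)]
print(bubble_sort_comparisons(data))  # always n(n-1)/2 = 130816 for n = 512
print(merge_sort_comparisons(data))   # roughly n log n, a few thousand
```

The O(n^2) versus O(n log n) gap shows up directly in the counts, which is exactly the kind of comparison big-O predicts without running anything.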