Code 1 :
        for(i=0; i<1000; i++)
                for(j=0; j<100; j++)
                        x = y;

Code 2 :
        for(i=0; i<100; i++)
                for(j=0; j<1000; j++)
                        x = y;


Depends upon the optimizer - both execute 100,000 assignments in the end. If the optimizer is good, it will detect that you are assigning x from y without any computation, so it would optimize out the loops and just assign y to x once.


No! Nothing is given. Multiple choices are there:

a) code 1
b) code 2
c) code 1 = code 2
d) none of these

Ignoring optimization, I would say code 2 is minutely faster because it doesn't have to initialize the inner loop as often as code 1. Other than that, they are both the same.

You know there's only one way to know for sure which, on a given system, will run faster. Build them, time them.

Nope, this is a well-defined question. We have to answer it with proper, well-reasoned arguments, without running it anywhere. When I do run it, code 1 is faster. Still, why? Thanks.

this is a well-defined question.

About a fictional machine that doesn't exist. So we might as well say that it runs in zero time, or that it takes forever, or that the answer is beans on toast. A compiler might turn them both into this:

        x = y;

in which case they're identical code, or it might optimise one loop better than the other, or something else, or something else, or something else.

Well-defined question? Well then where is the rest of the question that perfectly describes the machine to use and the compiler that will make the program?

Again, it comes down to compiler optimizations. To really see which is faster, you need to disable ALL optimizations, either via a command-line argument, or an IDE configuration option.

Obviously Code 2 will run faster. The reason is that the loop body runs 100,000 times in both versions; it's the outer loop that runs 1,000 times in Code 1 but only 100 times in Code 2, so Code 2 pays one tenth of the outer-loop overhead.

Are nested for-loops commutative or not? Is there a proof for it?
Like integer multiplication and addition are commutative: 2×3 = 3×2 and 3+2 = 2+3.

Ah! The black boxes of compilers and optimizers! In theory, the inner loop is executed 100K times, in either case. How the compiler performs loop optimizations is the biggest issue here. In theory, the one where the outer loop is only executed 100 times should be faster (fewer initializations of the inner loop), but as nitin1 indicated, the first one was actually faster. I stand by my contention that it was a compiler optimization side-effect, and the only way to tell if one is faster than the other is to disable all optimizations. I suspect that they should, in such a case, run identically! Or close enough over a number of runs to be such... :-)