Hi,
I have seen a lot of people say things like 'this piece of code is more efficient than the other', 'this gives better performance', or 'this is more efficient memory-wise', but I fail to understand how, in a normal coding scenario, I can find out the efficiency of a piece of code. I'm not talking about the very obvious scenarios which are already written down in books or journals.

Are there any techniques to find out the performance/efficiency/memory map of a C++ program?


thanks
chandra

First of all, the performance of a piece of code can be measured by the time it takes and the memory it consumes. I've been working on code optimization and memory management for over 8 months now, and I've come across various scenarios in which performance can be improved.

Myth

  • Memory (RAM) operations are fast.

It's only true to an extent. I have come across situations where having data on my hard drive and accessing it from there was much faster. Having huge amounts of data in RAM will result in more page faults, thus reducing performance.

So the trick is to use the right data structure. Believe me, having the right data structure will improve the efficiency of your code by a huge margin.
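
As a small illustration of that point, here is a minimal sketch (with assumed example data and container choices) comparing a linear scan of a std::vector against a hash-based std::unordered_set lookup; picking the right container for a lookup-heavy workload can change the cost of each search from O(n) to roughly constant:

#include <algorithm>
#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<int> vec;
    std::unordered_set<int> set;
    for (int i = 0; i < 1000000; ++i) {
        vec.push_back(i);      // contiguous storage, but lookups scan element by element
        set.insert(i);         // hash table, built for fast membership tests
    }

    // Linear scan: up to n comparisons per lookup.
    bool inVector = std::find(vec.begin(), vec.end(), 999999) != vec.end();

    // Hash lookup: roughly constant time per lookup.
    bool inSet = set.count(999999) != 0;

    std::cout << inVector << ' ' << inSet << '\n';
    return 0;
}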

Time it takes? So you mean I should probably start and stop a timer to check which particular approach takes less time, assuming that the time difference is too small to be noticeable manually? Can you give me some references that you've been using? How do I test which data structure is proving most efficient for my application?

If you have Visual Studio Team System (the expensive edition) then there is an excellent code profiler built into it.

The code profiler will tell you how long each and every function in your application took to execute. You could do the same thing by hand by starting and stopping high-resolution timers, but it would be tedious.
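
If you don't have the profiler, the hand-rolled approach is still workable. Assuming a compiler with C++11 support, std::chrono gives a portable high-resolution clock; this is just a minimal sketch of wrapping a block of code with it (the loop inside is a stand-in for whatever you want to measure):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();

    // ... the block of code you want to measure ...
    long long sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += i;

    auto stop = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);

    std::cout << "sum = " << sum << ", elapsed = " << elapsed.count() << " microseconds\n";
    return 0;
}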

There is no one solution or data structure to make your code faster. It takes some thought, elegant design and a reasonable understanding of computer science in general. The code profiler will tell you which functions are taking all the execution time and thus where to spend your effort doing optimisations.

You are right, the best way to measure time is to use a timer.
Don't do it manually; do it programmatically:

// Measuring performance
#include <ctime>   // for clock() and clock_t

clock_t commence, complete;
commence = clock();
// Code for which you want to measure

complete = clock();
clock_t lTime = complete - commence;

The output will be in milliseconds.

If a piece of code takes 100 milliseconds to execute with one method but another method takes 50 milliseconds, then obviously the second method is better. For a normal person the difference between 100 and 50 ms is small, but for a developer it's huge. Every millisecond counts.

Regarding which data structure is more efficient... I'm sorry, my friend, it's mostly trial and error, programming skill, and how well you understand the language.

Yeah, I can do this.

And thanks vijayan for the links; I'm going through them.

>>The output will be in milliseconds.
But don't be surprised when the result is 0, which means the code runs so fast that it isn't measurable. In that case you may have to run the code several (hundreds or thousands of) times between timings, probably in a loop.
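
For example, a minimal sketch of that idea, where doSomething() is just a hypothetical stand-in for the code being measured:

#include <cmath>
#include <ctime>
#include <iostream>

// hypothetical stand-in for the code being measured
double doSomething(double x) { return std::sqrt(x) * std::sin(x); }

int main()
{
    const int iterations = 1000000;
    volatile double sink = 0;          // keeps the compiler from optimising the loop away

    clock_t commence = clock();
    for (int i = 0; i < iterations; ++i)
        sink = doSomething(i);         // run the code many times so the total is measurable
    clock_t complete = clock();

    // difference between the two clock() readings; divide by 'iterations' for a per-call figure
    std::cout << "elapsed: " << (complete - commence) << '\n';
    return 0;
}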

> The output will be in milliseconds.
The output will be in the number of clock ticks.

(complete-commence)*1000.0 / CLOCKS_PER_SEC would give milliseconds.

Really surprised that no answers discuss Time Complexity (big-O) analysis of code. Although this question is old, folks still find it, so let me share this advice. Before you even use profilers, timers, or other tools to see how fast your code executes, take some time to learn about time complexity analysis. There's also a space complexity corollary for considering how much memory a solution requires.

https://en.wikipedia.org/wiki/Time_complexity

Regardless of how fast your host or memory or other relevant parts of your infrastructure are, you can make some logical assumptions about performance using time complexity analysis.

If, for example, you have an array of length "n" that you are using programmatically, you can imagine that comparing every item to every other item in the array requires on the order of "n^2" operations. In time complexity terms, that's referred to as "quadratic" time complexity.

On the other hand, suppose you can find an alternative solution that lets you iterate once through the array (n operations -- called Linear Time), storing intermediate data in a separate data structure like an object in JavaScript (or a hash in other languages), and then iterate once through the keys of the hash to learn something about their values. You have reduced the entire thing to two Linear Time passes (n operations plus some number of operations less than n). It's even better if you know the key you want and can immediately get your answer (Constant Time).

By definition, two Linear Time passes (or one Linear and one Constant) make for a more performant solution than the original Quadratic Time one.
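
To make that concrete in C++ (the language of the original question), here is a hedged sketch that counts duplicate elements two ways: first with the quadratic pairwise comparison, then with a single linear pass that builds counts in a hash table (std::unordered_map) followed by one pass over its keys. The data is just an assumed example:

#include <iostream>
#include <unordered_map>
#include <vector>

int main()
{
    std::vector<int> data = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};

    // Quadratic Time: for each element, scan all earlier elements looking for a match.
    int quadraticDuplicates = 0;
    for (std::size_t i = 0; i < data.size(); ++i)
        for (std::size_t j = 0; j < i; ++j)
            if (data[i] == data[j]) { ++quadraticDuplicates; break; }

    // Linear Time: one pass to build a hash of counts...
    std::unordered_map<int, int> counts;
    for (int value : data)
        ++counts[value];

    // ...then one pass over the (at most n) keys.
    int linearDuplicates = 0;
    for (const auto& entry : counts)
        linearDuplicates += entry.second - 1;   // each repeat occurrence is a duplicate

    std::cout << quadraticDuplicates << ' ' << linearDuplicates << '\n';  // both print 3
    return 0;
}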
