What are some hints or tips on writing efficient functions? I read about converting the code to assembly, and I've also read about the stack and functions. However, I fail to understand how data stored deeper in a LIFO stack can be accessed before everything above it is popped, since any local variable can be accessed at any point in its scope. Obviously I'm misunderstanding something.
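(Side note on the stack question: the call stack is LIFO only at the level of whole call frames. Within the frame of the currently running function, the generated code reaches every local variable directly through a fixed offset from the frame or stack pointer, so nothing has to be popped to read or write a variable mid-scope. A rough sketch, with purely illustrative offsets in the comments:)

    // Each call to sum_of_squares pushes one frame; the frame is popped only
    // when the function returns. Inside the frame, locals are addressed by
    // offset, in any order, as often as needed.
    int sum_of_squares(int n)
    {
        int total = 0;              // e.g. lives at [frame pointer - 4]
        for (int i = 0; i < n; ++i) // e.g. lives at [frame pointer - 8]
            total += i * i;         // both locals read/written freely, by offset
        return total;               // the whole frame is popped here, at return
    }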


Step 1. Understand the problem thoroughly.

Step 2. Realize that I was really serious about step 1, and go back and understand the problem even more thoroughly.

Step 3. Write the clearest, most straightforward program you can that solves the problem correctly.

Step 4. Measure your program's performance. If you're satisfied with the results of your measurements, you're done.

Step 5. Your measurements will tell you what part(s) of your program are spending the most time. Rewrite those parts to make them faster. The most effective rewriting strategy will often be to find a way to avoid computing the same result more than once, or to find a more efficient algorithm. Now go back to step 4.

My experience is that many programmers will omit one or more of these steps in an effort to get the job done more quickly. Most of the time, doing so has the opposite of the intended effect.
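As a concrete illustration of steps 4 and 5, here is a minimal sketch of my own (not from this thread): time the straightforward version first, then apply a step-5 style rewrite that avoids computing the same result more than once, and measure again.

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // Deliberately naive: recomputes the same subproblems over and over.
    std::uint64_t fib_naive(unsigned n)
    {
        return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
    }

    // Step-5 style rewrite: each value is computed exactly once.
    std::uint64_t fib_iterative(unsigned n)
    {
        std::uint64_t a = 0, b = 1;
        for (unsigned i = 0; i < n; ++i) {
            std::uint64_t next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    int main()
    {
        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t r1 = fib_naive(35);
        auto t1 = std::chrono::steady_clock::now();
        std::uint64_t r2 = fib_iterative(35);
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::cout << "naive:     " << r1 << " in " << ms(t1 - t0).count() << " ms\n"
                  << "iterative: " << r2 << " in " << ms(t2 - t1).count() << " ms\n";
    }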


Are there any steps I can take while coding any function that will make it better? Is functional programming better? Or is it just safer in some environments? Is a recursive function faster than a few loops? Should I use new variables when passing values around, or should I stick with pointers? There's probably no easy answer to these questions, but I'm hoping I'm wrong about that.
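(To make two of those questions concrete, here is what the alternatives typically look like in C++. This is just an illustration of the shapes being compared; which one is faster in a given program is exactly what the measuring advice above is meant to answer.)

    #include <cstddef>
    #include <iostream>
    #include <string>

    // Recursion vs. a loop: same result, different shape. Whether one is
    // faster depends on the compiler and the input -- measure it.
    unsigned long sum_recursive(unsigned n)
    {
        return n == 0 ? 0 : n + sum_recursive(n - 1);
    }

    unsigned long sum_loop(unsigned n)
    {
        unsigned long total = 0;
        for (unsigned i = 1; i <= n; ++i) total += i;
        return total;
    }

    // "New variables" vs. pointers: for large objects the usual middle ground
    // in C++ is a const reference -- no copy is made and no pointer syntax is
    // needed at the call site.
    std::size_t count_spaces(const std::string& text)
    {
        std::size_t count = 0;
        for (char c : text)
            if (c == ' ') ++count;
        return count;
    }

    int main()
    {
        std::cout << sum_loop(100) << " " << sum_recursive(100) << "\n"; // 5050 5050
        std::cout << count_spaces("a b c") << "\n";                      // 2
    }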

Are there any steps I can take while coding any function that will make it better?

Make it as simple as possible. That makes it easier to measure, and also easier to optimize the hot spots that the measurement reveals.

Just some tips:

1) Write your program first; make it reasonable, clear, and readable
2) Then run it and see if it meets your performance requirements
3) If not, profile it and fix the hotspots (see the sketch just below for a quick way to time a suspect region)
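A real profiler (gprof, perf, the Visual Studio profiler, and so on) is the right tool for finding hotspots, but when you just want a quick number for one suspect region, a small scoped timer is often enough. A minimal sketch of my own, not anything from the posts above:

    #include <chrono>
    #include <iostream>
    #include <string>

    // Prints the elapsed wall-clock time for the scope it lives in.
    class ScopedTimer {
    public:
        explicit ScopedTimer(std::string label)
            : label_(std::move(label)), start_(std::chrono::steady_clock::now()) {}

        ~ScopedTimer()
        {
            auto end = std::chrono::steady_clock::now();
            std::chrono::duration<double, std::milli> elapsed = end - start_;
            std::cout << label_ << ": " << elapsed.count() << " ms\n";
        }

    private:
        std::string label_;
        std::chrono::steady_clock::time_point start_;
    };

    int main()
    {
        ScopedTimer timer("fill and sum");    // reports when it goes out of scope
        long long sum = 0;
        for (int i = 0; i < 10000000; ++i) sum += i;
        std::cout << "sum = " << sum << "\n"; // keeps the loop from being optimized away
    }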

Your question is too vague to answer precisely, but I think arkoenig's and firstPerson's answers are quite good instructions for "generic", small-scale performance testing and optimization.

Overall, people usually talk about three levels of optimization: macro-, micro-, and nano-optimization. You want to proceed through them in that order. Macro-optimization is about defining the software architecture that properly splits up memory and processing, and it is done before you write a single line of code; this corresponds to the first and second steps that arkoenig posted. It is by far the most important level because it is the hardest to fix and the hardest to measure. You have to really make sure you understand the problem you are trying to solve and that you can divide it into subproblems that are as independent from each other as possible. When the inputs and outputs of each subproblem are well defined and thought out, and you have a clean separation of memory between the subproblems (memory locality is a big factor in running speed), then the software architecture (which classes or functions you will write and in which libraries they will live) follows quite naturally. If you screw up at this stage you are in trouble, because if you later find the design to be inefficient, you may have to start the whole thing over from scratch.
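As one small, hedged example of the kind of memory-layout decision that belongs at this architecture stage (my illustration, not something from the post above): how you group your data determines locality long before any code tuning is possible.

    #include <iostream>
    #include <vector>

    // "Array of structs": each particle's fields sit together in memory.
    // Fine when you usually touch every field of one particle at a time.
    struct ParticleAoS { double x, y, z, mass; };

    // "Struct of arrays": each field lives in its own contiguous array.
    // Better when a hot loop touches only one field (e.g. summing masses),
    // because the cache then loads only the data that loop actually needs.
    struct ParticlesSoA {
        std::vector<double> x, y, z, mass;
    };

    double total_mass(const ParticlesSoA& p)
    {
        double total = 0.0;
        for (double m : p.mass)   // contiguous reads over just the mass array
            total += m;
        return total;
    }

    int main()
    {
        ParticlesSoA p;
        for (int i = 0; i < 1000; ++i) {
            p.x.push_back(i); p.y.push_back(i); p.z.push_back(i);
            p.mass.push_back(1.0);
        }
        std::cout << total_mass(p) << "\n";   // 1000
    }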

The second level is micro-optimization. Once you have coded all your functions and classes in the most straightforward way that works correctly, you can assess the performance with a profiler and see whether anything needs to be faster. If it does, you're in micro-optimization territory. This includes both "algorithmic" optimization, i.e. improving the theoretical complexity with cleverer algorithms, and "computational" optimization, i.e. merging for-loops, reducing branching, avoiding deep recursion, etc. The two are similar, but in the former you work mostly with pen and paper, while in the latter you work with the code directly. A good exercise is to turn a basic Gaussian elimination routine into an LU-decomposition routine: if you start from the Gaussian elimination code, you can optimize the loops and branches until what remains is an LU-decomposition algorithm.
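A tiny example of the "computational" side, my own sketch rather than anything from the post: two passes over the same data merged into one, so the array travels through the cache once instead of twice.

    #include <iostream>
    #include <vector>

    // Before: two separate passes over the same vector.
    double sum_then_scale(std::vector<double>& v, double factor)
    {
        double sum = 0.0;
        for (double x : v) sum += x;       // pass 1: accumulate
        for (double& x : v) x *= factor;   // pass 2: scale in place
        return sum;
    }

    // After: one merged pass doing both jobs. Same observable result, but the
    // data is read from memory only once.
    double sum_and_scale(std::vector<double>& v, double factor)
    {
        double sum = 0.0;
        for (double& x : v) {
            sum += x;
            x *= factor;
        }
        return sum;
    }

    int main()
    {
        std::vector<double> v{1, 2, 3, 4};
        std::cout << sum_and_scale(v, 2.0) << "\n";   // 10, and v is now {2, 4, 6, 8}
    }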

The last level applies only if you have a very localized bottleneck, like a few simple functions that are called an enormous number of times and that you believe are slower than they could be. Nano-optimization then involves inspecting the assembly code the compiler produces for those functions, and if it is not to your satisfaction (which is hard to judge unless you are already an expert), you can resort to inline assembly and platform-specific instructions to make it faster. Cache optimization also belongs here, for example splitting large loops so that the data being worked on does not have to be reloaded into the cache at every iteration. But remember that nano-optimization is hard and tedious (and requires serious expert knowledge), so do not resort to it until everything else has failed to give you the performance you need. For instance, I have never really needed it myself; I keep only a shallow knowledge of it to help with small coding decisions.
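As a hedged illustration of the cache-oriented part (again my own sketch): "blocking" or "tiling" a loop nest so that each small tile stays in cache while it is being worked on. Matrix transpose is the classic case.

    #include <cstddef>
    #include <vector>

    // Naive transpose: the writes to dst stride through memory a whole row
    // apart, so for large n almost every write misses the cache.
    void transpose_naive(const std::vector<double>& src, std::vector<double>& dst,
                         std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                dst[j * n + i] = src[i * n + j];
    }

    // Blocked transpose: work on small B x B tiles so that both the source
    // tile and the destination tile fit in cache while they are being copied.
    void transpose_blocked(const std::vector<double>& src, std::vector<double>& dst,
                           std::size_t n)
    {
        const std::size_t B = 32;   // tile size; tune it for the target cache
        for (std::size_t ii = 0; ii < n; ii += B)
            for (std::size_t jj = 0; jj < n; jj += B)
                for (std::size_t i = ii; i < ii + B && i < n; ++i)
                    for (std::size_t j = jj; j < jj + B && j < n; ++j)
                        dst[j * n + i] = src[i * n + j];
    }

    int main()
    {
        const std::size_t n = 256;
        std::vector<double> a(n * n, 1.0), b(n * n, 0.0);
        transpose_naive(a, b, n);
        transpose_blocked(a, b, n);
    }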

Also, just to point it out again: premature optimization is bad. It will make your code unreadable, unclear, and just plain bad. So start optimizing, if you need to at all, after you're done with the program, and NOT DURING.
