Question of the century? I basically want to know whether it would be more efficient to write this code as several separate variables or as small arrays.

int x = 34;
int y = 28;
int z = 293;

vs

double coordinate[3] = {34, 28, 293};

I have a coordinate struct which I will use in the following way:

typedef struct coordinates_t {
    double x;   // C structs can't have default member values,
    double y;   // so set these to 0.0 when an instance is created
    double z;
} coordinates;


typedef struct car_t {
    coordinates start;         // car starting point
    coordinates location;      // car current location
    coordinates targCarVector; // vector to car from target
    coordinates altitude;      // altitude of car
    coordinates distance;      // distance from car start to current position
} car;

I'll need to do things like:

distance = car1.location - car1.start;
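(I realize C won't let me subtract two structs with - directly, so I imagine I'd wrap it in a small helper that uses the coordinates struct above; coord_sub is just a name I made up:)

// component-wise subtraction of two coordinates: returns a - b
coordinates coord_sub(coordinates a, coordinates b)
{
    coordinates r;
    r.x = a.x - b.x;
    r.y = a.y - b.y;
    r.z = a.z - b.z;
    return r;
}

// so the line above would really be written as:
distance = coord_sub(car1.location, car1.start);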

If I don't use an array I'll have to write a lot of lines of code, but if I use an array I'll have to use loops. Are arrays and loops more memory/CPU intensive? I'm basically trying to see which is the most efficient way of writing this code.
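To make the comparison concrete, here's roughly what I mean by the loop version (the values are just examples):

double location[3] = {34.0, 28.0, 293.0};
double start[3]    = {10.0,  5.0, 100.0};
double distance[3];
int i;

// one short loop replaces three explicit subtraction lines
for (i = 0; i < 3; i++)
    distance[i] = location[i] - start[i];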

Thanks,
DemiSheep

All 5 Replies

>I am basically trying to see which is the most efficient way of writing this code.
At this point I'd say that trying to micromanage your memory usage and CPU cycles is unproductive. Both options are extremely likely to be fast enough for your purposes, so it doesn't matter which is faster. Don't excessively optimize without good reason.

You need to use ARRAYS of structs, of course - if you have multiple cars to track. ;)
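For example, a quick sketch using the structs from the question (NUM_CARS and the values are purely illustrative):

#define NUM_CARS 4

car cars[NUM_CARS] = {0};   // every coordinate of every car starts at 0.0

cars[0].location.x = 34.0;  // update one car's position
cars[0].location.y = 28.0;
cars[0].location.z = 293.0;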

For your case, of course, arrays would be preferred - not because of efficiency, but from a design point of view.
Proper use of data structures improves the efficiency as well as the maintainability of the code.

>Don't excessively optimize without good reason.

This, or "Premature optimization is the root of all evil."

My advice is to optimize for programmer time, not for run time. A simple change to a program might take you fifteen minutes to an hour. A major change to a serious program might take days or weeks. The time it takes to kill a typical bug depends mostly on how well the code was written to begin with, but it can be an hour, a day, or effectively infinite. Most programs with a defined start-to-finish computation path run in fractions of a second, or in minutes for a really intensive piece of computation.

The upshot of all of this is that optimizing for running time is very seldom going to make any difference to you. If you are handling a lot of data, or doing very intense work on a moderate amount of data while rendering the results to a screen in real time and responding to user input, then optimizing for run time is very important. If you're running a Game of Life universe of 1000x1000 cells or more, you want to be smart about how you calculate your live and dead cells, or you'll notice some lag. If you're trying to move objects around a screen in a lifelike fashion while your user is pretending to shoot at them, run-time optimization is probably worth doing.

These applications are very specialized, and you'll know them when you come to them; most of the time it just doesn't matter whether you save a cycle here or there. What does matter is whether the person trying to change the program in six months (who is likely to be you) can figure out what you meant by that clever trick - that's optimizing for programmer time.

This is not to say that you should write bad code and excuse it by saying it doesn't matter - it's worth looking for a good algorithm and implementing it well - but that's a matter of craftsmanship, not tuning.
