One of the best tips I learned long ago was to turn on all warnings and error messages from the compiler or development system.

Yet last week, once again, a big system was being worked on and the bug was tracked down to an uninitialized variable.

Now this isn't specific to last week's bug hunt, but it amazes me how much time (make that money) is lost over what our tools can report to us for free. I've seen projects invest heavily in code analysis tools yet miss the basics. Here are a few project-manager replies explaining why they don't look at compiler warnings.

"The code runs fine. We test it thoroughly."
"It would cost too much to fix those."
"It's only a warning. Won't hurt."

How about you? Got any good replies like that?

All 11 Replies

My (former) boss never understood the concept of "technical debt".

I guess you know the story of the Ariane 5 rocket? Cost a gazillion Euros and blew up soon after launch.
The guidance software assigned a 64-bit number to a 16-bit target. Of course the compiler objected, but management reviewed the problem and decided the value would always be within the 16-bit range, so don't bother to fix it. Wrong.
I use that story a lot.
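
A minimal sketch of that kind of narrowing bug, with made-up names rather than anything from the actual flight code; with g++ -Wconversion the compiler flags the assignment:

    #include <cstdint>

    // Illustrative only: a 64-bit floating-point value implicitly
    // narrowed into a 16-bit signed integer.
    std::int16_t horizontal_bias(double velocity) {
        std::int16_t bias = velocity;  // g++ -Wconversion: "conversion from
                                       // 'double' to 'int16_t' may change value"
        return bias;                   // garbage once velocity exceeds 32767
    }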

It should be forbidden for a compiler to have an option to turn warnings off.
Some warnings might be ignored after careful consideration.

@ddanbe. Wish it were that simple. As to careful consideration, read James's gazillion-dollar example.

How about you? Got any good replies like that?

The worst is when people assume that because the code compiles (i.e., no fatal errors) it's correct. Oddly enough, warnings often tell the real story about issues at runtime.
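
A tiny sketch of that trap, echoing the uninitialized-variable hunt from the opening post (the function is invented for illustration):

    #include <iostream>

    int sum_to(int n) {
        int total;                   // g++ -Wall -O2 flags this: "'total'
                                     // may be used uninitialized", yet it
                                     // compiles without a single error
        for (int i = 1; i <= n; ++i)
            total += i;              // adds to garbage on the first pass
        return total;
    }

    int main() { std::cout << sum_to(10) << '\n'; }  // prints... something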

There's a bit more to the Ariane story.
The software was originally written for the Ariane 4 rocket, where it worked perfectly. Then they built the 5, which is bigger, heavier, and faster, reused the 4's software, and that's when the overflow happened.
There's another good moral here: requirements and specs are never final, and working software lives on far longer than anyone originally intended. "It works now" isn't good enough.

Then there's also the Martian satellite that was doomed due to a malfunction in inter-program communication.
One program calculated the force in pounds perfectly, passing it to a program that expected the data to be in newtons.
No warnings could ever have helped here. But I suppose this will never happen again in future missions.
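
A compiler warning can't catch a unit mix-up, but the type system can if you ask it to. A sketch with illustrative wrapper types and a hypothetical fire_thruster function:

    // Illustrative wrapper types: pounds can no longer silently pass
    // where newtons are expected.
    struct Newtons     { double value; };
    struct PoundsForce { double value; };

    Newtons to_newtons(PoundsForce f) { return Newtons{f.value * 4.44822}; }

    void fire_thruster(Newtons) { /* send the command... */ }  // expects SI

    int main() {
        PoundsForce measured{12.5};
        // fire_thruster(measured);           // compile error: wrong unit type
        fire_thruster(to_newtons(measured));  // OK: the conversion is explicit
    }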

In some cases, you want to escalate warnings to errors, or at least warnings over a certain level. Most compilers can do this (see the flags below). If you are building safety- or mission-critical systems, that should be required. Then there was the missing semicolon that caused a satellite to crash into the sun instead of orbiting it... Oops!
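
For reference, the usual switches (GCC/Clang shown alongside the MSVC equivalent; check your own toolchain's docs):

    g++ -Wall -Wextra -Werror prog.cpp     # GCC/Clang: every warning is an error
    g++ -Werror=conversion prog.cpp        # or promote just one warning class
    cl /W4 /WX prog.cpp                    # the MSVC equivalent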

@ddanbe - how do you convert newton-feet to furlongs-in-a-fortnight? :-)

But I suppose this will never happen again in future missions.

I wish... I know from good sources that this kind of thing happens again and again about a thousand times for every mission of the European Space Agency. Some are caught before the launch, but most problems are fixed on the fly (literally) by uploading patches. Space agencies, and most notably ESA, simply do not know how to create good software, they engineer the heck out of hardware, but treat software as an afterthought. NASA JPL is pretty much the only exception to this rule (because they treat software as a first-class citizen), but they are not perfect either.

Some warnings might be ignored after careful consideration.

I always recommend doing whatever contortion you need to please the compiler and quiet the warning that way. I've seen software spew tons of warnings about truly harmless code and, in the process, hide a few very important warnings that should not have been ignored, and would not have been ignored had they not been drowned in a sea of harmless warnings. Conversely, I have seen code in which specific warnings are disabled in specific places (using pragmas) after "careful consideration", which turned out to hide specific undocumented assumptions (e.g., running on an x86 platform, or that a class doesn't have a virtual table) that are easily broken by accident in future iterations of the code or project.
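
A small sketch of the two approaches, assuming an unused-parameter warning and made-up handler names:

    // Preferred: a harmless contortion that satisfies the compiler.
    void on_event(int /*event_id*/) {  // unnamed parameter: no warning,
        /* ... */                      // and the intent stays visible
    }

    // Risky: the suppression outlives the "careful consideration".
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-parameter"
    void legacy_on_event(int event_id) {
        /* ... */
    }
    #pragma GCC diagnostic pop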

Strict aliasing rules are a recurring example, because people just keep disabling the rules or the warnings: (1) they are such a pain to work around (as opposed to most other warnings) and (2) people see a performance benefit in violating them. But doing so pretty much makes the entire code base immutable (you cannot make any changes without risking breaking everything) and unportable (you cannot move the code to a different platform than the one it was originally written for). A lot of large applications written for 32-bit systems could never be ported to 64-bit (or only very slowly) for exactly this reason.
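
The classic offender and its portable fix, sketched:

    #include <cstdint>
    #include <cstring>

    float punned(std::uint32_t bits) {
        // Violates strict aliasing: reads a uint32_t through a float*.
        // g++ -O2 -Wall may warn (-Wstrict-aliasing), and the optimizer
        // is allowed to miscompile it.
        return *reinterpret_cast<float*>(&bits);
    }

    float portable(std::uint32_t bits) {
        // The standard-blessed fix: copy the bytes. Compilers optimize
        // the memcpy away, so the performance excuse doesn't hold.
        float f;
        std::memcpy(&f, &bits, sizeof f);
        return f;
    }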

I've seen projects invest heavily in code analysis tools yet miss on the basics.

Because it's easier to buy a license for a tool than to instill a culture that encourages the development of quality software; the former achieves nothing without the latter. For instance, I recently informed a colleague that our new code analysis tool had revealed over 500 potential problems in his code, news I expected he would be happy to hear (as most other people I reported problems to were). Instead I got an angry and defensive stance, followed by an inquisition about how many bugs the tool had found in my code. I replied that there were 4, all real bugs, all fixed now. This is all about people's attitude, not the tools.

commented: I could not agree more. Good post. +13
commented: Thanks for this. Love the "careful consideration" answers. +7

Software will never be perfect, but it can be acceptable today. Tomorrow we must check it again.
