The project I'm currently working on is a large embedded application with a multi-person team. During a recent code review I came across the following statement

bSlcIsolated[logical-1] = (!!p_bIsolated);

The interesting part is the double inversion

!!p_bIsolated

The register being double inverted is a private byte within the class. This is just an example, but the developer has used this convention throughout his code, as well as in other projects.

With unoptimized code, the compiler generates code which inverts the bits twice, i.e. 01h --> 0FEh --> 01h.

With optimized code, the compiler removes the double inversion.

We can't use the optimizer (managerial decision, because people don't know how to use volatile, a different discussion). This is the first time I've ever seen this and I'm interested in any reasoning why we shouldn't just remove it to eliminate code bloat.

Before you ask: I did ask the developer, but he was very vague, indicating it guaranteed the state, blah blah. I do know he found it in a code example he downloaded from the web a few years ago, and has used it everywhere since.


We can't use the optimizer (managerial decision, because people don't know how to use volatile, a different discussion). This is the first time I've ever seen this and I'm interested in any reasoning why we shouldn't just remove it to eliminate code bloat.

It sounds like you might be doing some parallel work here, perhaps. "Guaranteeing" the state of a register also sounds like some sort of hack to avoid a race condition. In serial code, the !!someVar should have no logical effect (especially in the case of native types where, regardless of the type, its underlying bits are just getting flipped twice). For some reason, this idea is ringing a bell, though I can't place it.

Still, it seems like a hack no matter what. Ensuring thread safety should always be done explicitly using thread-safe constructs provided by the language for that specific purpose.

I would suggest that you create some unit tests for a piece of code using this convention and assure yourself that double negating a byte is indeed semantically null.
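
A minimal sketch of what such a test might look like (the names here are hypothetical, not taken from the project under review):

#include <cassert>

int main()
{
    for (int value = 0; value <= 0xFF; ++value)
    {
        unsigned char byte = static_cast<unsigned char>(value);
        unsigned char normalized = !!byte;   // the convention in question

        // Double negation should collapse every value to a strict 0 or 1...
        assert(normalized == 0 || normalized == 1);

        // ...without changing its truth value.
        assert((normalized != 0) == (byte != 0));
    }
    return 0;
}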

>This is the first time I've ever seen this and I'm interested in any
>reasoning why we shouldn't just remove it to eliminate code bloat.

I can't claim to know what the original author was thinking, but I'm a fan of the double inversion for forcing an integer into a strict boolean 0 or 1.

As Narue implies, the !! is 'converting' the integer value to boolean.

The !x collapses the value to a strict 0 or 1, but inverted. The additional ! reverses it again. So if x = 23 (which is an implied true), !x is 0 and !!x is 1 (strict true).

If you need a strict true, it's obviously useful, but if b is a boolean type, b = x; seems to accomplish the same thing.

p_bIsolated != 0 would seem to be a lot less obscure, whilst meaning about the same thing to the compiler.
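
For illustration, assuming an ordinary int flag (p_bIsolated below is just a stand-in, not the project's actual member), all three spellings produce the same strict result:

#include <iostream>

int main()
{
    int p_bIsolated = 23;                  // any non-zero value is an "implied true"

    int  a = !!p_bIsolated;                // double negation: 23 -> 0 -> 1
    int  b = (p_bIsolated != 0);           // explicit comparison, also yields 1
    bool c = p_bIsolated;                  // bool conversion, also yields true

    std::cout << a << ' ' << b << ' ' << c << '\n';   // prints "1 1 1"
    return 0;
}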

Thanks for the interest, guys. According to K&R, a non-zero value is converted to 0, and a 0 is converted to 1. However, a colleague who writes compilers for a living said that while that interpretation is true, when using the double inversion to set a value the result is not defined and its interpretation is open to the compiler writer. So depending on which compiler you use, you're basically relying on a quirk of the compiler to get the operation you expect. The problem will occur when you move to a compiler that interprets this in a different manner.

In a nutshell, it'll generate more compiled code and may not have the expected behavior.

It seems to me your compiler writing friend is a writer of broken compilers. Please tell us the names of some of the products they have worked on, so we can avoid them in future.

Negation is well defined, so applying it twice isn't going to make it suddenly undefined.

If a compiler generates anything other than 0 or 1 from !!x, then it's broken.

Thanks for setting the record straight, Narue, Salem, and WaltP.

I was thinking of bitwise negation, not boolean negation.

Is the double !! any more efficient than, say, impliedTrue && true or, as Salem suggested, impliedTrue != 0? The last implementation seems to be the most obvious and readable.

Equivalent statements should not be different in efficiency. (And I'd avoid the premature optimization thing here anyway.)

[Footnote: I find the !! easier to understand than its equivalents. Just part of the code I saw while learning, I guess, for better or worse.]

It seems to me your compiler writing friend is a writer of broken compilers. Please tell us the names of some of the products they have worked on, so we can avoid them in future.

Negation is well defined, so applying it twice isn't going to make it suddenly undefined.

If a compiler generates anything other than 0 or 1 from !!x, then it's broken.

Thanks for your input, but to set the record straight: using !! is unspecified behavior, i.e. not covered by the ANSI standard, and therefore open to the compiler writer's interpretation. So the fact that it works in the manner described on some compilers is a quirk of those compilers.

Thanks for your input, but to set the record straight: using !! is unspecified behavior, i.e. not covered by the ANSI standard, and therefore open to the compiler writer's interpretation. So the fact that it works in the manner described on some compilers is a quirk of those compilers.

So your assertion is that the ! operator has unspecified behavior. This would not be a conforming compiler. To me, this seems rather clear:

The result of the logical negation operator ! is 0 if the value of its operand compares unequal to 0, 1 if the value of its operand compares equal to 0. The result has type int. The expression !E is equivalent to (0==E).

So your assertion is that the ! operator has unspecified behavior. This would not be a conforming compiler. To me, this seems rather clear:

From what I understand, you're talking about the assertion that if x = 1, then !x = 0, and if x = 0, then !x = 1. This is defined in the context of a logical condition, but not as an assignment. So while it seems logical that x = !!a would result in either a 0 or a 1, this behavior is not defined by the ANSI standard, and many compiler manufacturers (of the embedded variety) would resolve the logical NOT into its simplest component (something to do with how they manipulate the register by left shifting for the first !, then right shifting for the second !, and utilizing the zero and sign flags on the processor; he did explain it, but it was over my head).

I'm not saying this is ideal, but in the case of the compiler we are using (IAR for ARM, which does have some interesting quirks of its own), it is dangerous to assume that because unspecified behavior works favorably on one compiler it will on another.

From what I understand, you're talking about the assertion that if x = 1, then !x = 0, and if x = 0, then !x = 1. This is defined in the context of a logical condition, but not as an assignment. So while it seems logical that x = !!a would result in either a 0 or a 1, this behavior is not defined by the ANSI standard,

So now the assignment of a 1 or a 0 is not defined by the standard?

Reread what I quoted. Apply it twice.
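
Spelled out mechanically (this is just the quoted rule applied twice, nothing compiler-specific; a sketch, not the project's code):

#include <cassert>

int main()
{
    // For x = 23:  !x  -> 23 compares unequal to 0, so !x  == 0
    //              !!x -> 0 compares equal to 0,    so !!x == 1
    // For x = 0:   !x  -> 0 compares equal to 0,    so !x  == 1
    //              !!x -> 1 compares unequal to 0,  so !!x == 0
    // Either way the result is an int that is exactly 0 or 1.
    int x = 23;
    assert(!!x == 1);   // guaranteed by the quoted wording on any conforming compiler
    x = 0;
    assert(!!x == 0);
    return 0;
}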

---

What happens under the hood is up to the implementer, but to be a standard-compliant compiler, it most definitely does this stuff in a well-defined manner.

[edit]Moved half of the earlier quote here:

and many compiler manufacturers (of the embedded variety) would resolve the logical NOT into its simplest component (something to do with how they manipulate the register by left shifting for the first !, then right shifting for the second !, and utilizing the zero and sign flags on the processor; he did explain it, but it was over my head).

While such tricks can be used under the hood, a conforming compiler needs to be correct first and fancy second. You are telling us your implementer friend has that backwards.

> but in the case of the compiler we are using (IAR for ARM, which does have some interesting quirks of its own)
> it is dangerous to assume that because unspecified behavior works favorably on one compiler it will on another.
You need to differentiate between "undefined behaviour" and "buggy compiler".

If it were truly "undefined behaviour", then using it with ANY compiler would be foolish, no matter how "favourable" the outcome. Just like fflush(stdin) and its popularity with the "works for me" crowd.

!! remains well defined, even if your current compiler makes a mess of it. Extrapolating that to "all compilers" is just plain wrong. If you want to avoid the notation on your project, then that's entirely up to you. But if you go round saying "!! is undefined", then you're going to get rebuttals.

It's the C standard which ultimately states what is (or isn't) defined - you should read it sometime.
Not ad-hoc observations based on looking at what one particular compiler is doing.

Embedded compilers are notoriously buggy compared to your average PC compiler (I've found a good few compiler bugs in my time).

> Please tell us the names of some of the products they have worked on, so we can avoid them in future.
Still waiting for that list of compilers written by your 'friend', doddware.
