Came across this showbits() function in the book I'm using to learn C. None of my compilers recognize it, even though the actual code for the function is given in the book as well.
I tried to combine the two by putting the function's code in a header file, but I have been unsuccessful so far because I don't quite understand bitwise operators. Please help me get this to compile if you know what is going on in this function's code.
Thanks

/* prog 2.1 */
#include <stdio.h>
#include "prog2.1.h"

int main()
{
    unsigned char j;

    for (j=0; j<=5; j++)
    {
        printf ("\nDecimal %d is same as ",j);
        showbits(j);
    }
    return 0;
}
/* prog2.1.h */
void showbits (unsigned char n)
{
	unsigned char i,k, andmask;

	for (i = 7; i >= 0; i--)
	{
		andmask = i << 1;
		k = n & andmask;

		k == 0? printf ("0"):printf ("1");
	}
}

Try something like the code below; it worked on my system.

#include <stdio.h>

/* Print the 8 bits of n, most significant bit first. */
void showbits (unsigned char n)
{
	unsigned char i = 8;

	for (; i > 0; i--)
	{
		/* Test bit (i - 1); counting down to 1 instead of 0
		   avoids the unsigned-wraparound trap. */
		n & (1 << (i - 1))? printf ("1"):printf ("0");
	}
}

int main()
{
    unsigned char j = 56;

    showbits(j);
    return 0;
}

Came across this showbits() function in the book I'm using to learn C.

The code from your book is broken. i is an unsigned type, which means that it can never be less than 0, so the condition i >= 0 is always true and the loop is infinite. You can turn i into a signed type such as int, but that still doesn't solve all of the problems, because the loop doesn't calculate the bit mask properly (i << 1 in particular should be 1U << i).

showbits() can be corrected like so:

#include <stdio.h>

void showbits (unsigned char n)
{
    unsigned char k, andmask;
    int i;

    for (i = 7; i >= 0; i--) {
        andmask = 1U << i;
        k = n & andmask;

        k == 0? printf ("0"):printf ("1");
    }
}

But the need for both a type change and a fundamental algorithm modification calls into question the quality of your book.
